Desktop 7.1
Digital Route AB shall have no liability for any errors or damage of any kind resulting from the use of this document.
DigitalRoute® and MediationZone® are registered trademarks of Digital Route AB. All other trade names and marks mentioned
herein are the property of their respective holders.
Table of Contents
1. Introduction
   1.1. Prerequisites
   1.2. Execution Context
   1.3. Commands
      1.3.1. Agent Command
2. Desktop Overview
   2.1. Security
      2.1.1. Login/Logout
      2.1.2. Locks
      2.1.3. Connection Failure
      2.1.4. Encryption
   2.2. Administration and Management
      2.2.1. Desktop Background Color
      2.2.2. Dynamic Update
      2.2.3. Folders
      2.2.4. Configuration Naming
      2.2.5. Date and Time Format Codes
      2.2.6. Properties used for Desktop
      2.2.7. Text Editor
      2.2.8. Configuration List Editor
      2.2.9. UDR Browser
      2.2.10. Meta Information Model
   2.3. Desktop User Interface
      2.3.1. Tabs
      2.3.2. Menus and Buttons
      2.3.3. Configuration Navigator
      2.3.4. Status Bar
3. Configuration
   3.1. Menus and Buttons
      3.1.1. Configuration Menus
      3.1.2. Configuration Buttons
   3.2. Alarm Detection
      3.2.1. Alarm Detection Menus
      3.2.2. Alarm Detection Buttons
      3.2.3. Defining an Alarm Detection
4. Working with workflows
   4.1. Workflow
      4.1.1. Workflow Types
      4.1.2. Multithreading
      4.1.3. Workflow Menus
      4.1.4. Workflow Buttons
      4.1.5. Agent Pane
      4.1.6. Workflow Template
      4.1.7. Workflow Table
      4.1.8. Workflow Properties
      4.1.9. Validation
      4.1.10. Version Management
      4.1.11. Workflow Monitor
      4.1.12. Deactivation Issues
   4.2. Workflow Group
      4.2.1. Creating a Workflow Group Configuration
      4.2.2. Managing a Workflow Group
      4.2.3. Workflow Group States
      4.2.4. Suspend Execution
      4.2.5. Suspend Execution Editor
1. Introduction
MediationZone® is a data mediation foundation, based on a distributed real-time architecture on which any type of mediation functionality can be deployed.
The system is based on workflow technology, where mediation processes can be modeled in a graphical user interface. Workflow activities are performed by software Agents that are linked into flows providing the required mediation functionality.
1.1. Prerequisites
The reader of this document should be familiar with:
• Databases
• Distributed systems
For information about Terms and Abbreviations used in this document, see the Terminology document.
1.2. Execution Context
There are two kinds of Execution Contexts: one that can execute any type of workflow, and one that can run stand-alone. The stand-alone version only works with real-time workflows that are configured not to depend on external entities. The purpose of a stand-alone workflow is to allow it to run without relying on the platform. For example, assume a work environment where either the network is unreliable, or the workflow must guarantee uptime even if the platform, for some reason, has terminated. If the platform is down, a stand-alone Execution Context keeps track of all events that occurred, and once the platform is up and running again, these events are propagated to the platform. Debug events and internal events for statistics are not remembered.
An Execution Context features a Web Interface showing the running workflows. The Web Interface should only be used if the platform process is unavailable, or if the user is unable to stop a workflow due to a communication failure between the platform and the Execution Context.
1.3. Commands
A workflow agent may support execution of commands while it is executing. Such commands are agent specific and can be invoked either from the command-line tool mzsh or from the Workflow Monitor. A command will in most cases affect the data in the workflow in some way, for instance by flushing an internal cache of data to downstream agents.
2. Desktop Overview
Desktop is the user interface application that enables you to manage, navigate, and monitor MediationZone®. With Desktop you create workflows. A workflow is a set of agents that are connected to each other and represent a flow of data processing. All the agents in a workflow operate on a specific data type, and most agents need to know the structure (the introspection) of the data in order to operate properly.
This chapter describes applications, features and settings used within the MediationZone® Desktop.
2.1. Security
2.1.1. Login/Logout
When the MediationZone® Desktop application is started, a login window is presented. In order to gain access to the Desktop, the user has to be authenticated by supplying a Username and a Password. Once the Username and Password have been successfully entered, the Desktop will be available.
Depending on the logged-in user, access is granted to different parts of the system. Note that all parts of Desktop are visible to all users, regardless of permission restrictions. Configuration and operation options may, however, be disabled.
To restrict a user to a single Desktop login, add the following line to the platform.xml file:
<property name="mz.security.user.restricted.login" value="true"/>
The name of the logged in user and the name of the MediationZone® system that Desktop has connected
to are available in the status area at the bottom of Desktop.
A login banner can be added to the login window. The purpose of the login banner is to provide information to the user before logging in. To enable the login banner, add the property mz.security.login.banner to the platform.xml file. The value of the property is the name of a file containing the text that should be displayed in the banner. The text in the login banner can be formatted using HTML tags.
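As a sketch only, assuming a banner file named banner.txt (both the file name and its location are illustrative assumptions, not mandated by the product), the property could look like this:

```xml
<!-- In platform.xml: point the login banner property at a text file -->
<property name="mz.security.login.banner" value="banner.txt"/>
```

The banner file itself may then contain HTML-formatted text, for example a heading such as <h2>Authorized use only</h2> followed by a short paragraph.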
For further information on configuration spaces, see the Configuration Spaces document.
To be able to do this, the desktop.xml file must be updated to describe which systems are available. First open the desktop.xml file in a text editor. Duplicate the configuration element, one below the other, so that there are two configuration elements.
The properties that may be added are defined in the desktop.xml file and are modified in order to comply with the relevant MediationZone® instance.
Example 1.
<configlist prompt="true">
<config name="Desktop1">
.
.
.
<property name="pico.rcp.platform.host" value="10.0.1.33"/>
<property name="pico.rcp.platform.port" value="4111"/>
</config>
<config name="Desktop2">
.
.
.
<property name="pico.rcp.platform.host" value="10.0.1.33"/>
<property name="pico.rcp.platform.port" value="4192"/>
</config>
</configlist>
2.1.2. Locks
Configurations that you edit are locked against manipulation by other users. When you open a locked Configuration, a message box appears with information about the user that has access to it and can edit it. To edit a locked, or read-only, Configuration, you can save a copy of it under a different name.
Locks are not persistent. If the system is restarted, all locks will be forgotten.
2.1.4. Encryption
A Configuration in MediationZone® is persisted using XML and is therefore more or less available in readable form to any user (see Section 7.3, “Configuration Browser”). Some Configurations may be sensitive and possibly contain descriptions that are proprietary and must be protected. To protect such Configurations, MediationZone® features the ability to encrypt Configurations using a pass-phrase. A Configuration will thereby only be readable provided that the pass-phrase is known by the user. In case the pass-phrase is lost, the Configuration should be considered lost as well.
There are Configurations that generate information to the system, for example the Ultra format that renders UDRs. A user can have access to the UDRs without knowing the pass-phrase for the Configuration source, by setting the user group execute permission. A user can also import a format or analysis package for which execute permission is configured.
Encrypted Configurations retain their encryption and pass-phrase across export and import. This means
that in order to open a Configuration that is imported from another system, you need its pass-phrase.
The Database profile and some of the agents can use passwords from External References. These can
be encrypted, either by using the default key, or by using a crypto service keystore file. See Section 9.5.4,
“Using passwords in External References” for further information.
2.2. Administration and Management
2.2.1. Desktop Background Color
To change the Desktop application background color, add the following text into $MZ_HOME/etc/desktop.xml, where value can be any of the following colors: blue, green, yellow, orange, red, darkblue, darkgreen, magenta, or darkred.
In addition, to tell the difference between different spaces, you can vary the background color of each
space. For further information on configuration spaces, see the Configuration Spaces document.
2.2.2. Dynamic Update
To be able to dynamically update TCP/IP Host and Port parameters, you need to set them to either Default or Per Workflow in the Workflow Properties dialog box. See Figure 83, “The Workflow Table Tab”.
To update, select Dynamic Update from the Edit menu. On the title bar of the monitor dialog box, the text Dynamic Update followed by a number appears. The number represents the number of times that you have updated the workflow configuration while it has been running, that is, since the last time you started it.
2.2.3. Folders
Folders enable the user to categorize Configurations, and simplify their maintenance and operation. Folders could, for instance, be created based on traffic type, on decoding for a specific network element, or on geographic location.
MediationZone® includes a system folder named Default. This folder cannot be renamed nor removed.
2.2.4. Configuration Naming
Some named items in the MediationZone® environment are used when constructing file names. To avoid potential conflicts in the file systems, MediationZone® will convert any illegal characters when constructing the file names.
The following characters are considered to be legal. Any other character will result in a validation error.
• a-z
• A-Z
• 0-9
• - (dash)
• _ (underscore)
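As an illustration only, the legal-character rule above can be sketched in Java. The class name, and the choice of '_' as the replacement character, are assumptions for the sketch, not MediationZone®'s actual implementation:

```java
// Hypothetical sketch of the legal-character rule described above.
// The replacement character '_' is an assumption, not the documented behavior.
public class NameCheck {
    // True if the name contains only a-z, A-Z, 0-9, '-' and '_'.
    static boolean isLegal(String name) {
        return name.matches("[a-zA-Z0-9_-]+");
    }

    // Replaces every character outside the legal set with '_'.
    static String sanitize(String name) {
        return name.replaceAll("[^a-zA-Z0-9_-]", "_");
    }

    public static void main(String[] args) {
        System.out.println(isLegal("My_Config-1"));   // true
        System.out.println(sanitize("My Config#1"));  // My_Config_1
    }
}
```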
MediationZone® has an internal key for every Configuration. This key is used to identify the Configuration. Renaming a Configuration will not change this key. The key is constructed from the system name and the date when the Configuration was created. The generated key can be viewed by selecting the Show Properties option in the right-click menu in the Configuration Navigator, as well as in the Configuration Browser and Configuration Tracer.
2.2.5. Date and Time Format Codes
The date syntax conforms to the Java class SimpleDateFormat. This section contains a summary only. For a full description, see:
http://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html
Example 2.
The following examples show how date and time patterns are interpreted in the U.S. locale. The given date and time are 2001-07-04 12:08:56 local time in the U.S. Pacific Time zone.
Format Example
"yyyy.MM.dd G 'at' HH:mm:ss z" 2001.07.04 AD at 12:08:56 PDT
"EEE, MMM d, ''yy" Wed, Jul 4, '01
"h:mm a" 12:08 PM
"hh 'o''clock' a, zzzz" 12 o'clock PM, Pacific Daylight Time
"K:mm a, z" 0:08 PM, PDT
"yyyyy.MMMMM.dd GGG hh:mm aaa" 02001.July.04 AD 12:08 PM
"EEE, d MMM yyyy HH:mm:ss Z" Wed, 4 Jul 2001 12:08:56 -0700
"yyMMddHHmmssZ" 010704120856-0700
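The rows above can be reproduced with SimpleDateFormat directly. The following sketch (the class name is illustrative) formats the table's reference instant with two of the patterns:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

public class PatternDemo {
    // Formats the table's reference instant (2001-07-04 12:08:56 Pacific time)
    // with the given SimpleDateFormat pattern.
    static String format(String pattern) {
        TimeZone tz = TimeZone.getTimeZone("America/Los_Angeles");
        Calendar cal = Calendar.getInstance(tz, Locale.US);
        cal.clear();
        cal.set(2001, Calendar.JULY, 4, 12, 8, 56);
        SimpleDateFormat fmt = new SimpleDateFormat(pattern, Locale.US);
        fmt.setTimeZone(tz);
        return fmt.format(cal.getTime());
    }

    public static void main(String[] args) {
        System.out.println(format("EEE, MMM d, ''yy"));  // Wed, Jul 4, '01
        System.out.println(format("h:mm a"));            // 12:08 PM
    }
}
```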
The text is color coded according to the following definitions: brown = strings, dark blue = functions, light blue = constants, green = types, orange = user defined types, purple = key words, red = comments.
mz.gui.editor.command    Default value: notepad.exe
This property specifies the command used for starting the editor you want to use for editing APL code or Ultra Formats. If you, for example, want to use Emacs and are running on Windows, the command should be emacs.exe, while in Linux/Unix it should be emacs.
mz.gui.editor.menufontsizes    Default value: 8,10,12,14,18,20,24,36
This property specifies the font sizes you want to be able to choose from when editing APL code or Ultra Formats in the APL Code Editor and the Ultra Format Editor. The current value is displayed between the "-" and "+" magnifying glasses to the left in the button list in the editors, and can be changed by clicking on the magnifying glasses, or by using the key combinations CTRL+ and CTRL-. The current value can also be changed by opening the right-click popup menu and selecting Font Size.
Figure 8.
Note! Setting this property to true may cause the startup of Desktop to be a bit slower.
pico.bootstrapclass    Default value: com.digitalroute.ui.MZDesktopMain
This property specifies the bootstrap class used by the Desktop.
pico.swing Default value: yes
This property specifies how you want notifications to be made for this pico.
For Desktops you usually want notifications to be made in the GUI, and in
that case this property should be set to yes, meaning that Swing will be used.
For other picos, such as the Platform and the Execution Context, this property
will usually be excluded, which will result in notifications being sent to the
console instead.
pico.type Default value: desktop
This property specifies the type of pico instance used for the Desktop. See the
Terminology document for further information.
swing.aatext    Default value: true
This property specifies that Java anti-aliasing should be used, which will improve the display of graphical elements in the GUI.
The default value is false, which means that the start message will be logged. Excluding the property entirely will have the same effect. Setting the property to true will result in no logging of the start message.
pico.logdateformat    Default value: "yyyy-MM-dd"
This property specifies the date format to be used in the log files.
See http://docs.oracle.com/javase/8/docs/api/java/text/SimpleDate-
Format.html for further information.
pico.name Default value: "<pico instance type>"
This property specifies the name of the pico instance used for the Desktop.
If this property is not included, the name of the config element will be
used, see Section 2.2.6.1.4, “Configuration Properties for Multiple Desktops
in Desktop.xml” for further information.
pico.pid Default value: $MZ_HOME/core/log
This property specifies the directory you want the Desktop to write the process ID (PID) file to.
This property specifies the directory you want the Desktop to write standard
errors to.
This property specifies the directory you want the Desktop to write standard
output to.
This property specifies the pico temp directory you want the Desktop to
use.
To enable logging for the Desktop, add the following lines in the desktop.xml file:
where the pico.log.level specifies the log level. Available levels are:
• Finest
• Fine
• Warning
• Severe
• Off (default, which is also the same as having no logging properties included)
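The surrounding text names the pico.log.level property; as a sketch only, the line to add could look like the following, where the chosen level value is illustrative and any additional logging properties are not reproduced here:

```xml
<!-- Enable Desktop logging at the Fine level (level names listed above) -->
<property name="pico.log.level" value="Fine"/>
```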
If you want to configure several Desktops that connect to different Platforms, add <config> sections
for each Desktop in the desktop.xml file according to the following example.
Example 3.
<configlist prompt="true">
<config name="Desktop1">
.
.
.
<property name="pico.rcp.platform.host" value="10.0.1.33"/>
<property name="pico.rcp.platform.port" value="4111"/>
</config>
<config name="Desktop2">
.
.
.
<property name="pico.rcp.platform.host" value="10.0.1.33"/>
<property name="pico.rcp.platform.port" value="4192"/>
</config>
</configlist>
The following properties, which are usually not included in desktop.xml, have to be added for each Desktop:
This property specifies the directory that should be used for the pico-cache that caches information about all running picos, and that is used by all servers and clients.
pico.tmpdir    Default value: ${mz.home}/tmp
This property specifies the temp directory you want to use for your picos.
pico.rcp.server.host Default value: ""
You can also press the CTRL+H keys to perform this action.
You can also press the CTRL+F keys to perform this action.
Find Again Repeats the search for the last entered text in the Find/Replace dialog.
You can also press the CTRL+G keys to perform this action.
Go to Line... Opens the Go to Line dialog where you can enter which line in the code you
want to go to. Click OK and you will be redirected to the entered line.
You can also press the CTRL+L keys to perform this action.
Show Definition If you right click on a function in the code that has been defined somewhere
else and select this option, you will be redirected to where the function has
been defined.
If the function has been defined within the same Configuration, you will simply
jump to the line where the function is defined. If the function has been defined
in another Configuration, the Configuration will be opened and you will jump
directly to the line where the function has been defined.
You can also click on a function and press the CTRL+F3 keys to perform this
action.
Note! If you have references to an external function with the same name
as a function within the current code, some problems may occur. The
Show Definition option will point to the function within the current
code, while the external function is the one that will be called during
workflow execution.
Show Usages If you right click on a function where it is defined in the code and select this option, a dialog called Usage Viewer will open and display a list of the Configurations that are using the function.
You can also select a function and press the CTRL+F4 keys to perform this
action.
UDR Assistance... Opens the UDR Internal Format Browser from which the UDR Fields may be inserted into the code area.
You can also press the CTRL+U keys to perform this action.
MIM Assistance... Opens the MIM Browser from which the available MIM Resources may be
inserted into the code area.
You can also press the CTRL+M keys to perform this action.
Import... Imports the contents from an external text file into the editor. Note that the file
has to reside on the host where the client is running.
Export... Exports the current contents into a new file to, for instance, allow editing in
another text editor or usage in another MediationZone® system.
Use External Editor Opens the editor specified by the property mz.gui.editor.command in
the $MZ_HOME/etc/desktop.xml file.
Example 4.
mz.gui.editor.command = notepad.exe
You can also press the CTRL+SPACE keys to perform this action.
Indent Adjusts the indentation of the code to make it more readable.
You can also press the CTRL+I keys to perform this action.
Jump to Pair Moves the cursor to the matching parenthesis or bracket.
You can also press the CTRL+SHIFT+P keys to perform this action.
Toggle Comments Adds or removes comment characters at the beginning of the current line or
selection.
You can also press the CTRL+7 keys to perform this action.
Surround With Adds a code template that surrounds the current line or selection:
• if Condition (CTRL+ALT+I)
To access APL Code Completion, place the cursor where you want to add an APL function, press
CTRL+SPACE and select the correct function or UDR format. In order to reduce the number of hits,
type the initial characters of the APL function. The characters to the left of the cursor will be used as
a filter.
Add Click to open a dialog box where you can add an item to the Configuration list.
Edit Select a row and click Edit; an Update dialog box opens and enables you to modify the
data entry.
Up/Down Select an entry from the Configuration list and click Up or Down to move it to an upper or lower position.
Note! To change the order of any of the appended rows from ascending to descending, or vice versa, click a column's heading. The new order will not be saved for the next time you open this view.
1. UDR types created in the Ultra Format Editor. These include events and sessions for the Aggregation sub-system.
For further information about UDR types and fields, see the MediationZone® Ultra Format Management
user's guide.
Formats created in Ultra Format Editor usually have the following structure:
Note! There are a number of agents, for example Diameter and Inter
Workflow, that have predefined UDR types with corresponding folder
names.
UDR Fields Displays the fields of the UDR type, in a tree structure.
• Read-only - Red
• Default - Blue
Show Optional If enabled, fields declared as optional are displayed in black italic text.
Show Readonly Check to display read-only fields; the text appears in red.
Note! Clearing this check-box also affects the blue text entries. These are
reserved fields that you cannot modify.
Datatype If enabled, only fields that match the selected data type are displayed.
1. Single selection of UDR Type. The UDR Type may be chosen either by double-clicking the UDR Type or by selecting it followed by OK or Apply. OK and double-click dismiss the dialog.
2. Multiple selections of UDR Type. Many UDR Types may be chosen at once by selecting them
followed by OK or Apply. OK or double-click dismisses the dialog.
3. Single selection of UDR fields. Same as for UDR Type, but fields are selected instead.
4. Multiple selections of UDR field. Same as for UDR Type but fields are selected instead.
5. Field input assistance. Fields may be inserted in the target text field by double-clicking them or
selecting them followed by Apply. The OK button is not available.
MIM is based on the fact that individual agents may supply information during run-time that other agents may need to use. The MIM information is used in various parts of MediationZone®, for instance when selecting which MIM resources to use in a file name, or when selecting what data should identify UDRs delivered to ECS.
MediationZone® uses Java Management Extensions (JMX) to monitor MIM tree attributes in running
workflows. For more information, refer to Section 8.3, “Workflow Monitoring”.
MIM resources for each agent have their values assigned at different times, depending on their type. As an example, the Disk collection agent publishes the MIM resource Source Filename, which is set at Begin Batch. The agent will put the name of the collected file in this resource before it starts collecting the file.
Batch Batch MIMs are dynamic values, populated during batch processing. An example of such a
value is outbound UDRs.
Global A global MIM value can be accessed at any time during the execution phase of a workflow, for instance static values such as Agent Name.
Header Header MIM values are populated when a batch is received for processing (an agent emits
Begin Batch). For example Source Filename (published by the Disk collection agent).
Trailer Trailer MIM values are populated after a batch is processed (an agent emits End Batch). An
example of such a value is Target Filename (published by the Disk forwarding agent).
By default, all agents may publish the following MIM resources depending on their introspection types.
Agent Name This MIM parameter contains the name of the agent as defined in the
Workflow Editor. All agents publish this resource. The value is set when
the workflow starts executing.
<route name> Queue Full Count This MIM parameter contains the number of queue state changes. The value is updated each time a route's queue enters "full" state.
Inbound UDRs (or Bytes) This MIM parameter contains the number of incoming UDRs (or Bytes)
since last Begin Batch. The value is updated continuously during batch
processing. This MIM is not valid for collection agents.
Note! Some MIM resources will not be available until the agent to which they belong has been
configured.
There are also pico specific MIM resources representing information about the picos' JVMs. These
are:
Available CPUs This MIM parameter states the number of processors that are available for
the JVM.
There are workflow specific MIM resources representing information of a running workflow as well.
These are:
Batch Cancelled This MIM parameter states if the current batch has been cancelled.
Transaction ID Each batch closed by a MediationZone® workflow receives a unique transaction ID; this applies to cancelled batches as well. This MIM parameter contains the unique transaction ID.
Workflow ID This MIM parameter contains the unique identification name of every workflow
in MediationZone® .
Workflow Name This MIM parameter contains the name of the current workflow. The value is set
when the workflow is activated.
The available MIM resources are displayed, ordered in a tree structure. The MIM resource may be chosen either by double-clicking the MIM resource or by selecting it followed by Apply. Cancel dismisses the dialog.
2.3. Desktop User Interface
• The left part of the Desktop window includes the Configuration Navigator pane. The Configuration
Navigator holds all Configurations in MediationZone® and enables easy navigation between the
different Configurations. For further information, see Section 2.3.3, “Configuration Navigator”.
• The right part of the Desktop window holds all Configurations, Inspectors and Tools that have
been opened, each of them shown in a separate tab.
• Inspection - When Workflows are executed, the agents may generate various kinds of data, such as logging errors into the System Log, or sending erroneous data to the Error Correction System (ECS). The inspectors allow the user to view such information and are further described in Section 6, “Inspection”.
• Tools - MediationZone® provides different tools to, for example, view logs, statistics, and pico
instance information, and to import and export Configurations. The tools are described in Section 7,
“Tools”.
For information on how to create a new configuration, and how to open an inspector or a tool, refer
to Section 2.3.2.2, “Desktop Standard Buttons”.
2.3.1. Tabs
Configurations and tools are opened in separate tabs, in the right part of the MediationZone® Desktop window. However, dialogs that are opened from a tool or configuration, e.g. an Agent configuration dialog or the MIM Browser, will be opened in a dialog box and not in another tab.
To the right of the list with tabs, you have a button for viewing open tabs, which may be useful
in case you have many configurations open at the same time.
Click on this button and a menu will open, containing all the currently opened tabs.
Figure 17.
When exiting from Desktop, all tabs will be closed, and unless you have set the property mz.gui.restart.tabs to true, they will not be remembered and restored the next time the Desktop is started. See Section 2.2.6.1.1, “Default Properties in Desktop.xml” for further information.
To reorder the tabs, click a tab and drag it to a different position along the top of the window.
To move a tab to a separate Desktop window, click it and then drag it outside the current window.
This can be useful when running and analyzing several workflows in the workflow monitor, to be able
to view the monitors side by side. If there is only one tab open in the Desktop and the tab is moved to
a separate window, the original Desktop window will be closed. A Tool, Inspector or Configuration
can only be open in one tab at a time.
Desktop Main Menus
The Desktop main menus are found at the top of the Desktop window. The menus are dynamic and change according to the type of Configuration, Inspector or Tool that has been opened in the currently displayed tab. Refer to Section 3, “Configuration”, Section 6, “Inspection” and Section 7, “Tools” for more details about the specific menus and menu items. For a description of the Desktop standard menus, see Section 2.3.2.1, “Desktop Standard Menus”.
The following figure shows the main menus that are visible for a workflow configuration.
Desktop Buttons
The Desktop buttons are located in the upper left part of the MediationZone® Desktop window. Refer to Section 2.3.2.2, “Desktop Standard Buttons” for a description of the buttons.
Tab Right-Click Menu
There are different closing options available for a tab and these are selected from a right-click menu. See Section 2.3.2.3, “Tab Right-Click Menu” for more information.
Tab Button Panel
The button panel is visible at the top of a tab. It is dynamic and changes according to the type of Configuration, Inspector or Tool that has been opened in the currently displayed tab. For a description of the specific buttons, refer to Section 3, “Configuration”, Section 6, “Inspection”, and Section 7, “Tools”.
The following figure shows the button panel visible for a workflow configuration.
Item Description
Change Password... From the File menu, select Change Password and the Change Password dialog box opens.
Exit Select to exit from MediationZone® Desktop. To log in as another user, you have to exit and then start the MediationZone® Desktop again.
Note! If you press F1, you will open the relevant topic for the dialog or window you currently
have active. However, for the various configurations in the Configuration menu, you may have
to scroll to the right section.
Show/Hide Configuration Navigator To show or hide the Configuration Navigator pane in the left area of the Desktop. The Configuration Navigator is described in more detail in Section 2.3.3, “Configuration Navigator”. You can also toggle this option by pressing CTRL+1.
New Configuration To create a new MediationZone® configuration. The configuration is opened in a tab in the right part of the Desktop window. The different configuration types that can be created are described in more detail in Section 3, “Configuration”. You can press CTRL+F1 to open this menu.
Inspection To open a MediationZone® Inspector. The Inspector is opened in a tab in the right part of the Desktop window. The MediationZone® Inspectors are described in more detail in Section 6, “Inspection”. You can press CTRL+F2 to open this menu.
Tools To open a MediationZone® Tool. The Tool is opened in a tab in the right part of the Desktop window. The MediationZone® Tools are described in more detail in Section 7, “Tools”. You can press CTRL+F3 to open this menu.
If you have developed your own DTK plugins, they are available in the Extensions menu, and you can press CTRL+F4 to open this menu.
Hint! You can also activate the button panel by pressing the CTRL key twice. You can then use
the arrow keys to move between the different buttons in the panel.
In the Configuration Navigator you can also filter which Configurations are shown, by selecting Configurations of a specific type. The Configuration Navigator can be hidden or visible. By default, it is visible and all Configurations are displayed.
The Configuration Navigator supports a set of operations that can be performed for the Configurations
by using the right-click menu. For each Configuration you can also open a Properties dialog where
permissions can be set and where you can view history, references and basic information. Refer to
Section 7.3.5, “Properties” for more information.
To show or hide the Configuration Navigator pane, click the Show/Hide Configuration
Navigator button in the upper left part of the Desktop.
• Default - where all configurations are stored if no other folder is specified when saving the config-
uration.
• SystemTask - includes workflows for performing different background routines. For further inform-
ation, refer to Section 4.1.1.4, “System Task Workflows”.
Each folder listed in the Configuration Navigator pane has a number attached to its name. This number indicates how many Configurations are stored in the folder.
Cut Select this option to put one or more Configurations on the clipboard for
moving the Configuration to another location. Select the menu option Paste
in the folder where the Configurations should be stored.
This option is not applicable if the Configuration is locked. For further in-
formation see Section 2.1.2, “Locks”.
Copy Select this option to put one or more Configurations on the clipboard for copying them to another location. Select the menu option Paste in the folder where the copied Configurations should be stored.
Paste Select this option to store Configurations that have been cut or copied to
the clipboard into a folder.
Delete... Select this option to delete the selected Configuration(s). If the Configuration
is referenced by another Configuration, a warning message will be displayed,
informing you that you cannot remove the Configuration. For further inform-
ation see Section 7.3.5.3, “The References Tab”.
Rename... Select this option to change the name of the selected Configuration. Take special care when renaming a Configuration. If, for example, an APL script is renamed, workflows that use this script will become invalid. This is especially important to know when renaming folders containing many Ultra format or APL Configurations: renaming such a folder will make all referring Configurations invalid.
Encrypt... Select this option to encrypt the selected Configurations.
Decrypt... Select this option to decrypt the selected Configurations.
Validate... Select this option to validate the Configuration. A validation message will
be shown to the user.
Show Properties Select this option to launch the Properties dialog for the selected Configur-
ation. For further information, see Section 2.3.3.2, “Properties”.
Documentation Select this option to launch the Documentation dialog for the selected
Configuration. For further information, see Section 2.3.3.3, “Documenta-
tion”.
2.3.3.2. Properties
To open the Properties dialog, right-click on a Configuration and then select Show Properties.
This dialog contains four different tabs: Basic, which contains basic information about the Configuration; Permissions, where you set permissions for different users; References, where you can see which other Configurations are referenced by the selected Configuration, or refer to it; and History, which displays the revision history for the Configuration. The Basic tab is displayed by default.
The Basic tab is the default tab in the Properties dialog and contains the following information:
• Modify the permissions of user groups to read, modify, and execute the Configuration.
Modified by Displays the user name of the user that made the last modifications to the Configuration.
Modified Displays the date when the Configuration was last modified.
If you want to use the information somewhere else, you can highlight it and press CTRL+C to copy it to the clipboard.
The Permissions tab contains settings for what different user groups are allowed to do with the Configuration:
As access permissions are assigned to user groups, and not individual users, it is important to make
sure that the users are included in the correct user groups to allow access to different Configurations.
R W X E Permission Description
R - - - Allowed only to view the Configuration, given that the
user is granted access to the application.
- W - - Allowed to edit and delete the Configuration.
- - X - Allowed only to execute the Configuration.
R W - - Allowed to view, edit and delete the Configuration, given
that the user is granted access to the application.
- W X - Allowed to edit, delete and execute the Configuration.
R - X - Allowed to view and execute the Configuration, given
that the user is granted access to the application.
R W X - Full access.
- - - E Encrypted.
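As a rough illustration only (this is not MediationZone code; the class and method names below are invented), the permission combinations in the table above can be thought of as independent flags:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConfigPermissions:
    """Hypothetical model of the R/W/X/E flags in the table above."""
    read: bool = False       # R: view the Configuration
    write: bool = False      # W: edit and delete the Configuration
    execute: bool = False    # X: execute the Configuration
    encrypted: bool = False  # E: the Configuration is encrypted

    def can_view(self) -> bool:
        # Viewing also presumes the user is granted access to the application
        return self.read

    def can_edit(self) -> bool:
        # The W flag covers both editing and deleting
        return self.write

    def can_execute(self) -> bool:
        return self.execute

full_access = ConfigPermissions(read=True, write=True, execute=True)
print(full_access.can_view(), full_access.can_edit(), full_access.can_execute())
```

Since permissions are assigned to user groups rather than individual users, such a flag set would apply per group.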
The References tab contains information about which other Configurations the current Configuration refers to, and which other Configurations refer to the current Configuration:
The References tab contains two sub-tabs: Used By, which displays all the Configurations that use the current Configuration, and Uses, which displays all the Configurations that the current Configuration uses.
If you want to edit any of the Configurations, you can double click on the Configuration to open it for
editing.
If you want to clear the history for the Configuration, click on the Clear Configuration History button.
The version number will not be affected by this.
2.3.3.3. Documentation
To open the Documentation dialog, right-click on a configuration and then select Documentation.
In this dialog, you can provide information on the selected configuration, for example, a description
and the purpose of the configuration. You can use markdown syntax if preferred. The text entered is
then included in the automated documentation that you can generate using the Documentation Generator tool. When you have completed the text you want to include, click OK to save. For further information on the Documentation Generator tool, see Section 7.5, “Documentation Generator”.
Actions The first section shows desktop actions. It could either be a text message with
user information such as "Saved myWorkflow" or a progress bar when data is
being loaded from the platform to the desktop.
Operations Information An icon for displaying the status of the Configuration Monitor. While operations are being performed, for example when workflows are in building state, the icon will indicate that the operations are in progress. If any warnings have been detected during the operations, a warning sign is shown on top of the Configuration Monitor icon. When pressing the icon, the Configuration Monitor will be displayed. For more information regarding the Configuration Monitor, see Section 7.4, “Configuration Monitor”.
User Specifies the user that is logged in to the desktop.
System Information Specifies the system name as well as the host and port that the desktop is connected to.
3. Configuration
This section includes a detailed description of the following Configuration types:
• Alarm Detection
• Audit Profile
• Database Profile
• Redis Profile
• Workflow
• Workflow Group
Other Configuration types are described separately in their respective user's guide.
To create a new MediationZone® configuration, click the New Configuration button. To open
an existing MediationZone® Configuration, double-click a Configuration in the Configuration Navig-
ator, or right-click a Configuration and then select Open Configuration(s).... The Configuration will
be visible in a tab in the right part of the Desktop window.
The Desktop standard menus are described in Section 2.3.2.1, “Desktop Standard Menus”.
Item Description
New Creates a new Configuration that will be visible in a new tab. You can only create
a Configuration of the same kind as the one in the tab you are working in. To create
another type of Configuration, click the New Configuration button in the upper
left part of the MediationZone® Desktop window.
Open... Opens a saved Configuration that will be visible in a new tab. You can only open
a Configuration of the same kind as the one in the tab you are working in. To open
another type of Configuration, double-click a Configuration in the Configuration
Navigator, or right-click a Configuration and then select Open Configuration(s)....
Save Saves the Configuration.
After clicking Save, a dialog box opens. In the Version Comment text box, type
a description of the changes that you have made, then click OK. This information
will be visible in the Historical Configurations panel, for further information see
Section 3.1.1.3, “The View Menu”.
Note! Use only a-z, A-Z, 0-9, "-" and "_" to name a Configuration.
Close Select to close the tab that includes the Configuration. If you have not saved the
current Configuration before clicking Close, a pop-up message will remind you
to do so.
Change Password... After clicking Change Password..., the Change Password dialog box opens.
Exit Select to exit from MediationZone® Desktop. To log in as another user, you have to exit and then start the MediationZone® Desktop again.
Item Description
Set Permissions... To set the owner of the Configuration as well as Read, Write and Execute per-
missions for the groups accessing the Configuration. For further information, see
Section 7.2, “Access Controller”.
Item Description
History Each time a Configuration is saved, a new version is created. Many versions of a Config-
uration may exist but only the last version can be modified and executed. The old versions
are kept for log and rollback reasons. Select History to examine old Configurations in
the Historical Configurations panel. The panel will appear at the bottom of the Config-
uration tab and holds a list of all versions. Arrow buttons are used to step back and for-
ward between the different versions. Rollback to an old version of a Configuration is
handled by opening and saving the old version. A comment is automatically added,
stating that the current version was created from a historic one.
References Click to see the Reference Viewer listing references to and from the active Configuration.
The Reference Viewer includes the following tabs:
• Used By: Displays a list of other Configurations that refer to the Configuration. For
example: a workflow group that refers to a workflow.
• Uses: Displays a list of other configurations that the Configuration refers to. For ex-
ample: a workflow configuration that refers to a specific profile.
• Access: Displays the group of users that may access the configuration, and the user
that created (owns) the configuration.
Button Description
New Click to create a new configuration that will be visible in a new tab. You can
only create a configuration of the same kind as the one in the tab you are working
in. To create another type of configuration, click the New Configuration button
in the upper left part of the MediationZone® Desktop window.
Open... Click to open a saved Configuration that will be visible in a new tab. You can
only open a Configuration of the same kind as the one in the tab you are working
in. To open another type of Configuration, double-click a Configuration in the
Configuration Navigator, or right-click a Configuration and then select Open
Configuration(s)....
Save Click to save the Configuration.
Set Permissions... To set the owner of the Configuration as well as Read, Write and Execute per-
missions for the groups accessing the Configuration. For further information,
see Section 7.2, “Access Controller”.
References Click to see the Reference Viewer listing references to and from the active
Configuration. The Reference Viewer includes the following tabs:
• Used By: Displays a list of other Configurations that refer to the Configuration.
For example: a workflow group that refers to a workflow.
• Uses: Displays a list of other configurations that the configuration refers to.
For example: a workflow configuration that refers to a specific profile.
• Access: Displays the group of users that may access the Configuration, and
the user that created (owns) the configuration.
An alarm can be in either one of two states: open or closed. An open alarm is an indication of a certain occurrence or situation that has not yet been resolved. A closed alarm is a resolved indication.
To create a new Alarm Detection configuration, click the New Configuration button in the upper left
part of the MediationZone® Desktop window, and then select Alarm Detection from the menu.
To open an existing Alarm Detection Configuration, double-click the Configuration in the Configur-
ation Navigator, or right-click a Configuration and then select Open Configuration(s)....
There is one menu item that is specific for Alarm Detection, and it is described in the following section.
Item Description
Workflow Alarm Value Names... To define a variable to use in the APL code, see the APL Reference Guide, and Section 3.2.3.1.5, “Workflow Alarm Value” for further information.
• An object such as host, pico instance, or workflow, that the alarm should supervise
• The parameter that you want the alarm to supervise. For example: Statistics value.
• Two conditions within an alarm guard the same object: WF, Host, or Pico Instance.
To define an alarm:
1. Create an Alarm Detection configuration by clicking the New Configuration button in the upper
left part of the MediationZone® Desktop window, and then selecting Alarm Detection from the
menu.
2. Click on the Edit menu and select the Validate option to check if your Configuration is valid.
3. Click on the Edit menu and select the Workflow Alarm Value Names option to define a variable
you can use in the APL code, see the APL Reference Guide, and in the Workflow Alarm condition,
see Section 3.2.3.1.5, “Workflow Alarm Value”.
4. Enter a statement that describes the Alarm Detection that you are defining in the Description field.
5. In the Severity drop-down list, select the priority that the alarm should have.
6. Use the Alarm Detection Enabled check box to switch alarm detection on or off.
7. At the bottom of the Alarm Detection Configuration click on the Add button.
Note!
1. An alarm is generated only if ALL conditions in the Alarm Detection are met.
• System Event
• Workflow Throughput
The Host Statistic Value condition enables you to configure an alarm detection for the Host Statistic
parameters. For further information see Section 7.12.1, “Host Statistics”.
Example 5.
Note! The parameters in the following example do not apply to any specific system and are
presented here only to enhance understanding of the alarm condition.
You want the system to generate a warning if the primary host is being overworked.
The Alarm will be triggered only if the Statistic Value has been higher than 1200 throughout
the last 3 hours. Note that if a momentary drop in value has occurred during the last 3 hours,
the alarm will not be triggered.
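The "sustained for the whole period" semantics of this example can be sketched as follows. This is illustrative Python only, not product code, and the function name is invented:

```python
def sustained_above(samples, limit):
    """True only if every sample in the observation window exceeds the limit.

    A single momentary drop below the limit suppresses the alarm, as in the
    example above.
    """
    return len(samples) > 0 and all(s > limit for s in samples)

# Hypothetical Statistic Value samples collected over the last 3 hours:
print(sustained_above([1300, 1450, 1390], limit=1200))  # sustained -> alarm
print(sustained_above([1300, 1100, 1390], limit=1200))  # momentary drop -> no alarm
```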
The System Event condition enables you to setup an Alarm Detection for the various MediationZone®
Event types.
Type Select an event-related reason for an alarm to be invoked. For detailed description of every
event type see Section 5.5, “Event Types”.
Filter Use this table to define filter criteria for the alarm messages that you are interested in. Double-click an entry to open the Edit Match Value dialog box, and click the Add button to add a value.
Limits See Section 3.2.3.1.1, “Host Statistic Value”.
Example 6.
Note: The parameters in the following example do not apply to any specific system and are
presented here only to enhance understanding of the alarm condition.
A Telecom provider wants the MediationZone® system to generate an alarm if a certain workflow fails to write to ECS more than 3 times during the last 24 hours.
2. On the Edit Alarm Condition dialog box, from the Event Type drop-down list, select Workflow
State Event.
3. On the Filter table double-click Workflow Name; the Edit Match Value dialog box opens.
5. Enter a limit of Occurred more than 3 times during the last 24 hours.
The Alarm will be triggered by every 4th occurrence of a "Workflow State Event" during the
last 24 hours.
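The counting rule in this example, "occurred more than 3 times during the last 24 hours", can be sketched like this. This is an illustration only, not product code, and the names are invented:

```python
def exceeds_occurrences(event_times, now, max_count=3, window=24):
    """True when more than max_count events fall within the last `window` hours.

    event_times and now are expressed in hours on a common clock.
    """
    recent = [t for t in event_times if now - window <= t <= now]
    return len(recent) > max_count

# Hours at which a hypothetical "Workflow State Event" occurred:
print(exceeds_occurrences([1, 5, 9, 20], now=21))  # 4 events in window -> alarm
print(exceeds_occurrences([1, 5, 9], now=21))      # only 3 events -> no alarm
```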
The Pico Instance Statistic Value condition enables you to configure an Alarm Detection that guards
the Pico Instance statistic value of a specific EC. For further information about the Pico Instance see
Section 7.8, “Pico Viewer”.
Pico Instance From the drop-down list, select the Pico Instance from which you want to collect statistical data.
Statistic Value See Section 7.12.2, “Pico Instance”
Note! The parameters in the following example do not apply to any specific system and
are presented here only to enhance understanding of the alarm condition.
A Telecom provider wants the system to generate an alarm if the following two events occur
simultaneously:
• Too many files are open on that same particular Pico Instance.
1. Configure an Alarm Detection that supervises EC1 with the Pico Instance Statistic Value
condition. Use this condition twice:
4. Enter a limit of 900000 KB with no time limit. Note! This means that whenever this limit is exceeded, AND the other conditions are met, an alarm is generated.
5. From the Alarm Detection dialog select the alarm condition Pico Instance Statistic Value
once again.
The Unreachable Execution Context condition enables you to configure an Alarm Detection that will alert you if the connection between the platform and the EC that the alarm supervises fails.
Note: Selecting Any from the drop-down list applies the condition to
all the clients.
Unreachable due to normal shutdown Check to invoke an alarm whenever the connection between the platform and the client fails due to a normal shutdown of the client.
Note! The parameters in the following example do not apply to any specific system and
are presented here only to enhance understanding of the alarm condition.
A telecom provider wants the system to generate an alarm if connection to any EC cannot be
re-established within 10 minutes.
1. Configure an Alarm Detection that uses the Unreachable Execution Context condition.
The Alarm will be triggered whenever the system detects a loss of connection between the
platform and one of its ECs, for a period that is longer than 10 minutes.
The Workflow Alarm Value condition is a customizable alarm condition. It enables you to have the Alarm Detection watch over a variable that you create and assign through the APL code. To apply the Workflow Alarm Condition, use the following guidelines:
• Create a variable
1. From the Alarm Detection Editor Edit menu, select Workflow Alarm Value Names; the Workflow Alarm Value dialog box opens.
2. Click the Add button and enter a variable name. For example: CountBillingFiles.
3. Click OK and then close the Workflow Alarm Value dialog box.
consume {
    // Increase the custom alarm value by 1 for each consumed UDR
    dispatchAlarmValue("CountBillingFiles", 1);
    // Route the UDR onwards, unchanged
    udrRoute(input);
}
1. At the bottom of the Alarm Detection Configuration, click Add; the Add Alarm Condition dialog
box opens.
2. From the Alarm Condition drop-down list select Workflow Alarm Value.
3. From the Value drop-down list select the name of the variable that you created.
4. Click Browse to select the Workflow that the Alarm Detection should guard.
5. Configure the Limits according to the description of Figure 50, “The Workflow Alarm Value” and
click OK.
Summation: Check to add up the values reported during the set period. The Alarm Detector compares this total value with the alarm limit (exceeds or falls below), and generates an alarm message accordingly.
Note: Checking Summation means that the During last entry refers to the time period
during which a sum is added up. Once the set period has ended, that sum is compared
with the limit value.
For All workflows: Check to add up the values (see Summation above) of all the
workflows that the alarm supervises. Alarm Detector compares this total value with
the alarm limit (exceeds or falls below), and generates an alarm message accordingly.
Note: Can be checked only when workflow is set to Any.
For further information about Limits see Section 3.2.3.1.1, “Host Statistic Value”.
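The Summation behavior described above can be sketched as follows. This is an illustration only, not product code, and the function is invented:

```python
def summation_alarm(values_in_period, limit, exceeds=True):
    """Add up all values reported during the period, then compare the total
    with the alarm limit (exceeds or falls below) once the period has ended."""
    total = sum(values_in_period)
    return total > limit if exceeds else total < limit

# Three values reported during the "During last" period:
print(summation_alarm([10, 25, 40], limit=50))  # total 75 exceeds 50 -> alarm
```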
The Workflow Execution Time condition enables you to generate an alarm whenever the execution time of a particular workflow, or of all workflows, exceeds or falls below the time limit that you specify.
Workflow The default workflow value is Any. Use this value when you want to apply the condition
to all the workflows. Otherwise, click Browse to select a workflow that you apply the
condition to.
Note: The parameters in the following example do not apply to any specific system and are
presented here only to enhance understanding of the alarm condition.
A telecom provider wants the system to identify a workflow that has recently run out of input,
and to generate an alarm that warns about a too-short processing time.
An alarm is generated whenever an active workflow seems to process data too fast (in less than
2 seconds).
The Workflow Group Execution Time alarm condition enables you to generate an alarm whenever the
execution time of a workflow group exceeds or falls below the time limit that you specify.
Workflow Group Click Browse to enter the address of the workflow group to which you want to apply the alarm.
Note! The parameters in the following example do not apply to any specific system and
are presented here only to enhance understanding of the alarm condition.
You want the system to generate an alarm if a billing workflow group has been active longer
than 3 hours.
1. Configure an Alarm Detection that uses the Workflow Group Execution Time condition.
2. On the Edit Alarm Condition dialog box click Browse to enter the workflow group you want
the alarm detection to supervise.
The Alarm will be triggered if the workflow group has been active longer than 3 hours.
The Workflow Throughput alarm condition enables you to create an alarm if the volume-per-time
processing rate of a particular workflow exceeds, or falls below, the throughput limit that you specify.
Workflow Select the workflow whose throughput value, the processing speed, is to be supervised. For further information about the throughput value calculation, see Throughput Calculation. An alarm is generated if the throughput value is not within the condition limits.
Limits For information about Limits see Section 3.2.3.1.1, “Host Statistic Value”.
Note! The parameters in the following example do not apply to any specific system and
are presented here only to enhance understanding of the alarm condition.
You want the system to warn you when a decreased processing rate is detected.
2. On the Edit Alarm Condition dialog box, click Browse to select the workflow whose processing rate is to be supervised.
The Alarm will be triggered by every occurrence of a workflow slowing down its processing
rate to a throughput that is lower than 50000 units per second.
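The comparison in this example can be sketched as follows. This is an illustration only and does not reproduce the product's Throughput Calculation:

```python
def throughput(units_processed, seconds):
    """Volume-per-time processing rate, in units per second."""
    return units_processed / seconds

rate = throughput(2_400_000, 60)   # 40,000 units per second
print(rate < 50_000)               # below the 50000 limit -> alarm triggers
```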
4.1. Workflow
A workflow configuration enables you to create a workflow consisting of:
• A Workflow Template: the schema of the agents and routes that you draw on the Workflow template.
• A list of workflows that share the same settings and are included in the configuration.
1. Workflow Table: The appearance of the table that displays the list of workflows
2. Error handling
3. Audit settings
4. Execution options
To create a new workflow configuration, click the New Configuration button in the upper left part
of the MediationZone® Desktop window, and then select Workflow from the menu. Select Workflow
Type and then click Create.
• Batch
• Real-Time
• Task
• System Task
In a batch workflow, data is collected by a single collecting agent in a transaction-safe manner: a single batch is collected (only once) and is fully processed by the workflow before the next batch is collected.
Batch workflows are mainly used in post-paid billing systems, for example, for handling batches of UDRs (files).
A batch workflow:
• Stops either when it finishes processing the input, or when being aborted.
In Real-Time workflows most of the collecting agents communicate in a two-way manner; they receive
requests and provide replies.
A real-time workflow:
• Once started, is always active. A real-time workflow is started either manually or by a scheduled
trigger, and stops either manually or due to an error.
• Processes in memory. Transaction safety must be handled prior to collection and after distribution.
• Real-time workflow error handling rarely leads to aborting the workflow. Errors are registered in
system log and the workflow continues to run. Note that you cannot embed an exception within the
processing agents. For further information about real-time agent error handling, see the relevant
agent's user guide.
Note! Real-time workflows use the Inter Workflow agent to forward data to a batch Workflow.
• Alarm Cleaner
• Archive Cleaner
• Configuration Cleaner
• ECS Maintenance
• Statistics Cleaner
• System Backup
1. To open a System Task, double-click a SystemTask workflow or workflow group in the Configuration Browser pane.
You modify all the System Task workflow configurations at template level. The workflow properties
are all set to Final and cannot be modified.
• You can modify a System Task configuration, including its scheduling criteria, but you cannot
create or remove a System Task Configuration.
• The Archive Cleaner Workflow lets you modify only its scheduling criteria. For further in-
formation see Section 4.2.2.5.3, “Scheduling”.
The Alarm Cleaner Workflow enables you to periodically delete old Alarm messages from the database.
To configure the Alarm Cleaner System Task workflow, enter the number of days that define a period
during which an alarm message should remain in the database.
The Archive Cleaner System Task enables you to remove old archived files from the file system.
Archive Cleaner operates according to data that it receives from the Archive profiles.
Note! You can modify only the scheduling criteria of the Archive Cleaner. See the Archive
profile manual. Since scheduling can only be applied to workflow groups, you modify the
Archive Cleaner scheduling from the workflow group configuration.
The Configuration Cleaner enables you to specify the maximal age of an old Configuration before it is removed. See Section 3, “Configuration”.
When the Configuration Cleaner is applied, every space is included. For further information on config-
uration spaces, see the Configuration Spaces documentation.
Note! You cannot remove the most recent Configuration with the Configuration Cleaner, only
historical ones.
1. Select a Configuration type from the table and then click the entry in the Keep column; a drop-down
list appears.
• Versions: Keep only a certain amount of versions of the configurations. For example, the
last 10 versions.
Value Specifies the number of days or versions that represents the period during which configurations are kept.
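The "keep only the last N versions" rule can be sketched as follows. This is an illustration only; the Configuration Cleaner's actual implementation is not shown. As noted above, the most recent version is never removed:

```python
def versions_to_delete(version_numbers, keep):
    """Return the historical versions eligible for cleanup, keeping the
    newest `keep` versions. The current version is always kept."""
    keep = max(keep, 1)
    return sorted(version_numbers)[:-keep]

print(versions_to_delete([1, 2, 3, 4, 5], keep=2))  # -> [1, 2, 3]
```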
The ECS Maintenance System Task enables you to remove old ECS data from the file system. For information about the ECS Maintenance System Task workflow, see the Error Correction System manual.
The Statistics Cleaner enables you to remove old statistics data that has been collected by the Statistics server and stored in the database.
Minute Level Records Specifies the number of days during which a minute-level record should
be kept in the database.
Hour Level Records Specifies the number of days during which an hour-level record should be
kept in the database.
Day Level Records Specifies the number of days during which a day-level record should be
kept in the database.
The System Backup task enables you to create a backup of all the Configurations in MediationZone®. A backup file is saved on the host machine where the platform application is installed.
The System Backup files are stored under $MZ_HOME/backup/yyyy_MM, where yyyy_MM is
translated to the current year and month. The system saves a backup file and names it according to the
following format: backup_<date>.zip.
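Based on the layout described above, the monthly backup directory can be computed as in the sketch below. This is illustrative only; the `<date>` portion of the file name is product-defined and not reproduced here:

```python
import datetime
import os

def backup_dir(mz_home, when):
    """Backups are grouped per month under $MZ_HOME/backup/yyyy_MM."""
    return os.path.join(mz_home, "backup", when.strftime("%Y_%m"))

print(backup_dir("/opt/mz", datetime.date(2024, 3, 15)))
```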
System Backup also enables you to specify the maximal age of backup files before they are removed
from the host disk.
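As an illustration of the naming scheme above, the backup path can be derived as in the following Python sketch. Note that this is illustrative only, not MediationZone code, and the exact `<date>` format is an assumption (yyyyMMdd), since the manual does not spell it out:

```python
from datetime import date

def backup_path(mz_home: str, day: date) -> str:
    """Build a backup file path following the documented scheme:
    $MZ_HOME/backup/yyyy_MM/backup_<date>.zip.
    The <date> part is assumed to be yyyyMMdd."""
    subdir = day.strftime("%Y_%m")                   # e.g. "2024_03"
    filename = "backup_%s.zip" % day.strftime("%Y%m%d")
    return "%s/backup/%s/%s" % (mz_home, subdir, filename)
```

For example, a backup taken on 5 March 2024 with `MZ_HOME=/opt/mz` would land in `/opt/mz/backup/2024_03/`.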
• Cleanup: Lets you configure the time during which a backup should be kept on disk before it is deleted.
Enable System Backup Check to enable the system backup. The default value is On.
Use Encryption Check to enable encryption of the backup.
Password Enter a password.
Figure 64. The System Backup Cleaner System Task - The Cleanup Tab
Imported Files Every time the System Importer imports a Configuration to the system, MediationZone®
saves it as a backup on the platform. Enter the period, in days, during which the
imported files should remain on disk.
System Backup Files Defines the maximal age of system backup files before they are removed
from the host disk.
The System Log Cleaner deletes the System Log periodically. You set the frequency values for deleting
different message types in the System Log Cleaner dialog box.
Error/Disaster Enter the maximal age of Error and Disaster messages before they are removed
from the database.
Information Enter the number of days during which Information messages should be kept in the
database.
4.1.2. Multithreading
Multithreading enables a workflow to operate on more than one UDR at a time.
By default, while a batch workflow handles one active thread at a time, a Real-time workflow always
executes multithreaded.
A workflow that is configured for multithreading can only handle data of the UDR type between agents
that are configured with a Thread Buffer. Other data types can, however, be processed elsewhere in
the workflow during the same period of time.
By using asynchronous agents in a workflow that is configured with multithreading, you increase the
workflow multithreading capabilities even further.
Note! Agents that route bytearray data in a real-time workflow do not use a buffer.
To configure a batch workflow agent with multithreading, use the Thread Buffer tab of the agent
Configuration. See Section 4.1.6.2.1, “Thread Buffer Tab”.
The menu items that are specific for workflow configurations are described in the following sections:
Item Description
Print... Select to print out the workflow configuration.
Workflow Properties Select this option to open the Workflow Properties dialog where workflow
related data is configured. For further information, see Section 4.1.8,
“Workflow Properties”.
Preferences Select this option to change the appearance of the workflow template. For
further information, see Section 4.1.6.3, “Visualization”.
4.1.3.3. Agents
Collection Use this option to select a Collection agent to include in the workflow. The menu to
choose from varies depending on the workflow type that you have opened, see Sec-
tion 4.1.1, “Workflow Types”. Note that you can also add an agent from the agent pane,
see Section 4.1.5, “Agent Pane” for more info.
Processing Use this option to select a Processing agent to include in the workflow. The menu to
choose from varies depending on the workflow type that you have opened, see Section 4.1.1,
“Workflow Types”. Note that you can also add an agent from the agent pane,
see Section 4.1.5, “Agent Pane” for more info.
Forwarding Use this option to select a Forwarding agent to include in the workflow. The menu to
choose from varies depending on the workflow type that you have opened, see Sec-
tion 4.1.1, “Workflow Types”. Note that you can also add an agent from the agent pane,
see Section 4.1.5, “Agent Pane” for more info.
4.1.3.4. Workflows
Add Workflows... Use this option to select workflows to add to the Workflow table at the bottom
of the Workflow Editor. See Section 4.1.7, “Workflow Table” for further in-
formation.
Delete Workflow Select this option to remove the entire workflow that is associated to the cell
marked in the Workflow table. See Section 4.1.7, “Workflow Table” for further
information.
Filter and Search... Select this option to open a "filter and search bar" below the Workflow table.
See Section 4.1.7, “Workflow Table” for further information.
Open Monitor Select this option to open Workflow Monitor if the selected Workflow is
valid.
Import Table... Select this option to open an import dialog where you import an export file.
For further information, see Section 4.1.7, “Workflow Table”.
Export Table... Select this option to open an export dialog where you select and save workflow
configurations in a file. For further information, see Section 4.1.7, “Workflow
Table”.
The additional buttons that are specific for workflow configurations are described in the following
sections:
Button Description
Cut Cuts selections to the clipboard buffer.
Workflow Properties Select this option to open the Workflow Properties dialog where workflow
related data is configured. For further information, see Section 4.1.8,
“Workflow Properties”.
Print... Select to print out the workflow configuration.
Zoom Out Zoom out the workflow illustration by modifying the zoom percentage
number that you find on the toolbar. The default value is 100(%). Clicking
the button between the Zoom Out and Zoom In buttons will reset the zoom
level to the default value. Changing the view scale does not affect the Con-
figuration.
Zoom In Zoom in the workflow illustration by modifying the zoom percentage number
that you find on the toolbar. The default value is 100(%). Clicking the button
between the Zoom Out and Zoom In buttons will reset the zoom level to
the default value. Changing the view scale does not affect the Configuration.
Real-time and batch workflow configurations contain three types of agents, sorted under three different
tabs, depending on their introspection. In MediationZone®, introspection refers to the type of data an
agent expects and delivers. The tabs are called Collection, Processing, and Forwarding.
A collection agent must have one or several outgoing introspection types and forwarding agents must
have one or several incoming introspection types. The workflow editor validates that the introspection
types between two connected agents are compatible.
Note! Real-time collection agents may also be receivers within the workflow. This is called bi-
directional capability and is used when the collector must respond back to the network element.
Different agents act differently on the Begin Batch and End Batch messages. The forwarding agents,
for instance, need a set of boundaries in order to close a file or commit a database transaction.
4.1.5.4.1. Script
A user may write a script to be executed regularly, usually to clean up directories and other resources
that need periodic attention.
Warning! It is strongly recommended that you run script task workflows on a separate Execution
Context, especially if you are running Sun Solaris. Running script task workflows on the same
Execution Context as other workflows may cause unpredictable errors and loss of data.
During a short time before exec() runs the actual script, the fork() call allocates the same
amount of memory for the script as is used by the Execution Context. If the memory is not
available, the Execution Context will abort with an out-of-memory error and must then be
restarted.
For information about installation of an Execution Context, see the Installation Instructions.
Script Name The name, including the full path, to the script performing the task.
Parameters Parameters expected by the Script Name. This field is optional.
4.1.5.4.2. SQL
A user may write SQL statements or SQL scripts to be executed regularly, usually to clean up tables
that need periodic attention.
Database Click Browse to select a Database profile. Note: You create database profiles in
the Database profile configuration.
SQL Statement Enter a PL/SQL script or an SQL statement in this text box.
Note: Group several SQL statements within a block. For a single SQL statement,
omit the semicolon (;) at the end.
• Clicking the New Configuration button in the upper left part of the MediationZone® Desktop
window, and then selecting Workflow from the menu. Select a Workflow Type; Batch, Realtime,
or Task, and then Create.
When the first agent is placed on the workflow template one row is automatically created in the
Workflow table.
To create a data flow, agents need to be connected to each other. To do that, press the left mouse button
on the center of the source agent and, without releasing it, move the pointer to the target agent and
release it there. This creates a connection (route) between the two agents, indicating the data flow.
All editing and triggering from the workflow template generate changes to the workflow configuration.
Examples of this are adding and removing agents, altering agent positions, and editing agent settings
and preferences. The workflow table will be affected if it includes columns that correspond to an agent
removed from the workflow template.
When an agent is deployed into the workflow template it receives a default name, displayed underneath
it. The same applies to routes when they are added. These names may be modified to ease identification
in monitoring facilities and logs.
Resting the cursor over an agent in the template displays parts of its Configuration in a tool-tip text.
4.1.6.1. Configuration
Note! Due to the agents' relationships within a workflow configuration, it is preferable that all
agents and routes are added before the Configuration is started.
Each agent in the workflow configuration has a specific Configuration window named after the agent
type. Each route in the workflow is by default either asynchronous or synchronous, but this can also
be configured per route. These Configurations can be accessed by double-clicking the agent or route,
or selecting the agent or route and right-clicking to reveal a pop-up menu. The popup menu contains
different options depending on if you have selected an agent or route.
The right-click menu for agents contains the options Configuration, Copy/Paste/Cut, MIM Browser,
and Workflow Properties.
The Configuration dialog contains configuration information located in tabs. The leftmost tab contains
configuration parameters that are unique for each agent while possible additional tabs are of a more
generic kind and may be recognized in other agent windows. A description of components in the first
tab is available in the user guide of the respective agent, while the remaining tabs are described in
Section 4.1.6.2, “Agent Services”.
Agents and their configurations can be copied using the Copy/ Paste functions in the Edit menu. Select
the source agent followed by Copy and then Paste. This will deploy a copy of the selected agent and
its configuration.
An agent name is modified by selecting the name (clicking on it) and typing a new name. Agent names
can also be edited in the configuration dialog, displayed when the agent is double-clicked. Agent names
must be unique within a Workflow configuration and may only contain the a-z, A-Z, 0-9, "-" and "_"
characters. However, the agent name cannot begin with the characters "-" or "_".
The right-click menu for routes contains the options Configuration, Copy/Paste/Cut, MIM Browser,
Workflow Properties, and Route Styles.
If you do not want to use the default configuration for the selected route, select the Override Default
check box and select either of the Asynchronous or Synchronous options, and click OK to save your
changes. A small A (for Asynchronous routes) or S (for Synchronous routes) will be visible at the start
of the route. If you want to see the type for all routes, and not just for the ones you have set explicitly,
you can select the option Show All Route Types in the Preferences dialog, see Section 4.1.6.3,
“Visualization” for further information.
If you click on the Route Styles option, you can determine the appearance of the route; Orthogonal,
Bezier, or Straight.
A route name is modified by selecting the name (clicking on it) and typing a new name. Route names
must be unique within a Workflow Configuration and may only contain the a-z, A-Z, 0-9, "-" and "_"
characters. However, the route name cannot begin with the characters "-" or "_".
By default, a batch workflow utilizes one active thread at a time. By configuring buffer storage for
an agent, it is possible for another thread to be created; this is called multithreading. One thread
populates the buffer, and another pulls data from it. Adding a buffer for another agent adds yet
another thread, and so on.
This is especially useful in complex workflows with many agents. All batch agents that receive UDRs
can utilize this functionality.
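The buffering described above is the classic producer/consumer pattern. The following is a minimal Python sketch (illustration only, not MediationZone code) of how one thread populates the buffer while a second thread pulls data from it, so that both "agents" work concurrently:

```python
import queue
import threading

def run_pipeline(udrs, process, buffer_size=10):
    """Sketch of a thread buffer between two agents: a producer
    thread populates a bounded buffer while a consumer thread pulls
    items from it and processes them."""
    buf = queue.Queue(maxsize=buffer_size)
    results = []

    def producer():
        for udr in udrs:
            buf.put(udr)          # blocks while the buffer is full
        buf.put(None)             # end-of-batch marker

    def consumer():
        while True:
            udr = buf.get()       # blocks while the buffer is empty
            if udr is None:
                break
            results.append(process(udr))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

The blocking `put` and `get` calls correspond to the Full and Empty conditions reported by the Print Statistics service: a frequently full buffer delays the producing agent, a frequently empty one starves the consuming agent.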
Note! A workflow that is configured with multithreading can only handle data of the UDR type.
If bytearrays are routed into an agent utilizing this service, an exception will be thrown.
Open the Configuration window of the agent and select the Thread Buffer tab. The tab is present in
batch processing and batch forwarding agents.
Use Buffer Enables multithreading. For further information, see Section 4.1.2, “Multithreading”.
Print Statistics Statistics to be used when determining where to use the Thread Buffer in the workflow.
After each batch execution, the full and empty percentages of the threads utilizing
the buffer are logged in the event area at the bottom of the Workflow Monitor
window.
For information on how to interpret the results, see Section 4.1.6.2.1.1, “Analyzing
Thread Buffer Statistics”.
A UDR may be queued up while another thread is busy processing a reference to it. Workflows routing
the same UDR on several routes and involving further processing of its data must consequently be
reconfigured to avoid this. A simple workaround is to route the UDR to an Analysis agent for cloning
before routing it to the other agents (one unique clone per route).
By using the Print Statistics alternative in the Thread Buffer tab, buffer statistics will be logged
for the whole batch execution, showing the full and empty percentages for the threads utilizing the
thread buffer. For information about multithreading in a batch workflow, refer to Section 4.1.2.2,
“Threads in a Batch Workflow”.
• The number within brackets, which is [5] in the example, is the batch counter id.
• Turnover is the total number of UDRs that have passed through the buffer.
• Available indicates how often (of the total turnover time) the buffer has been available for the
incoming queue to forward a UDR and for the outgoing queue to fetch a UDR.
• Incoming queue:
Full is logged for the incoming thread and indicates how often (of the total turnover time) the
buffer has been full and an incoming UDR had to wait for available buffer space.
In the example, Full indicates that for 46% of the incoming UDRs there was a delay because of
a full buffer.
• Outgoing queue:
Empty is logged for the outgoing thread and indicates how often (of the total turnover time) an
outgoing queue had to wait for data because of an empty buffer.
In the example, Empty indicates that for 41% of the attempts to fetch a UDR, the buffer was empty.
The percentage values for Empty and Full should be as low as possible, and as equal as possible. The
latter may be hard to achieve, since the agents may differ too much in processing complexity. If possible,
add and configure another agent to take over some of the processing steps from the most complex
agent.
See Section 4.1.6.2.1, “Thread Buffer Tab” for how to configure the thread buffer.
For batch Collection agents such as Disk, FTP, and SFTP, there is a service available in the agent's
Configuration dialog, in the Filename Sequence tab. The Filename Sequence service is used when
you want to collect files containing a sequence number in the file name. The sequence number is expected
to be found at a specific position in the file name and can have a fixed or dynamic size.
Note! When collecting from several sources, the Filename Sequence service is applied on the
data that arrives from all the sources, as though all the information arrives from a single source.
This means that the Filename Sequence number count is not specific to any of the sources.
Example 13.
TTFILE0001-TTFILE9999 Length: 4
TTFILE1-TTFILE9999 Length: 0
Note! Next Sequence Number is set for every Workflow in the Workflow
table in the workflow configuration.
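The extraction rule in Example 13 can be sketched as follows. The helper is hypothetical (not the product's implementation), and Length 0 is taken to mean a dynamic-size number, as in the second line of the example:

```python
import re

def sequence_number(filename, start, length):
    """Extract the file-name sequence number beginning at offset
    `start`. With a fixed `length`, exactly that many characters are
    read; with length 0 the size is dynamic, so all digits from the
    offset onwards are used."""
    if length > 0:
        digits = filename[start:start + length]
    else:
        match = re.match(r"\d+", filename[start:])
        digits = match.group(0) if match else ""
    return int(digits)
```

For the `TTFILE0001` pattern with Length 4, the digits at offset 6 yield sequence number 1; with Length 0, `TTFILE1` and `TTFILE9999` yield 1 and 9999 respectively.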
The Sort Order service is available on some batch collection agents and is used to sort matched files
before collection.
The sort pattern is expected to occur on a specific position in the file name or to be located using a
regular expression.
Note! Regular expressions according to Java syntax apply. For further information, see:
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
Most FTP and SCP servers follow the Unix modification date format for
file time stamps. The modification date resolution is one minute for files
that are time stamped during the last six months. After six months a res-
olution of one day is applied.
Value Pattern The method used to locate the item (part of the file name) to be the target for
the sorting. This could be either Position that indicates that the item is located
at a fixed position in the file name or Regular Expression indicating that the
item will be fetched using a regular expression.
Position If Position is enabled, the Start Position value states the offset in the file name
where the sorting item starts. The first character has offset 0 (zero).
The Length value states the length of the sorting item (part of the file name) if
it has a static length (padded with leading zeros). If the length of the sorting item
(part of the file name) is dynamic, the default value zero (0) will be used.
Regular Expression If enabled, the sorting item is extracted from the file name using the regular ex-
pression. If the file name does not end with a digit this option is the proper
choice.
Example 14.
FILEA_1354.log
FILEB_23.log
FILEC_1254.log
Use \d+ in the regular expression. Depending on the selected Sort Dir-
ection, the files are sorted in the following order:
FILEC_1254.log
FILEA_1354.log
FILEB_23.log
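The ordering in Example 14 can be reproduced with a short sketch (Python's `re` module is used here in place of Java regular expressions, and the helper is hypothetical). Note that the example orders `1254` before `1354` before `23`, so the extracted items appear to compare as strings rather than numbers, which this sketch assumes:

```python
import re

def sort_by_item(filenames, pattern=r"\d+", ascending=True):
    """Sort files by the item extracted from each file name with a
    regular expression, as the Sort Order service does. The extracted
    text is compared as a string."""
    def key(name):
        match = re.search(pattern, name)
        return match.group(0) if match else ""
    return sorted(filenames, key=key, reverse=not ascending)
```

Applied to the three files of Example 14 with the `\d+` pattern and ascending direction, this yields FILEC, FILEA, FILEB, matching the example.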
The Filename Template service is available to batch forwarding agents that are responsible for creating
a file. The configuration contains MIM resources for all available agents in the workflow, whose values
may be used when constructing a filename for the outgoing file.
Since this service includes a selection of MIM resources from available agents in the workflow, it is
advised to add all agents to the workflow, and to assign route and agent names, before the filename
template configuration is completed.
Note! Filename Template also provides you with the so called Dynamic Directory support. This
means that you can change the output directory during execution of a workflow whose input
data is bytearray. See Input Data in the configuration Target tab of your relevant forwarding
agent.
By creating directories and subdirectories whose names consist of MIM values, and by adding
appropriate APL code, you configure the output directory to sort the output data into directories
that are created during the workflow execution. For further information see Section 4.1.6.2.5,
“Defining a MIM Resource of FNTUDR Type”.
The table, containing MIM resources, user defined values, separators, and/or directory delimiters,
creates the file path or file name. The order of the items in the table defines the order in the file path
or file name.
Since the service utilizes a selection of MIM resources from available agents in the workflow, it is
advised to add all agents to the workflow before the filename template configuration is completed.
Create Non-Existing Directories Checkbox When checked, non-existing directories stated in the path
will be created. If unchecked, the agent will abort if a
needed directory is missing.
MIM Defined Determines if the Value will be selected from a MIM resource.
The MIM resource of type FNTUDR will be represented in the template table in
the same way as other MIMs, but will have a different appearance when the
filename or filepath is presented. A MIM FNTUDR value can represent a sub path
with delimiters, or a part of a filename or a directory. For further information about
how to use the FNTUDR in filename templates, see Section 4.1.6.2.5, “Defining
a MIM Resource of FNTUDR Type”.
User Defined Determines if the Value will be a user defined constant entered in the text field.
Directory Delimiter Determines if the Value will be a directory delimiter, indicating that the file sub
path will have a directory delimiter at that specified position. It is not allowed to
have two directory delimiters directly after each other, or to have a delimiter at
the beginning or end of a filename template.
The MIM resource of the special UDR type FNTUDR can include, begin, and/or
end with directory delimiters; this must be noted when adding delimiters in the
template. For further information about using the FNTUDR in filename templates,
see Section 4.1.6.2.5, “Defining a MIM Resource of FNTUDR Type”.
Size Number of allocated characters in the file name for the selected MIM resource
(or user defined constant). If the actual value is smaller than this number, the
remaining space will be padded with the chosen padding character. If left empty,
the number of characters allocated in the file name will be equal to the length of
the MIM value or the constant.
Padding Character to pad remaining space with if Size is set. If Size is not set this value is
ignored.
Alignment Left or right alignment within the allocated size. If Size is not set this value is ig-
nored.
Separator Separating character to add to the file name after the MIM value or constant.
Date Format Adds a timestamp to the file name in the selected way. For further information
about the format, see Section 2.2.5, “Date and Time Format Codes”.
Example 15.
Assume a workflow containing a Disk collection agent named DI1 and two Disk forwarding
agents named DO1 and DO2. The desired output file names from both forwarding agents are
as follows:
A-B-C.DAT Where A is the name of the Disk forwarding agent, B is the name of the currently
collected file, C is the number of UDRs in the outgoing file, and .DAT is a custom suffix. It
is desired that the number of UDRs in the files takes up six characters and is aligned right.
The following file name template configuration applies to the first agent:
The following file name template configuration applies to the second agent:
If two files FILE1 and FILE2 are processed, where 100 UDRs go to DO1 and 250 go to DO2
from each file, the resulting file names would be:
DO1-FILE1-000100.DAT
DO2-FILE1-000250.DAT
DO1-FILE2-000100.DAT
DO2-FILE2-000250.DAT
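The assembly of such names can be pictured with a small sketch (the helper and its tuple layout are hypothetical, not the product's implementation; Size, Padding, Alignment, and Separator behave as the table above describes):

```python
def template_name(parts):
    """Assemble a file name from template items. Each item is a
    tuple (value, size, padding, alignment, separator); size 0 means
    the value is used as-is, otherwise it is padded to `size`
    characters with `padding`, aligned left or right."""
    out = []
    for value, size, padding, alignment, separator in parts:
        if size:
            pad = value.rjust if alignment == "right" else value.ljust
            value = pad(size, padding)
        out.append(value + separator)
    return "".join(out)

# Example 15, first file of agent DO1: agent name, collected file
# name, UDR count padded to six characters aligned right, suffix.
name = template_name([
    ("DO1", 0, "", "left", "-"),
    ("FILE1", 0, "", "left", "-"),
    ("100", 6, "0", "right", ""),
    (".DAT", 0, "", "left", ""),
])
```

With these items, `name` reproduces the first resulting file name of Example 15, `DO1-FILE1-000100.DAT`.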
A MIM resource with a value of the FNTUDR type included in the filename template is treated
somewhat differently than other MIM resources. A FNTUDR value is a text string that can contain
delimiters. The delimiters in the FNTUDR value will be replaced by directory delimiters when determ-
ining the target file path. The FNTUDR is defined in the FNT folder.
Example 16.
The following example shows a set of APL code that creates a FNTUDR value and publishes
it as a MIM resource. To use the FNTUDR value in a filename template, the MIM resource
must be added in the filename template configuration.
import ultra.FNT;
consume {
fntAddString(fntudr, "dir");
fntAddString(fntudr, "1");
fntAddDirDelimiter(fntudr);
fntAddString(fntudr, "dir2");
fntAddDirDelimiter(fntudr);
fntAddString(fntudr, "partOfFileName");
udrRoute(input);
}
The following filename template configuration utilizes the FNTUDR published by the APL
code.
The resulting output file in the previous example will be saved in a file with the following sub
path from the root directory.
dir0/dir1/dir2/partOfFileName20070101_1
For further information about how to manipulate FNTUDRs with APL functions and how to publish
MIM resources, see the APL Reference Guide and Section 2.2.10, “Meta Information Model”.
4.1.6.3. Visualization
In the workflow editor, zoom in or out the workflow illustration by modifying the zoom percentage
number that you find on the tool bar. The default value is 100(%). To change the zoom value, click the
increase or decrease icons. Clicking the button between these icons will reset the zoom level to the
default value. Changing the view scale does not affect the configuration.
It is also possible to change the appearance of the workflow template in other ways. This is done by
opening the Preferences dialog, found in the Edit menu.
Route Style Sets the style of the routes in the workflow. Route style can also be changed for
one specific route, not affecting the entire workflow. This is done by right-clicking
the route in the workflow and selecting Route Styles. The default route style is
Bezier.
Grid Style Determines how the grid should be displayed: Invisible, Dot, or Line. Invisible
is the default.
Grid Size With this slider you can change the grid density. A large number will increase
the distance between agents.
Show All Route Types The route type, asynchronous or synchronous, is indicated with a small bold
A or S for all routes where the type has been configured explicitly, see Sec-
tion 4.1.6.1.2, “Right click menu for routes”. However, if you want to display
the route type for routes with default configuration as well, you can select
this option.
Note! This option is only visible when you are in a real-time workflow.
There must always be at least one runnable workflow per workflow configuration; otherwise it will
not be valid.
The three leftmost columns gather the workflow meta data: Valid, ID, and Name. A workflow table
will always contain these columns. The ID is automatically generated starting at 1. The ID is unique
within the workflow configuration. The Name will be generated based on the ID, for example
'Workflow_1'. The names can be edited; however, if two workflows have exactly the same name, a
validation error will occur.
The workflow table is populated depending on settings made in the Workflow Table tab of the
Workflow Properties dialog. For example, adding rows and field type settings are done there, and
the changes propagate to the workflow table.
Apart from the three first columns, the columns in the table represent fields of Default and Per
Workflow type. See Section 4.1.8, “Workflow Properties” for further information about the field types.
Default fields have the default value displayed in the instance table as <Val>, where Val represents
the actual default value. If no default value is set in the agent in the template, < > is displayed. If the
field is of Per Workflow type and not yet defined, an error message is shown in the cell, pointing out
that the cell is not valid.
The columns can be sorted by clicking the column heading. Both headings for one single column and
headings that span a number of columns can be used to change the order, either descending or
ascending.
Edit cell The command is used to put the cell in edit mode, if the content is allowed to be
altered. If it is not, the Edit Cell command will be grayed out and not possible to
select. Editing can also be enabled either by double-clicking the cell or by pressing
any key on the keyboard.
A cell can be locked to a certain input type; some cells will only accept numbers
or a string.
Clear cell The command is used to remove the content of the selected cell. If it is a field of
Default type this command will change the cell content to the default value set
in the template. If the cell content must not be cleared, the command will be
grayed out from the menu and will not be possible to select.
Edit from default The command is used to edit the cell by inserting the default value for that field
that you set in the template. The command is only available for cells that render
from fields of Default type.
Enable External Select this to mark the field as an External Reference. Then, through Edit Cell,
Reference you enter the Local Key reference. The value is applied during the workflow run-
time. For further information see Section 9.5, “External Reference Profile”.
Disable External Select this to remove the External Reference value as well as mode. For further
Reference information see Section 9.5, “External Reference Profile”.
Add Workflow The command adds a workflow (a row) at the bottom of the table. The added
workflow will instantly get an ID and Name.
Add Workflows The command adds the number of workflows (rows) that you specify.
Delete Workflow The command removes the entire workflow that is associated with the marked cell.
If removed, the ID number of that workflow will never be reused within that
workflow configuration. To remove more than one workflow, select all the relevant
cells.
Duplicate Workflow The command duplicates the entire workflow that is associated with the marked
cell. The new workflow is added at the bottom of the table. More than one cell
can be marked to enable duplication of several workflows at a time. Note: New
IDs and Names are generated.
Show Validation Message The command opens an information dialog where a message regarding the
validity of the template, workflow, and cell is stated. The dialog must be closed
to return to the Configuration.
Show Specific Select to see the references of the specific workflow.
References
Note: Selecting Show References from the View menu, displays references that
are relevant to the workflow configuration.
Open Monitor Opens Workflow Monitor if the selected workflow is valid.
Export Table Opens an export dialog-box where you select and save workflow configurations
in a file. With this export file you transfer and update workflow table data either
on your current machine or on a different client. The export file can be created
in any of the following formats:
• .csv
• .ssv
• .tsv
The .csv export file contains a header row, comma (,) delimited fields, and text
values that are delimited by a quotation mark(").
Exported fields that contain profiles are given a unique string identifier. The ID
and Name fields are exported as well.
In the export file, External References are enclosed in braces ({}) and preceded
by a dollar symbol ($). For example: ${mywf_abcd}. For further information
see Section 9.5, “External Reference Profile”.
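The export format described above can be sketched with Python's csv module. This is only an illustration of the documented shape, not the product's writer, and the quoting of non-text fields is an assumption:

```python
import csv
import io
import re

# Write one workflow row the way the .csv export is described:
# a header row, comma-delimited fields, and text values that are
# delimited by a quotation mark.
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerow(["ID", "Name", "Directory"])
writer.writerow(["1", "Workflow_1", "${mywf_abcd}"])

# External References are enclosed in braces and preceded by a
# dollar symbol, so they can be spotted with a regular expression.
refs = re.findall(r"\$\{([^}]+)\}", buf.getvalue())
```

After running this, `refs` contains the single external reference key `mywf_abcd`.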
Import Table Opens an import dialog-box where you import an export file. This file might
contain, for example, data that has been saved in the workflow table, locally or
on a different client. This command supports the following file formats:
• .csv
• .ssv
• .tsv
2. If the ID number of the imported workflow is -1, the imported entry is added
to the bottom of the table.
Note: MediationZone® keeps track of the number of rows that you add to the
workflow table by using a row counter. If the row counter number is 98, and
the imported workflow's ID is -1, the imported workflow is stored with 99 as
the ID number.
3. If the ID number of the imported workflow does not exist in the table and is
not equal to -1, the imported entry is added to the table. The ID number remains
the same number as it was in the import file.
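The two ID rules above can be sketched as follows. The helper is hypothetical, and since step 1 of the list is outside this excerpt, the case of an ID that already exists in the table is deliberately left out:

```python
def import_workflow_id(existing_ids, row_counter, imported_id):
    """Apply the documented import rules: an ID of -1 takes the next
    row-counter value and the entry is appended; an ID not already in
    the table is kept as it was in the import file."""
    if imported_id == -1:
        row_counter += 1          # e.g. counter 98 -> new ID 99
        new_id = row_counter
    else:
        new_id = imported_id      # ID kept as in the import file
    existing_ids.add(new_id)
    return new_id, row_counter
```

For example, with a row counter of 98, an imported ID of -1 is stored as 99, while an unknown ID of 5 is stored as 5.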
Filter and Search The command opens a search and filter bar below the workflow table. The search
can be performed over all columns. Enter the search words or numbers in the
search field; the search starts when Find next is selected. The filter feature
works on workflow name only. The workflow table is updated as you type
text in the Filter Name field.
Using all lower case letters in the search and filter text field will result in case
insensitive search and filtering. If upper case letters are used anywhere in the text
field the search will be case sensitive.
The search and filter bar is closed by selecting the x symbol to the left of the bar.
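The case-sensitivity rule above (an all-lower-case query matches case-insensitively, while any upper-case letter makes the match case-sensitive) can be sketched as:

```python
def matches(query, text):
    """Sketch of the search rule: all-lower-case queries are
    case-insensitive; any upper-case letter in the query makes
    the match case-sensitive."""
    if query == query.lower():
        return query in text.lower()
    return query in text
```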
Field Selection Use this drop-down list to display in the table below either the fields of a specific
agent or, by selecting Show All, all fields.
Show Final Check to list in the table below only the fields that are set to Final.
Show Unavailable Check to list the fields that are write-protected. The unavailable fields are
listed grayed out and set to Final.
Name The names of fields of the workflow table are listed on this column and include:
• Execution Settings
• Throughput MIM
• Debug Type
Final Check to prevent this variable from appearing in the workflow editor table.
Note: This variable can still be modified, but only from its configuration. For
example: if the variable belongs to an agent, open the agent configuration
dialog-box to modify the variable.
Default You can check to set the field value to Default only if it is already set to a
certain value in the configuration. You can modify a Default value from the
workflow table; the default value remains in the field and appears grayed-out,
and the new value appears in black text on its left, within the same field.
Per Workflow Check to be able to set the value of the relevant field for each workflow on
the workflow table, separately.
Note: If you cannot set the profile per workflow in the Workflow table, see
the user guide of the agent that the profile is assigned to for further inform-
ation.
Enable External Ref- Check to enable the use of external reference values, for example from a
erence properties file, from within the workflow table. For further information see
Section 9.5, “External Reference Profile”.
Profile Click Browse to specify the External Reference profile. For further inform-
ation see Section 9.5.1.3.1, “To create an External Reference profile:”.
Number of Rows to Enter the number of workflows that you want to add to this configuration; the
Add workflow table will grow accordingly.
1. On the Workflow Table tab check either Default or Per Workflow for the Execution Context
field.
2. Click OK.
3. On the workflow table, right-click the Execution Context field that you want to edit, and select Edit
cell; the table cell now includes a ... button.
4. Click the ... button; the Edit Execution Settings dialog-box opens. For a detailed description see
Execution Settings in Section 4.1.8.4, “Execution Tab”.
Abort Immediately If enabled, the workflow immediately aborts on the first Cancel Batch message
from any agent in the workflow. The erroneous data batch is kept in its original
place and must be moved or deleted manually before the workflow can be
started again.
Abort After X Consec- If enabled, the value of X indicates the number of Cancel Batch calls allowed
utive Cancel Batch from any agent in a workflow before the workflow is aborted. The counter is
reset after each successfully processed data batch. Thus, if 5 is entered, the
workflow aborts on the 6th consecutive data batch that is reported erroneous.
All erroneous files, except the last one, are removed from the stream and
placed in ECS.
Never Abort The workflow will never abort. However, as with the other error handling
options, the System Log is always updated for each Cancel Batch message,
and files are sent to ECS.
A UDR that contains information on selected MIMs can be associated with the batch. This is useful
when reprocessing a batch from ECS; the fields of the Error UDR appear as MIMs in the collecting
workflow.
The batch UDR may also be populated from Analysis or Aggregation agents, which is useful when
you want to enter values other than MIMs.
The ECS Batch Error UDR section will be grayed out until one of the Abort after X consecutive cancel
batch or Never abort alternatives is selected.
Error Code Drop-down list where an Error Code as defined in the Error Correction System
Inspector can be selected.
Error UDR Type The error UDR to be associated with the batch. The appropriate format can be
selected from the UDR Internal Format Browser dialog opened by selecting
the browser button.
Depending on the selected UDR type the columns UDR Field and MIM Resource
will be populated.
UDR Field A list of all the fields available for the selected Error UDR Type.
MIM Resource The MIM Resource column will be populated by clicking (Map to MIM...). The
preferred MIM to map to the Error UDR Type fields can then be selected from
the MIM Browser dialog.
Logged MIMs The Error MIMs column holds information on which MIM resources are to be
logged in the System Log when the workflow aborts or sends UDRs and batches
to ECS. These values may also be viewed from ECS (the MIM column).
The most relevant resources to select are those that identify the data batch,
such as the Source Filename, if available.
Note that only a short summary of the functionality is given here. For further information,
see the MediationZone® Error Correction System user's guide.
Method - The column is populated through either of the APL functions
constraining or auditSet. The first is used on Counter columns, and
the latter on Value columns.
MIM - The column will be populated with the MIM value selected in the
MIM Resource column.
In Batch Workflow:
Note! If you choose to configure the distribution using EC groups, the selected distri-
bution type is also applied to the ECs within the groups.
Hint You can combine individual ECs and EC groups in the Execution Contexts
list. The selected distribution is then applied to all ECs, whether stated individually
or in groups.
• Sequential - Valid only if Execution Contexts are defined. Starts the workflow on the
first EC/EC group in the list. If this EC/EC group is not available, it will proceed with
the next in line.
• Workflow Count - Starts the workflow on the EC running the fewest number of
workflows. If the Execution Contexts list contains at least one entry, only this/these
ECs/EC groups will be considered.
• Machine Load - Starts the workflow on the EC with the lowest machine load. If the
Execution Contexts list contains at least one entry, only this/these ECs/EC groups will
be considered. Which EC to select is based on information from the System Statistics
sub-system.
• Round Robin - Starts the workflow on the available ECs/EC groups in turn, but not
necessarily in a specific order. If EC1, EC2 and EC3 are defined, the workflow may
first attempt to start on EC2. The next time it may start on EC3 and then finally on EC1.
This order is then repeated. If an EC is not available, the workflow will be started on
any other available EC.
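The Workflow Count strategy above can be sketched as a simple minimum selection over the candidate ECs. This is only an illustration of the idea; the EC names and counts are hypothetical, and the real selection is made by MediationZone based on live statistics.

```python
def pick_by_workflow_count(workflow_counts):
    """Sketch of the Workflow Count strategy: start the workflow
    on the EC currently running the fewest workflows."""
    # workflow_counts maps EC name -> number of running workflows
    return min(workflow_counts, key=workflow_counts.get)

# Hypothetical ECs: ec2 runs the fewest workflows, so it is chosen.
chosen = pick_by_workflow_count({"ec1": 5, "ec2": 2, "ec3": 4})
```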
Debug Type Select Event to channel debug results (see Debug in APL coding) as any other event.
Select File to save debug results in MZ_HOME/tmp/debug. The file name is made up of
the names of the workflow template and of the workflow itself, for example:
MZ_HOME/tmp/debug/Default.radius_wf.workflow_2.
If you save debug results in a file, and you restart the workflow, this file gets overwritten
by the debug information that is generated by the second execution. To avoid losing debug
data of earlier executions, set Number of Files to Keep to a number that is higher than 0
(zero).
• Number of Files to Keep: Enter the number of debug output files that you want to save.
When this limit is reached, the oldest file is overwritten. If you set this limit to 0 (zero),
the log file is overwritten every time the workflow starts.
98
Desktop 7.1
Example 17.
In this example there are 11 files in total, overwritten one by one, and the
rotation order is:
Default.radius_wf.workflow_2
|
V
Default.radius_wf.workflow_2.1
|
V
Default.radius_wf.workflow_2.2
|
V
:
:
|
V
Default.radius_wf.workflow_2.n
|
V
Deleted
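The rotation in Example 17 can be sketched as follows, assuming Number of Files to Keep is set to 10 (which yields the 11 files of the example: the base file plus ten numbered copies). The file names follow the pattern shown above; the keep value is an assumption for illustration.

```python
def rotated_names(base, keep):
    """Sketch of the debug-file rotation: the base file plus numbered
    copies .1 .. .keep; the oldest copy is the one overwritten when
    the limit is reached."""
    return [base] + ["{}.{}".format(base, i) for i in range(1, keep + 1)]

names = rotated_names("Default.radius_wf.workflow_2", keep=10)
```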
• Always Create a New Log File - Use this option to create a new debug output file each
time the workflow executes. In this case a timestamp will be appended to the file name
described above.
Example 18.
Default.radius_wf.workflow_2.1279102896375
Default.radius_wf.workflow_2.1279102902908
Default.radius_wf.workflow_2.1279102907149
Note! MediationZone® will not manage the debug output files when this option
is used. It is up to the user to make sure that the disk does not fill up.
If a MIM value other than the default is preferred for calculating the throughput, tick
the User Defined checkbox. The browser button opens a MIM Browser dialog that shows
the MIM values available for the workflow configuration, from which a new calculation
point can be selected.
Since the MIM value shall represent the amount of data entered into the workflow since
the start (for batch workflows, since the start of the current transaction), the MIM value
must be of a dynamic numeric type, as it changes while the workflow is running.
In Real-time workflow:
Note! If you choose to configure the distribution using EC groups, the selected
distribution type is also applied to the ECs within the groups.
Hint You can combine individual ECs and EC groups in the Execution
Contexts list. The selected distribution is then applied to all ECs, whether stated
individually or in groups.
• Sequential - Valid only if ECs/EC groups are defined. Starts the workflow on the
first EC in the list. If this EC is not available, it will proceed with the next in line.
• Workflow Count - Starts the workflow on the EC running the fewest number of
workflows. If the Execution Contexts list contains at least one entry, only this/these
ECs/EC groups will be considered.
• Machine Load - Starts the workflow on the EC with the lowest machine load. If the
Execution Contexts list contains any entries, only this/these ECs/EC groups will be
considered. Which EC to select is based on information from the System Statistics
sub-system.
• Round Robin - Starts the workflow on the available ECs/EC groups in turn, but not
necessarily in a specific order. If EC1, EC2 and EC3 are defined, the workflow may
first attempt to start on EC2. The next time it may start on EC3 and then finally on
EC1. This order is then repeated. If an EC is not available, the workflow will be
started on any other available EC.
EC Determines on what Execution Context(s), EC(s), the workflow may execute. If several
are entered, the selected Distribution is considered. If no EC is selected, MediationZone®
will consider all available ECs as possible targets.
To add an EC, select Add and then select one of the available ECs in the presented list.
A stand-alone workflow must be configured to run on a stand-alone EC. Only one stand-
alone EC can be configured.
The value that you enter here is the size of each route's queue in the workflow.
Queue By selecting Queue Worker Strategy, you can determine how the workflow should
Worker handle queue selection, which may be useful if you have several different collectors.
Strategy
You have the following options:
• Default
With the Default strategy, queues are selected in route insertion order. As long as
there are queued UDRs available on the first queue, that queue will be polled. This
means that routes with later insertion order may not receive as many UDRs as they
have capacity for, and get little or no throughput. This type of condition may be de-
tected by looking at the Queue Throughput for workflows in the System Statistics
view.
This is the preferred choice when you work synchronously with responses and process
small amounts of UDRs at any given time (which is not the same as low throughput).
• RoundRobin
The RoundRobin strategy works in the same way as the Default strategy, except
that each workflow thread will be given its own starting position in the routing queue
list. This means that as long as the number of workflow threads is equal to, or greater
than, the number of routing queues, no queue will suffer from starvation.
Use this strategy if the number of workflow threads is equal to, or greater than, the
number of routing queues, and it is desirable to prioritize faster routes before slower
ones.
Note! The insertion order depends on how close to an "exit", i e an agent without
any configured output, the queues are. The queues that are closest to an exit will
be inserted first, and the further a queue is from an exit, the further back in the
insertion list the queue will be.
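The RoundRobin idea above can be sketched as each workflow thread taking its own starting position in the routing-queue list. This is an illustration only; the thread and queue indices are hypothetical, not part of the product API.

```python
def starting_queue(thread_index, num_queues):
    """Sketch of the RoundRobin strategy: each workflow thread is
    given its own starting position in the routing-queue list, so
    with at least as many threads as queues no queue starves."""
    return thread_index % num_queues

# Three routing queues, four threads: starting positions wrap around.
positions = [starting_queue(t, 3) for t in range(4)]
```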
Add Workflow Service Click the Add... icon to select a service to be used by the workflow. In
the Add Workflow Service dialog, select a service from the list and
click Apply after each selected service. Click OK when finished.
Remove Workflow Service Select a service from the Services list and click the Remove icon to
remove the service from the workflow.
4.1.8.5.1.1. Overview
This section describes the Couchbase Monitor Service. With this service you can access the current
status of Cluster Nodes that belong to a configured Couchbase profile.
The service publishes MIM values that enable workflows to detect if a cluster is online and the number
of nodes that are available.
Based on this information the workflows can be configured to mitigate connection problems e g by
attempting to connect to a different Couchbase Cluster.
To open the configuration for the Couchbase Monitor Service, open the Workflow Properties dialog
in a real-time workflow configuration, click on the Services tab, click on the Add button, select the
Couchbase Monitor Service option and click OK.
Click Browse... and select the Couchbase profile you want to apply.
Note! The Monitoring setting must be enabled in the selected Couchbase profile in order to
use the Couchbase Monitor Service.
For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.
4.1.8.5.1.3.1. Publishes
4.1.8.5.1.3.2. Accesses
This section describes the Supervision Service. With this service you can create decision tables for
triggering different actions to be executed based on current MIM values. This may, for example, be
useful for overload protection purposes.
4.1.8.5.2.1. Overview
The Supervision Service uses decision tables, where you can define different actions to be taken de-
pending on which conditions are met; log in System Log, use an overload protection configuration,
or generate an event. You can use any MIMs available in the workflow for configuration of conditions,
e g throughput, queue size, etc.
The Supervision Service is available for real-time workflows, and can be configured in the Services
tab in the Workflow Properties dialog. The supervision service with action overload can also be
manually triggered with mzsh commands, in case it is needed for maintenance or other purposes. If
the service is manually triggered, it has to be reverted to automatic mode for the settings in the Services
tab to take effect once again. See the Command Line Tool user's guide for further information.
To open the configuration for the Supervision Service, open the Workflow Properties dialog in a
real-time workflow configuration, click on the Services tab, click on the Add button, select the
Supervision option and click OK.
The configuration for the Supervision service will now appear on the right side of the Services tab.
105
Desktop 7.1
Execution Interval (ms) Enter the time interval, in milliseconds, with which current MIM values
should be checked against the conditions in the decision tables. This config-
uration will be valid for all decision tables.
Decision Tables All the decision tables you have configured are listed in this section. Click
on the Add button to add a new decision table. It may be a good idea to
have different decision tables for different purposes.
Note! Even though you can change the order of your decision tables, this will not affect the
functionality. All decision tables will be applied.
When you select to add a new decision table, the Add Decision Tables dialog will open.
In this dialog you configure your decision table. In a decision table you determine which action to take
depending on which conditions are met. These conditions and actions are configured in separate lists
and will then be available for selection in the decision table configuration.
Decision Table Enter a name for your decision table in the Name field.
Table Parameters Click on the buttons Action Lists and Conditions Lists to configure the different
conditions and actions for this table.
Decisions Each configured condition list is displayed in the Conditions column. Each
of these conditions can be set to either True, False or -. If you want to add more
columns for setting up different combinations of conditions, you can right-click
the Action column heading and select to add more columns. For each column
with a condition combination, you can then select which action to take in the
drop-down list containing all configured actions.
1. Configure conditions. The conditions you configure are based on different MIM parameters having
defined values.
2. Configure actions. When configuring the actions you can select to have a Supervision event
generated, to reject messages, or to log an entry in the System Log. Rejection can be made on all
messages or on a certain percentage; 0, 25, 50, or 100 %.
Hint! If you are using Diameter or Radius agents in your workflow, you can also select from
a range of Diameter and Radius specific overload protection strategies in order to only reject
specific types of messages. See Section 13.2, “Diameter Agents” and Section 13.1, “Radius
Agents” for further information about these strategies.
3. Create the decision table, i e set the conditions to either True, False or - (which means ignore) and
select which action to take.
4. Set a name.
5. Click Add and repeat steps 1 to 4 for all the decision tables you want to create.
Note! When a condition is evaluated to true, the corresponding action will be performed only
once, until any other condition is also evaluated to true. Generally, this means that a minimum
of two conditions is required in the decision table.
To configure conditions:
Left Operand Select a MIM parameter you want to use for your condition in this section.
Operator list This is the drop-down list located between the two operands. Select either > (larger
than), < (smaller than), == (equals), or != (not equal).
Right Operand Select what the selected MIM parameter and operator should match; either another
MIM parameter, or a constant.
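A single Supervision condition as described above (left operand, operator, right operand) can be sketched with Python's standard operator module. The MIM values here are hypothetical constants standing in for live workflow MIMs.

```python
import operator

# The four operators offered in the Operator list.
OPS = {
    ">": operator.gt,
    "<": operator.lt,
    "==": operator.eq,
    "!=": operator.ne,
}

def evaluate(left, op, right):
    """Sketch of one condition: compare a MIM value (left) against
    another MIM value or a constant (right)."""
    return OPS[op](left, right)
```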
1. In the Add Decision Tables dialog, click on the Condition Lists button.
5. Select an operator.
8. Repeat step 4 to 7 until you have added all the conditions you want to have in the condition list and
then click Close when you are finished.
List Enter a name for the condition list in the Name field.
Match Select if you want all the conditions in the list to be matched or, if only one condition
is required to match by selecting either of the buttons Any of the Following or All
of the Following.
Conditions This section contains all the different conditions you have added to the list.
9. Select if you want to match all conditions in the list, or if you want to match one of the conditions
in the list.
10. Give the list a name and click on the Add button to add the condition list in the Create Decision
Tables dialog.
11. Repeat steps 3 to 10 until you have created all the condition lists you want to have and then click
Close when you are finished.
To configure actions:
1. In the Add Decision Tables dialog, click on the Action Lists button.
Action In this drop-down list you select whether you want an entry to be logged in the
System Log, an Overload Protection configuration to be applied, or a Supervision
Event to be generated that can be sent to various targets depending on how you
configure your Event Notifications.
Note! The Overload Protection option is only available if you have Diameter
or Radius agents in your workflow.
4. In this dialog, select which type of action you want to use; System Log, Overload Protection or
Supervision Event. Depending on what you choose, the options in the dialog differ.
• Select the percentage of messages you want to reject; 0, 25, 50 or 100 % in the Reject drop-down
list.
• In the Strategy drop-down list, you select if you want the action to be applied for all requests,
or only for requests following any of the Diameter overload protection strategies.
• Enter the event content in the Content field. This content can then be used when configuring
Event Notifications for this event.
9. Repeat step 4 to 8 until you have added all the actions you want to have in the action list and then
click Close when you are finished.
List Enter a name for the action list in the Name field.
Action This section contains all the actions you have added in this list.
10. Give the list a name and click on the Add button to add the action list in the Create Decision Tables
dialog.
11. Repeat steps 3 to 10 until you have created all the action lists you want to have and then click Close
when you are finished.
In the Decision Table tab you will now have two columns; Conditions and Actions.
The Conditions column contains all the condition lists you have created. In the Actions column
you can set each condition to either True, False, or - (ignore), and then select which action you
want to trigger when the settings in the decision table match.
Depending on how many conditions you have configured, there may be many different combinations
that you may want to configure different actions for. To add another column, right-click the Action
column heading and select the option Add Column.... A new column will then be added. This can be
repeated for all the different combinations you want to have.
Note! Only one action can be selected for each set of combinations.
4.1.8.5.2.2.1.4. Example
Case 1
Case 2
If the Incoming messages exceeds 100, 25 % of the incoming Diameter Credit-Control Initial requests
will be rejected.
Case 3
If the Incoming messages exceeds 150, 100 % of the incoming Diameter Credit-Control Initial requests
will be rejected.
In case you need to manually trigger or clear the supervision service with action overload, e g for
maintenance or other purposes, you can use the mzsh wfcommand. See the Command Line Tool
user's guide for further information.
4.1.9. Validation
Workflow configurations may be designed, configured, and saved step-by-step, but they are not valid
for activation until fully configured. A valid workflow configuration contains three types of
configuration data:
• Workflow data: General information related to the workflow configuration for instance, error
handling.
• Workflow structure data: Contains the agents and routes. A route indicates the flow of data depending
on the name of the route and the internal behavior of its source agent.
• Agent specific data: Each agent has a different behavior. Thus, each agent in the workflow config-
uration requires different configuration data in order to operate.
When clicking the Validate button or the Validate menu item the workflow configuration validation
is started. The validation is done in two steps:
2. If the workflow configuration is valid, the validation of the workflow table starts. The values in the
table are validated according to each agent's specifications. It is also checked that values have
been entered in all cells in the per-workflow columns. The result is presented in a validation dialog
and possible workflow errors are indicated in the workflow table. The validation message for a
specific workflow can be viewed by selecting the corresponding action in the pop-up menu. If none
of the workflows in the workflow configuration is valid, the following error message is shown:
If the reason why the workflow configuration is erroneous is not evident, the Validate button can be
applied to a row, or rows, to display a dialog with the error message(s).
When data is imported to the workflow table, the content is not validated; only the number of columns
and their types are checked. If validation errors occur during the import, the user is asked whether
the import should be aborted or continued (that is, importing with errors). Aborting an import
results in a rollback to the previous table.
When a workflow is saved it is silently validated, and if some of its configuration is invalid or
missing, a dialog will state this and ask whether to save the workflow anyway. Validity is not neces-
sary in order to save a workflow configuration: the workflow can be incomplete or the agent config-
uration can be faulty. The only exception is that all workflows in the workflow configuration must
have unique names. The workflow symbol on the window border and in the Open dialog will be
marked with a red cross if the workflow is not valid.
A real-time workflow does not usually fall back to scheduled mode and therefore will not automatically
pick up changes made to the workflow. Some real-time agents can be modified while they are running;
however, these changes are not saved. See Section 2.2.2, “Dynamic Update”.
Agent states and events can be monitored during workflow execution and the monitor also allows for
dynamic updates of the configuration for certain agents, through sending of commands. A command
can, for example, tell an agent to flush or reset data in memory. For further information about applicable
commands, see the relevant agent user's guide. See also Section 2.2.2, “Dynamic Update”.
Note! The workflow monitor can apply commands only on one workflow at a time, in one
monitor window per workflow. The monitor functionality is not available for groups or the
whole workflow configuration. The workflow monitor window displays the active version of
the workflow.
Monitoring a workflow does not imply exclusive rights to start or stop it, the workflow can be activated
and deactivated by another user while monitored, or by scheduling.
To open the monitor from Execution Manager, see Section 7.6.1.3, “ The Right-Click Menu”.
4.1.11.1.1. Menus
The Workflow Monitor has three different menus: File, Edit and Event, which are all described in
further detail below.
File
Edit
Profiler Starts the currently loaded workflow using workflow profiling, see
Section 4.1.11.2, “Workflow Profiling”.
Dynamic Update Updates a running workflow with a new configuration that has been
entered in monitor mode. Agents that support update of the configura-
tion while running will be updated with the new configuration. Once
the update has been introduced to the workflow, the user is informed of
whether anything was affected. Only the currently executing workflow is
affected by the dynamic update. See Section 2.2.2, “Dynamic Update”.
Toggle Debug Mode On/Off Turns on or off debug information for a workflow.
Note that turning on the Debug mode might slow down the workflow
due to the additional log file writing.
View/Edit Workflow Click to open the workflow editor view.
Event
The workflow monitor visually shows the load status for agents and routes, using symbols to indicate
which agents are the slowest and which routes the data usually takes through the workflow. This
way, it is possible to find bottlenecks in the workflow.
The load status gives an indication of slow agents and routes; it does not necessarily mean there is a
critical problem. An Aggregation agent, for example, often has a higher load than many other agents,
since it might access disk storage and can be configured with complex business logic.
Note! The workflow profiling is only active when executing a workflow from the workflow
monitor, using the "Profiler" button (see Section 4.1.11.1.1, “Menus”).
When running a Workflow through scheduling (see Section 4.2.2.5.3, “Scheduling”), or using
the "Start" button in workflow monitor, there will be no profiling active.
The following formula is used to calculate the average load for agents:
The load status is calculated based on the average load and is indicated by a colored symbol close to
the agent symbol:
Normal Load:
High Load:
Load is equal to Average Load x 2 or between Average Load x 2 and Average Load x 3
Unknown Load:
There is no known data load. This could be because the agent is a Collection Agent which
cannot publish statistics, or there is not enough data load to be able to calculate an average.
The following formula is used to calculate the average load for routes:
The load status is calculated based on the average load and is shown using different thickness on the
routes:
Normal Load:
High Load:
Load is equal to Average Load x 2 or between Average Load x 2 and Average Load
x 3
Very High Load:
Unknown Load:
There is no known data load. This could be because there is not enough data load to
be able to calculate an average.
If Debug is turned on, the profiling result might become misleading since the debugging increases the
data load through agents and routes. To get a more correct result, turn off the Debug capability by using
the option Toggle Debug Mode On/Off.
1. To view events for all agents in the workflow, select Events for All Agents from the Event menu.
2. To only view events for some selected agents, click each agent while holding down the <Ctrl> key
on the keyboard.
State Description
Aborted At least one of the workflow agents has aborted. You track the reason for the error
either by double-clicking the aborted agent, or by examining the System Log.
Building When a workflow, or any of a workflow's referenced configurations, is being rebuilt,
for example when saving or recompiling, the workflow will be in the Building state.
Figure 101.
When a workflow is in the Building state, the Configuration Monitor icon in the status
bar will indicate that operations are in progress, and in the Workflow Monitor, the text
"Workflow is building" will also be displayed. Workflows started by scheduling con-
figurations will wait until the workflow leaves the Building state before they start.
Executed A workflow becomes Executed after one of the following:
Hold A workflow that is in the Idle state and is being imported, either by mzsh
systemimport r | sr | sir | wr or by the System Importer configured to Hold Execution,
enters the Hold state until the import activity is finished. The workflow then
resumes its Idle state.
Idle Until you execute the workflow for the first time it is in the Idle state. After execution,
although the workflow is indeed idle, the state shown on the display might remain as
any of the following: Executed, Completed, Aborted, or Not Started.
Invalid The workflow configuration is erroneous. Once you correct the error the workflow
assumes the Idle state. Note: A workflow in the Invalid state cannot be executed.
Loading The platform is uploading the workflow to the Execution Context. When the transfer
is complete, the Execution Context initializes the agents. When the workflow starts
running, the state changes to Running.
Running The workflow is currently executing.
Unreachable If the platform fails to establish connection with the EC where a workflow is executing,
the workflow will enter the unreachable state. When the workflow server successfully
reestablishes the connection, the workflow will be marked as Running, Aborted, or
Executed, depending on the state that the workflow is in. An Unreachable workflow
may require manual intervention if the workflow is not running any more. For further
information see Section 1.2, “Execution Context”.
Waiting The Waiting state applies only to workflows that are included as members in a workflow
group. In the Waiting state the workflow cannot start execution due to two parameters
in the workflow group configuration: The Startup Delay parameter, and the Max
Simultaneous Running quota. A Workflow in the Waiting state will change to Running
when triggered either by a user, the scheduling criteria of its parent workflow group,
or by a more distant ancestor's scheduling criteria.
In most cases if a workflow has aborted, one of its agents will have the state Aborted displayed above
it. Double-clicking such an agent will display a dialog containing the abort reason. Also, the System
Log holds valid information for these cases.
In a batch workflow a detected error will cause the workflow to abort and the detected error will be
shown as part of the abort reason and inserted into the System Log. A real-time workflow handles errors
by only sending them to the System Log. The only time a real-time workflow will abort is when an
internal error has occurred. It is therefore important to pay attention to the System Log or subscribe
to workflow error events to fully understand the state of a real-time workflow.
Note! Although a workflow has aborted, its scheduling will still be valid. Thus, if it is scheduled
to execute periodically, it will be automatically started again the next time it is due to commence,
since the cause of the abort might be a lost connection to a network element, which
could be available again later. Therefore, a periodically scheduled workflow that has aborted
is treated as Active until it is manually deactivated.
Click Show Trace; the Stack Trace Viewer opens. Use the information that it provides when consulting
DigitalRoute® Global Support.
Created The agent is starting up. No data may be received during this phase. This state only exists
for a short while during workflow startup.
Idle The agent is started, awaiting data to process. This state is not available for a real-time
workflow.
Running The agent has received data and is executing.
Stopped The agent has successfully finished processing data.
Aborted The agent has terminated erroneously.
The error reason can be tracked either by double-clicking the agent or by examining the
System Log.
Real-time agents have only three different execution states, which occur at every execution of the
workflow.
The batch agents are more complex and contain additional states in order to guarantee transaction
safety.
State Description
initialize The initialize state is entered once for each invocation of the workflow.
During this phase the workflow is being instantiated and all agents are set up
according to their configuration.
beginBatch This state is only applicable for batch workflows.
At every start of a new batch, the batch collection agent will emit a beginBatch
call. All agents will then prepare for a new batch. This is normally done every time
a new file is collected, but can differ depending on the collection agent.
consume The agents will handle all incoming UDRs or bytearrays during the consume
state.
drain This state is only applicable for batch workflows.
When all UDRs within a batch have been processed, the agents will enter the drain
state. This state can be seen as a final consume state, with the difference that there
is no incoming UDR or bytearray. The agent may however send additional information
before the endBatch state.
endBatch This state is only applicable for batch workflows.
The Collection agent will call for the endBatch state when all UDRs or byte
arrays have been transferred into the workflow. This is normally done at the end of
the file, but depends on the collection agent, or occurs when a hintEndBatch call is
received from any agent capable of utilizing APL code.
commit This state is only applicable for batch workflows.
Once the batch is successfully processed or sent to ECS, the commit state is
entered. During this phase, all actions that concern transaction safety will be
executed.
deinitialize This is the last execution state for each workflow invocation. The agents will clean
and release resources, such as memory and ports, and stop execution.
cancelBatch This state is only applicable for batch workflows.
If an agent fails the processing of a batch it may emit a cancelBatch call and
the setting in Workflow Properties will define how the workflow should act. For
more information regarding the Workflow Properties, see Section 4.1.8.2, “Error
Tab”.
If the last execution of the workflow aborted, the agents will enter the rollback
execution state right after the initialize state. The agents will recover the state
prior to the failing transaction and then enter beginBatch or deinitialize,
depending on whether there are additional batches to process.
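As a rough illustration of the order in which these states occur, the following Python sketch (not APL or any MediationZone API; the function name and batch representation are invented for the example) simulates the state sequence of one workflow invocation, including the documented rollback behavior after an aborted run:

```python
# Illustrative sketch only: simulates the order in which a batch agent passes
# through the execution states described in the table above.

def run_invocation(batches, previous_run_aborted=False):
    """Return the sequence of states an agent passes through for one
    workflow invocation processing `batches` (a list of UDR lists)."""
    states = ["initialize"]
    if previous_run_aborted:
        # Documented behavior: rollback right after initialize, to recover
        # the state prior to the failing transaction.
        states.append("rollback")
    for batch in batches:
        states.append("beginBatch")
        states.extend("consume" for _ in batch)   # one consume per UDR
        states.append("drain")
        states.append("endBatch")
        states.append("commit")
    states.append("deinitialize")
    return states

print(run_invocation([["udr1", "udr2"]]))
# ['initialize', 'beginBatch', 'consume', 'consume', 'drain', 'endBatch',
#  'commit', 'deinitialize']
```

With no remaining batches after a rollback, the agents go directly to deinitialize, matching the description above.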
4.1.11.8. Transactions
A workflow operates on a data stream, and MediationZone® supplies a transaction model in which the
workflow data and agent-specific counters are persistently synchronized. In theory the
synchronization could be performed continuously on a byte level, but in practice this would
drastically decrease the performance of the system.
The MediationZone® transaction model is based on the premise that Collection agents are free to
initiate a transaction with the Transaction server. At that moment the complete workflow is frozen and
the Transaction server saves the state of the workflow data that is queued for each agent. In practice,
agents indirectly emit a transaction when an End Batch is propagated. When all data is secured, the
workflow continues execution.
Some agents are designed to wait for acknowledgment from the sources they communicate with. Thus, a
stop request may take a while to be acknowledged. If a network element connected to a
MediationZone® Collection agent has terminated in a bad state, causing the Collection agent to hang, the
Execution Context on which the workflow is running must be restarted.
UDRs already in the workflow will be processed if they can be processed within the time interval set
by the ec.shutdown.time parameter in the executioncontext.xml file.
The parameter specifies the maximum time in milliseconds the execution context will wait before a
real-time workflow stops after a shutdown has been initiated. This is to enable the workflow to stop
all input and drain all UDRs in the workflow before stopping.
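As a sketch only, the parameter could look as follows in executioncontext.xml. The surrounding element layout here is hypothetical; the property name and the millisecond unit are as described above:

```xml
<!-- Hypothetical fragment: the exact element layout of executioncontext.xml
     may differ. ec.shutdown.time is the documented property; 20000 ms
     corresponds to the initial wait time of 20 seconds. -->
<property name="ec.shutdown.time" value="20000"/>
```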
Note!
The wait time is initially set to 20 seconds. If this value is set to 0 all draining is ignored and
the workflow will stop immediately.
The parameter can be changed at any time; however, the execution context must be restarted before the
changes take effect. For further information see the System Administration user guide.
If the workflow is unable to drain the data within the specified time the workflow will still stop and
any remaining data in the workflow will be lost. If this occurs a log note will be added in the System
Log.
In case an Inter Workflow forwarding agent is included in the workflow, the last file might be incomplete.
For these cases, the error handling is taken care of by the corresponding Inter Workflow collection
agent.
Batch Awaits the next End Batch before unloading the workflow, that is, when the current
batch is fully processed.
Immediate Deactivates the workflow immediately, causing the current batch to be terminated. This
may still take a while, but it is faster than the Batch termination option.
In this section you will find all the information you need to create, configure, and execute a workflow
group.
To open an existing Workflow Group configuration, double-click the configuration in the Configuration
Navigator, or right-click a configuration and then select Open Configuration(s)....
The menu items that are specific to a workflow group configuration are described in the following
sections:
4.2.1.1.1. Edit
4.2.1.1.2. View
Make sure this option is selected if you want to have these buttons visible
in the view. To remove the buttons, clear the check box for this option.
Configuration Filter Enables you to include or exclude the following from the Available to
Add list:
• Workflow Groups
• Workflows
• Realtime workflows
4.2.1.3.1. Members
Item Description
Available to add Upper pane: Displays a tree view of the workflows and workflow groups that are
saved within their respective configurations, and are available for you to add as
members when creating a new workflow group.
Lower pane: Displays a list of workflows that are included in the workflow configuration
that you select from the upper pane.
Contains settings described in Figure 110, “The Workflow Group Editor Scheduling Tab”
• Removing members
• Configuration
Note! An invalid workflow member will not affect the validity of the workflow group.
2. Either right-click the selected item and select Add as Member or click on the upper Add button.
Note! Batch, task, and system task workflow members can be combined in a workflow
group, but real-time workflow members can only be combined with other real-time workflow
members. However, for real-time workflows, we recommend that only one workflow is included
in each workflow group.
3. Click on the Save As button and give the new workflow group a name.
• A workflow group member might still run as a member of another workflow group in the system
1. Right-click on the member you want to remove in the Group Members list in the Members
tab in the Workflow Group configuration.
You will be asked to confirm that you want to remove the member.
When planning the execution order of the members in your workflow group, use the Prerequisites
column in the Group Members table. By doing so you ensure:
• A linear execution
• That every member is fully executed before the next member starts running
3. Select the check boxes for the members that the current member should follow.
Note! Apply Prerequisites settings for all members except for the first one in the execution
order.
4. Click OK.
See the image below for an example of how it may look.
You can rearrange the members' order of appearance in the Group Members list by using the
Up and Down buttons. When rearranging a list that is already configured with Prerequisites,
you will notice that the Prerequisites parameter is removed and a yellow warning icon
appears instead. Note that this will not affect the workflow group validity. To remove the
notification sign, either open the Prerequisites dialog box and click OK, or, to remove all the
notification signs, save the workflow group configuration and reopen it.
4.2.2.5.2. Execution
Entry Description
Max Simultaneous Running Enter the maximum number of workflows you want to be able to run
Workflows concurrently.
Note!
• If you do not specify a limit, your specific work environment and equipment
will determine the maximum number of workflows that can run simultaneously.
• This value applies only to the workflow group that you are defining and will
not affect members that are workflow groups.
Startup Delay If Max Simultaneous Running Workflows is set to a value larger than 1, enter the
delay (in seconds) of the execution start for each of the workflows that may run
simultaneously.
Note!
• If you do not enter any value, a very short delay will be applied by the system,
by default.
• You can assign a Startup Delay regardless of the member's status. Once the
delay is up, if the member in turn is disabled, the workflow group attempts
to execute the next member.
Continue This option activates the default behavior on member abort, which means that the
workflow group will run until all its members are fully executed or aborted.
Note! This means that groups with Real-time workflow members will continue
to run until all the members are aborted or stopped manually.
Stop Select this option to have the workflow group stop when a member aborts. A batch
workflow will finish the current batch and then stop.
Stop Immediately Select this option to have the workflow group stop immediately when a member aborts.
A batch workflow will stop even in the middle of processing a batch.
Enable Select this check box to enable the workflow group Execution Settings.
Note!
• Execution Settings that you configure here will only apply for workflow
members for which Execution Settings have not been enabled in the
configurations that they are part of.
Distribution A workflow executes on an Execution Context (EC). Specific ECs, or groups of ECs,
may be defined by the user, or the MediationZone® system can handle it automatically.
The Distribution settings are applied to all included group members, i.e. workflow
and workflow group configurations. When there are conflicting settings, the members
that are lowest in the workflow group hierarchy have precedence.
When the Distribution settings of workflow group configurations are set on the same
level in the hierarchy, they do not conflict with each other.
Note! If you select to configure the distribution using EC groups, the selected
distribution type will also be applied on the ECs within the groups.
Hint You can combine both individual ECs and EC groups in the Execution
Contexts list. The selected distribution will then be applied for all ECs stated
either individually or in groups.
• Sequential - Valid only if ECs/EC groups are defined. Starts the workflow on the
first EC in the list. If this EC is not available, it will proceed with the next in line.
• Workflow Count - Starts the workflow on the EC running the fewest number of
workflows. If the Execution Contexts list contains at least one entry, only this/these
ECs/EC groups will be considered.
• Machine Load - Starts the workflow on the EC with the lowest machine load. If
the Execution Contexts list contains any entries, only this/these ECs/EC groups
will be considered. Which EC to select is based on information from the System
Statistics sub-system.
• Round Robin - Starts the workflow on the available ECs/EC groups in turn, but
not necessarily in a specific order. If EC1, EC2 and EC3 are defined, the workflow
may first attempt to start on EC2. The next time it may start on EC3 and then finally
on EC1. This order is then repeated. If an EC is not available, the workflow will be
started on any other available EC.
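The four strategies can be sketched in Python as follows. This is an illustration of the selection rules described above, not the actual scheduler; the workflow-count and machine-load inputs stand in for data that the real system takes from the System Statistics sub-system:

```python
# Illustrative sketch of the four EC distribution strategies.
import itertools

def sequential(ecs, available):
    """First available EC in list order."""
    for ec in ecs:
        if available(ec):
            return ec
    return None

def workflow_count(ecs, counts):
    """EC running the fewest workflows."""
    return min(ecs, key=lambda ec: counts[ec])

def machine_load(ecs, load):
    """EC with the lowest machine load."""
    return min(ecs, key=lambda ec: load[ec])

def round_robin(ecs):
    """Endless iterator over the ECs in turn. The real scheduler does not
    guarantee a specific order; this sketch simply cycles."""
    return itertools.cycle(ecs)

ecs = ["EC1", "EC2", "EC3"]
print(sequential(ecs, lambda ec: ec != "EC1"))              # EC2
print(workflow_count(ecs, {"EC1": 5, "EC2": 2, "EC3": 9}))  # EC2
```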
4.2.2.5.3. Scheduling
The cause of execution for a workflow group can either be a planned time scheme or a specific event.
You can configure the cause of execution in the Scheduling tab.
Note! Changes to a running workflow group will not apply until the group has finished running,
which means that a real-time workflow will have to be stopped manually for changes to apply.
Entry Description
Day Plans Use this table to plan timed triggers that will execute your workflow group. Note that
you can define a list of various plans. MediationZone® will pick the plan that meets
the top priority according to the section called “Day Plans Priority Rule”.
Click on the Show... button to open a calendar that displays the workflow group
execution plan, see Figure 112, “The Execution Calendar”.
Event Trigger Use this table to define an event execution trigger for the workflow group, see the
section called “ Event Triggers ”.
The Day Plans table enables you to create a list of different execution schemes for the workflow group.
You can configure each Day Plan with any interval between executions.
Note! Two Day Plans should not contradict each other. An example of an invalid configuration:
Day Plan A is set to Tuesdays Off, while Day Plan B is set to Every 5 minutes between 15:00:00
and 15:05:00 on Tuesdays.
MediationZone® applies the following priority rule for picking a Day Plan out of the list:
3. Weekday (Monday-Sunday)
4. Every day
Click on the Add button in the Day Plan table in the Scheduling tab.
Entry Description
Day Select the target day. Valid options are:
• Every day
• A specific weekday
Day Off Select this check box to avoid execution on the day specified in the Day list.
Start At Enter a start time for the first execution.
Stop At Enter the time at which execution should stop.
If these fields are left empty, the default stop time, which is 23:59, will be
applied.
Repeat Every Enter the interval between execution start times in seconds, minutes, or hours.
If this field is left empty, only one execution session will run at the specified
start time.
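As an illustration of how these three fields interact, the following Python sketch (times simplified to minutes since midnight, field names following the table above) computes the execution times of one hypothetical Day Plan:

```python
# Illustrative sketch of a Day Plan's Start At, Stop At and Repeat Every
# fields. Times are minutes since midnight for simplicity.

def day_plan_times(start, stop=None, repeat_every=None):
    """Return the minutes-since-midnight values at which execution starts."""
    if stop is None:
        stop = 23 * 60 + 59          # documented default stop time, 23:59
    if repeat_every is None:
        return [start]               # single execution at the start time
    times = []
    t = start
    while t <= stop:
        times.append(t)
        t += repeat_every
    return times

# Every 5 minutes between 15:00 and 15:05 (cf. the Day Plan example above):
print(day_plan_times(15 * 60, 15 * 60 + 5, 5))   # [900, 905]
```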
A green colored cell in the calendar represents at least one scheduled execution during that time.
Event Triggers
To trigger the execution of a workflow group you add a row to the Event Trigger table. A row can
be either a single event, or a chain of events, that must occur in order for the workflow group execution
to start.
Note! An Event Trigger that is comprised of a chain of events will take effect only when all
the events that it includes have occurred.
The events that have occurred are stored in memory. When MediationZone® is restarted this
information is lost and none of the events on the event chain are considered to have occurred.
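The chain-of-events rule, including the fact that occurred events are held in memory only, can be sketched as follows; the class and its method are purely illustrative, not a MediationZone API:

```python
# Illustrative sketch of an Event Trigger built from a chain of events: the
# trigger fires only when every event in the chain has occurred. As in the
# real system, the occurred-set lives in memory only, so a restart (here:
# constructing a new instance) forgets all progress.

class EventChainTrigger:
    def __init__(self, required_events):
        self.required = set(required_events)
        self.occurred = set()         # in memory only; lost on restart

    def on_event(self, event):
        """Record an event; return True once the whole chain has occurred."""
        if event in self.required:
            self.occurred.add(event)
        return self.occurred == self.required

trigger = EventChainTrigger(["wf_B_done", "wf_C_done"])
print(trigger.on_event("wf_B_done"))   # False: chain incomplete
print(trigger.on_event("wf_C_done"))   # True: all events occurred
```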
3. Select an Event Type from the drop-down list, see Section 5.4, “Event Fields”
6. If you want to filter all the events based on specific values of the selected type, enter the values in
the Match Value(s) column. Otherwise, if you leave the default value, All, all the events of the
selected event type will trigger the execution of the workflow group.
Note! There are no referential constraints for Event Triggers nor any way to track relations
between workflows that are triggered by one another. For example: workflow A is defined to
be activated when workflow B is activated. Workflow B might be deleted without any warnings,
leaving Workflow A, still a valid workflow, without a trigger. This might happen since value
matching is based on a regular expression of the workflow name, and not on a precise link
match.
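The regex-based value matching described in the note can be illustrated as follows. Whether the product anchors the pattern or searches within the name is an assumption of this sketch; the workflow names are invented:

```python
# Illustrative sketch: the Match Value(s) entry is treated as a regular
# expression over the workflow name, not as a precise link to a workflow.
import re

def event_matches(match_value, workflow_name):
    """True if the event's workflow name matches the configured pattern.
    'All' (the default) matches every event of the selected type."""
    if match_value == "All":
        return True
    return re.search(match_value, workflow_name) is not None

print(event_matches("Workflow_B", "Workflow_B"))        # True
print(event_matches("Workflow_B", "Workflow_B_copy"))   # True: regex match,
                                                        # not an exact link
```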
State Description
Aborted The default behaviour is that a workflow group will not assume the Aborted state until
all of its members are back to Idle. When one member is in the Aborted state, the
workflow will continue until all the other members in the workflow group have finished
execution. Then the workflow group enters the Aborted state.
Note! You can change the default behaviour for when a member aborts by using
the Behaviour when member abort settings in the Execution tab, see
Section 4.2.2.5.2, “Execution”.
When you stop a workflow group, it will first assume the Stopping state and take
care of all transactions. Only then will the workflow group state change to Idle.
Hold A workflow group that is in the Idle state and is being imported either by the mzsh
systemimport r | sr | sir | wr command or by the System Importer configured to Hold
Execution, enters the Hold state until the import activity is finished. The workflow group
then resumes its Idle state.
Idle The workflow group configuration is valid, and none of its members is currently being
executed from within the workflow group.
Invalid There is an error in the workflow group configuration.
Running The workflow group is running, controlling the execution of its members according to
the configuration settings.
Stopping A manual stop of the workflow group, or of the parent workflow group, makes the
workflow group enter the Stopping state. The workflow group remains in the Stopping
state while all the members are finishing their data transactions. Then the workflow
group will go into either the Idle or the Aborted state.
Suppressed Workflow groups that are in the Running state while configurations are being imported
by the mzsh systemimport r | sr | sir | wr command, or by the System Importer
configured to Hold Execution, enter the Suppressed state. In this state any scheduled
members are prevented from being started. The workflow group remains in this state
until the import activity is finished. Then, if the workflow members are still running,
the real-time workflow group returns to the Running state. Batch workflow groups remain
in the Suppressed state until their members complete their execution. Then, the
workflow group state becomes Idle.
Note! If the workflow group is in the Suppressed state, and you stop all the
workflow group members, the workflow group will enter the Stopping state. If
this happens while an import process is going on, the workflow group will move
from the Stopping state to the Idle state and then to the Hold state.
The Suspend Execution configuration enables you to apply a restriction that prevents specific workflows
and/or workflow groups from running in specific periods of time.
Note! Grouping workflows is possible in the Suspend Execution configuration for the sole
purpose of suspending them during a defined period of time. These groups are not workflow
group configurations.
The menu items that are specific to Suspend Execution configurations are described in the following
sections:
4.2.5.1.1. View
Make sure this option is selected if you want to have the button visible
in the view. To remove the button, clear the check box for this option.
Configuration Filter Enables you to include or exclude the following from the Available to
Add list:
• Workflow Groups
• Workflows
• Realtime workflows
• Members Tab
• Scheduling Tab
On the Members tab you select the workflows whose execution you want to suspend during specific
periods of time.
Item Description
Available to add Upper pane: A tree view of the workflows and workflow groups that are saved
within their respective configurations and are available for you to apply execution
suspension to.
Lower pane: A list of workflows that are included in the workflow configuration
that you select in the upper pane.
Button Description
Click to add a member to the list.
From the Scheduling tab you suspend and enable the activation of workflows that you select on the
Members tab, if they are executed during the suspension interval.
Time When you click the Add Row button that is located at the bottom of the Scheduling
tab, a new row appears in the Scheduling tab table. This row includes the current time
stamp. You change the time stamp to a future date by first double-clicking the row and
then clicking the button that appears in the selected row. Then, from the Time Chooser
dialog box, you select a time and a date.
Note! As soon as a specified date has passed, according to the Desktop (client)
clock, the text in that row becomes italicized.
Enable Double-click the table cell to select it, and then check to enable the activation of the
workflow at the specified time stamp.
Disable Double-click the table cell to select it, and then check to suspend the workflow at the
specified time stamp.
3. Click the button to move each selection into the Members list on the right hand side of the tab.
4. On the Scheduling tab, click the Add Row button; the current time stamp is added to the table.
5. At this point you can either suspend the workflow immediately by checking Disable, or you can
edit the time and date to have the suspension start later. To do that, select the relevant row and click
the ... button; the Date Chooser dialog box opens.
6. Select the year, month, day, hour, and minutes and click OK; the row is updated with a later time
stamp.
7. Check Enable, to remove the execution suspension, or Disable, to suspend a workflow at the specified
time.
Note! The MediationZone® platform should be running when both the suspend and the enable
activation dates occur, for these actions to be effective.
5. Event Notifications
An Event Notification configuration offers the possibility to route information from events generated
in the system, to various targets. These targets include:
• Database
• Log file
• System Log
An event is an incident of importance that occurs in the MediationZone® system. There are several
different event types that all contain specific data about the particular event. Besides being logged,
events may be split up and selected parts may be embedded in user defined strings. For instance, consider
an event originating from a user, updating an existing Notifier:
This is the default event message string for User Events. However, it is also possible to select parts of
the information, or other information residing inside the event. Each type of event contains a predefined
set of fields. For instance, the event message previously exemplified, contains the userName and
userAction fields which may be used to customize event messages to suit the target to which they will
be logged:
Note! The Category field in the above picture is left empty intentionally, since it does not have
a value for this specific event. A category is user defined and is entered in the Event Categories
dialog. It is a string which will route messages sent with the dispatchMessage APL function.
The event types form a hierarchy, where each event type adds its own fields and inherits all fields from
its ancestors.
• Base
• Alarm
• Code Manager
• Group
• System
• User
• Workflow
• Agent
• Agent Failure
• Agent Message
• Agent State
• ECS Insert
• Debug
• Dynamic Update
• Workflow State
• External Reference
• <User Defined>
Each event type and its fields are described in Section 5.4, “Event Fields”.
The menu items that are specific to Event Notification configurations are described in the following
sections:
5.3. Configuration
A notifier is a selected target, receiving event data when one or several selected event types are generated
in the system. In addition, filters may be applied for each selected event type. Notifiers are configured
in the Event Notification Editor.
To create a new Event Notification configuration, click the New Configuration button in the upper
left part of the MediationZone® Desktop window, and then select Event Notification from the menu.
Event Notifier Enabled Check to enable event notification. Note that undirected events are not saved
by the system, and can therefore not be retrieved.
Notifier Setup A Notifier is the target where event messages, configured in the Event
Setup, are sent: for instance, to a database table, a log file, or the
MediationZone® System Log.
The overall appearance of the message string is also defined in this tab.
Event Setup In the Event Setup tab, events to catch are defined. If necessary, the message
string defined in the Notifier Setup is also modified.
Notification Type See the detailed description in Section 5.3.1.1, “Notification Type
Configurations”.
Duplicate Suppression (sec) Enter the number of seconds during which an identical event is
suppressed from logging. The default value is 0.
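The suppression window can be illustrated with the following sketch; the class and its bookkeeping are invented for the example and do not reflect the product's internals:

```python
# Illustrative sketch of Duplicate Suppression: an identical event arriving
# within the window (in seconds) is dropped. With the default of 0, nothing
# is suppressed.

class DuplicateSuppressor:
    def __init__(self, window_seconds=0):
        self.window = window_seconds
        self.last_seen = {}           # event -> time of last delivery

    def should_deliver(self, event, now):
        last = self.last_seen.get(event)
        if last is not None and now - last < self.window:
            return False              # identical event inside the window
        self.last_seen[event] = now
        return True

s = DuplicateSuppressor(window_seconds=10)
print(s.should_deliver("disk full", now=0))    # True
print(s.should_deliver("disk full", now=5))    # False: suppressed
print(s.should_deliver("disk full", now=15))   # True: window elapsed
```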
Base Configuration Enter the event target definition parameters. For further information
see Section 5.3.1.1, “Notification Type Configurations”.
Target Field Configuration See Section 5.3.1.2, “Target Field Configuration”.
• Database
• Log File
• Send Mail
• System Log
5.3.1.1.1. Database
Event fields may be inserted into database tables using either plain SQL statements or calls to stored
procedures.
Database The name of the database in which the table resides. Databases are defined in the
Database profile configuration.
SQL Statement Type in any SQL statement, using '?' for variables that are to be mapped against
event fields in the Event Setup tab. Note that trailing semicolons are not used. If
several statements are run, they must be embedded in a block.
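As an illustration of the '?' placeholder mapping, the following sketch uses Python's sqlite3 module in place of a Database profile; the table and column names are invented, and the event fields correspond to the userName and userAction example earlier in this chapter:

```python
# Illustrative sketch: event fields mapped to '?' variables in an SQL
# statement, with no trailing semicolon, as described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event_log (user_name TEXT, user_action TEXT)")

# The statement as it would be typed into the SQL Statement field:
sql = "INSERT INTO event_log (user_name, user_action) VALUES (?, ?)"

# The event field values mapped to the two variables:
conn.execute(sql, ("alice", "Updated Notifier"))
print(conn.execute("SELECT * FROM event_log").fetchall())
# [('alice', 'Updated Notifier')]
```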
Messages may be routed to ordinary text files on the local file system.
Directory The path to the directory where the file, to append to, resides.
Filename The name of the file. In case the file does not exist, it will be created when the first
message for the specific event map arrives. New messages are appended.
Size The maximum size of the file. When this size is exceeded, the existing file is renamed
and a new one is created upon the arrival of the next event.
The old file will receive an extension to the file name, according to
<date_time_milliseconds_timezone>.
Time The maximum lifetime of a file before it is rotated. When this time is exceeded,
the existing file is renamed and a new one is created upon the arrival of the next event.
• hour, rotation is made at the first full hour shift that is xx:59.
• 2 hour, rotation is made in predefined two hour intervals (0,2,4...22) when turning to
next full hour. For example after 01:59.
• 3 hour, rotation is made in predefined three hour intervals (0,3,6, ...21) when turning
to next full hour. For example after 02:59.
• 4 hour, rotation is made in predefined four hour intervals (0,4,8, ...20) when turning
to next full hour. For example after 03:59.
• 6 hour, rotation is made in predefined six hour intervals (0,6,12,18) when turning
to next full hour. For example after 05:59.
• 8 hour, rotation is made in predefined eight hour intervals (0,8,16) when turning to
next full hour. For example after 07:59.
• week, rotation will be done at midnight at the last day of the week.
• month, rotation will be done at midnight at the last day of the month.
The old file will receive an extension to the file name, according to
<date_time_milliseconds_timezone>.
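The N-hour interval rule can be illustrated with a small helper; this is a sketch of the boundary arithmetic only, not part of the product:

```python
# Illustrative sketch: with an N-hour rotation interval, boundaries fall at
# hours 0, N, 2N, ..., so the file rotates when the clock leaves the hour
# just before a boundary (e.g. after 01:59 for a 2-hour interval).

def rotates_at(hour, interval_hours):
    """True if a file on an `interval_hours` schedule rotates when the
    clock turns past hour `hour` (i.e. past hour:59)."""
    return (hour + 1) % interval_hours == 0

print(rotates_at(1, 2))   # True: rotation after 01:59
print(rotates_at(2, 2))   # False
print(rotates_at(7, 8))   # True: rotation after 07:59
```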
• Linefeed
• Comma
• Colon
• (None)
Note! Do not configure two different Event Notifiers to log information to the same file. Messages
may be lost since only one notifier at a time can write to a file. Define one Event Notifier with
several Event Setups instead.
It is also possible to send mails to one or several recipients when the specified events occur. Make
sure the correct parameters, mz.mailserver and mz.notifier.mailfrom, have been configured
in $MZ_HOME/etc/platform.xml.
Recipient The mail address of one or several recipients; use commas to separate multiple addresses.
Select Add to select mail addresses configured for available users.
For further information about how to obtain the text, see Section 5.3.1.2, “Target Field
Configuration”.
Subject The subject/heading of the mail. If Event Contents is selected, newlines will be replaced
with spaces to make the subject readable. If the string exceeds 100 characters, it is
truncated.
For further information about how to obtain the text, see Section 5.3.1.2, “Target Field
Configuration”.
Message The body of the mail message.
For further information about how to obtain the text, see Section 5.3.1.2, “Target Field
Configuration”.
Events may be sent in the form of SNMP traps to systems configured to receive such information. For the
MIB definition, see the $MZ_HOME/etc/mz_trap_mib.txt file.
Note! A new SNMP trap format is now available. For backward compatibility purposes, the
previous invalid format will still be used by default. However, if you want to use the new format
you can add the property snmp.trap.format.b in platform.xml, and set it to true
in order to activate the new values.
The value of the agentAddress field will be taken from the parameter pico.rcp.server.host.
Figure 126. Event Notification Editor - Notification Type Send SNMP Trap
For further information about how to obtain the text, see Section 5.3.1.2, “Target
Field Configuration”.
This notification type is similar to Section 5.3.1.1.4, “Send SNMP Trap ”, with one difference: It is
specifically designed to work for Alarm events.
Figure 127. Event Notification Editor - Notification Type Send SNMP Trap Alarm
Target Field Configuration See Figure 129, “Target Field Configuration - Log Line”.
Selecting this target will route messages, produced by the selected events, to the standard Medi-
ationZone® System Log. The Contents field from each event will be used as the message in the log.
Note! Do not route frequent events to the System Log. Purging a large log might turn into a
performance issue. If you must, keep the log size at a reasonable level by applying the
System Task System Log Cleaner. For further details see Section 4.1.1.4.9, “System Log
Cleaner”.
Depending on the parameter type, there will be one or several population types available in the list
next to it.
5.3.1.2.1. Manual
Selecting Manual allows the user to hard code a value, and thus gives no possibility of embedding
dynamic values in the message. The value entered will be assigned to the parameter exactly
as typed.
5.3.1.2.2. Event Field
Selecting Event Field allows the user to assign the value of one specific event field to the parameter.
For further information about fields valid for selections, see Section 5.4, “Event Fields”.
5.3.1.2.3. Event Contents
Selecting Event Contents assigns the value of each event's Contents field to the parameter. All
event types have a suitable event content text. For instance, referring to the example in Figure 120,
“Events Can Be Customized to Suit Any Target”, the Event Contents string will be:
Another example: the following string is reported for a User Defined Event:
The Message string originates from the dispatchMessage function. Note that nothing will be
logged unless dispatchMessage is used. In this case, the Field Maps in the Event Setup
will also be disabled.
Note! The same result is achieved when selecting Event Field as Log Line, and then selecting
Contents as mapping (the Event Setup tab).
5.3.1.2.4. Formatted
Selecting Formatted allows the user to enter text combined with variable names, which are assigned
event field values in the Event Setup tab.
Figure 130. Each variable in Notifier Setup will have its own Notifier Field in Event Setup - Send
Mail
For each variable entered in a field with the Formatted option selected, a notifier field will be added
in the Event Setup tab where you can then assign event field values.
Variable names must be preceded by a $, start with a letter, and consist of a sequence of letters,
digits, or underscores.
The settings in the screenshot above will be interpreted as containing the variables 'NO' and 'ANUM'.
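The naming rule above can be checked with a regular expression. A small sketch, using a pattern of our own rather than product code, that extracts the variables from a formatted field:

```python
import re

# A variable is a '$' followed by a letter, then letters, digits, or underscores.
VARIABLE = re.compile(r"\$([A-Za-z][A-Za-z0-9_]*)")

def extract_variables(text: str) -> list:
    """Return the variable names found in a Formatted field value."""
    return VARIABLE.findall(text)
```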
Filter The Filter table enables you to configure the event types to catch. For each event
type, a filter may be defined to allow, for instance, a specific workflow and two
specific event severities to pass.
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
Event Field Name The contents of this column varies depending on the selected event type. For
further information about different events, see Section 5.4, “Event Fields”.
However, these values are only suggestions. You can also use hard coded strings
and regular expressions.
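As a hedged illustration of these filter semantics (the field names and matching rules here are assumptions for the sketch, not the product's API), an event could be tested against a set of allowed values like this:

```python
import re

def passes_filter(event: dict, filters: dict) -> bool:
    """An event passes when, for every filtered field, at least one
    allowed value (hard coded string or regular expression) matches
    the whole field value."""
    for field, allowed in filters.items():
        value = event.get(field, "")
        if not any(re.fullmatch(pattern, value) for pattern in allowed):
            return False
    return True
```

For example, a filter allowing one specific workflow and two severities would pass an Error event from that workflow but reject events from other workflows.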
Example 19.
.*idle.*
(?m).*idle.*
Note! Some of the Event Fields let you select from four Match Value types:
Information, Warning, Error, or Disaster. For the rest of the Event Fields,
the Match Value is a string. Make sure you enter the exact string format
of the relevant Match Value. For example, the Event Field timeStamp can
be matched against the string format yyyy-mm-dd.
Field Map Maps variables against event fields. The Field Map table exists only if any of
the parameters for the selected notifier type is set to Formatted, Event Field or
SQL.
Notifier Field States the Notifier parameter available in the Notifier Setup tab.
If a specific parameter has more than one variable, it will claim one line per
variable.
Variable The name of the variable, as entered in the Notifier Setup tab. If the parameter
type is Event Field, this field will be empty.
Event Field Double-clicking a cell will display a list, from which the event field, to obtain
values from, is selected.
Add Event... Enables you to add an event that you generate by a call from an APL- or Ultra
code in the workflow configuration. Click Add Event to configure your event;
the new event type is added as a separate tab in the Event Setup.
Remove Event... Removes the currently selected event type tab in the Event Setup dialog.
Refresh Field Map Updates the Field Map table. Required if parameter population types or formatting
fields have been modified in the Notifier Setup tab.
The Event Notification Editor subscribes to these events and routes them to notifiers, for example, log
files or a database.
An Event Type is comprised of a set of fields containing the original event message, a set of standard
workflow related information, and event specific fields that are the parameters of the original event
message.
All Event types in the MediationZone® system inherit fields from the Base Event type. Workflow
related events inherit fields related to the workflow event as well, such as agentName. In addition
to these, User Defined Events will receive any fields as defined by the user.
• Events inherited from a Base Event (all other events), with additional information added.
The user-defined event must be configured in the Ultra Format Editor. Other than the fields entered
by the user, MediationZone® will automatically add basic fields. User Defined Events may only be
dispatched from an agent utilizing APL, that is, Analysis or Aggregation.
Figure 134. User Defined Events are Sent from Analysis or Aggregation Agents
Fields added by the user must be populated manually by using APL commands, while the basic fields
are populated automatically. Of the basic fields, only category and severity may be assigned values.
The other basic fields are read only; hence, it is not possible to assign values to them.
Note! Subscribing to Base Events is not recommended since it will match every event produced
in the system, which may generate a high volume of events.
• contents - A hard coded string containing event specific information; the original event message.
For instance, for the ECS Insert Event, this string will contain the type of data sent to ECS, the
workflow name, the agent name, and the UDR count. For information about the contents field, see
the specific event types (this table).
• eventName - The name of the Event, that is, any of the types described in this section, for example,
Base Event, Code Manager Event or Alarm Event.
• origin - The Execution Context on which the workflow that issues the event is running.
• receiveTimeStamp - The date and time when an event is inserted in the platform database.
This is the time used in, for example, the System Log.
• severity - The severity of the event. May be any of: Information, Warning, Error or Disaster.
The default value is Information.
• timeStamp - The date and time taken from the host where the event is issued.
• alarmDescription - The contents of the Description text box in the Alarm Detection Configuration.
See Figure 35, “The Alarm Detection”.
• alarmId - The unique number that the system uses to identify saved configurations.
• alarmModifierComment - The annotation that the user enters when closing the alarm.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• contents
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
For information about how to configure the Couchbase Monitor Service, see Section 4.1.8.5.1,
“Couchbase Monitor Service”.
5.5.4.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a Couchbase Monitor Event is generated.
Double-click on the field to open the Match Values dialog where you can click on the Add button to
add which values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard coded string or a regular expression.
The following fields are available for filtering Couchbase Monitor events in the Event Setup tab:
• clusterId - This field contains the ID of the monitored Couchbase cluster. In some situations, e.g.
when the event is caused by an incorrectly configured Couchbase profile, the ID may be unavailable.
• clusterNode - When the triggered event is related to a specific Couchbase node, this field contains
the name of the node.
• eventType - This field contains the type of event that was triggered. For information about the
available types see Section 5.5.4.2, “Couchbase Event Types”.
• profileKey - When the triggered event is related to a specific configuration, this field contains its
unique configuration Key. You can right-click on a configuration in the Configuration Navigator
pane to view its Key.
• profileName - When the triggered event is related to a specific configuration, this field contains its
name.
The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:
• category - If you have configured any Event Categories, you can select to only generate
notifications for Couchbase Monitor events with the selected categories. See Section 5.6, “Event Category”
for further information about Event Categories.
• contents - This field contains a string with event specific information. If you want to use this
field for filtering you can enter a part of the contents as a hard coded string.
• eventName - This field can be used to specify which event types you want to generate notifications
for. This may be useful if the selected event type is a parent to other event types. However, since
the Couchbase Monitor event is not a parent to any other event, this field will typically not be used
for this event.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-06.*" to catch all Couchbase Monitor events from the 1st
of June, 2014, to the 30th of June, 2014.
• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster.
• timeStamp - This field contains the date and time when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" to catch all Couchbase Monitor events from 9:00 to 9:59 on
the 15th of June, 2014.
Note! The values of these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
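The timestamp patterns mentioned above behave like ordinary regular expressions matched from the start of the string. A quick sketch, with invented sample timestamps:

```python
import re

# Invented sample values in the "yyyy-mm-dd HH:MM:SS" style shown in the text.
timestamps = [
    "2014-06-15 09:12:44",
    "2014-06-15 10:01:02",
    "2014-07-01 09:30:00",
]

# "2014-06.*" selects every event from June 2014.
june = [t for t in timestamps if re.match(r"2014-06.*", t)]

# "2014-06-15 09:.*" narrows that to 9:00-9:59 on the 15th of June.
nine_oclock = [t for t in timestamps if re.match(r"2014-06-15 09:.*", t)]
```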
The connection to ZooKeeper has been lost and the monitoring will
be stopped until the connection is re-established. Manual investigation
is required to find the cause of the connection loss. One cause could
be that a majority of ZooKeeper nodes running in the configured
Execution Contexts have shut down and you may need to restart
them.
MONITOR_REMOVE_FAILED Failed to remove monitored configuration.
Figure 135.
• When this notification is generated, a new line with information will be logged in the
couchbase_monitor.txt file located in the /home/user/couchbase_monitor folder, containing
the following data:
Figure 136.
• When this notification is generated, an entry will be added in the cbmonitor table in the
database configured in MyDatabase profile with the following data:
• The event message will be inserted in the message column in the database table.
• The timestamp from the EC will be inserted in the time column in the database table.
For information about how to enable dynamic peer discovery, see Section 13.2.3.2.1.3, “Realm Routing
Table”.
5.5.5.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a Diameter dynamic event is generated.
Double-click on the field to open the Match Values dialog where you can click on the Add button to
add which values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard coded string or a regular expression.
The following fields are available for filtering Diameter dynamic events in the Event Setup tab:
• dynamicPeers - This field contains a comma separated list of dynamically discovered peers in
a realm and their settings.
• realmName - This field contains the name of the realm for which the event is generated.
The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:
• category - If you have configured any Event Categories, you can select to only generate
notifications for Diameter dynamic events with the selected categories. See Section 5.6, “Event Category”
for further information about Event Categories.
• contents - This field contains a string with event specific information. If you want to use this
field for filtering you can enter a part of the contents as a hard coded string.
• eventName - This field can be used to specify which event types you want to generate notifications
for. This may be useful if the selected event type is a parent to other event types. However, since
the Diameter dynamic event is not a parent to any other event, this field will typically not be used
for this event.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-06.*" to catch all Diameter dynamic events from the 1st of
June, 2014, to the 30th of June, 2014.
• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster.
• timeStamp - This field contains the date and time when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" to catch all Diameter dynamic events from 9:00 to 9:59 on the
15th of June, 2014.
Note! The values of these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Figure 137.
• When dynamic peer discovery is enabled and the peers of a realm are looked up in DNS, a
notification will be generated.
• When this notification is generated, a new line with information will be logged in the diamet-
er_dynamic_event.txt file located in the /home/user/diameter folder, containing the following
data:
Figure 138.
• When dynamic peer discovery is enabled and the peers of a realm are looked up in DNS, a
notification will be generated.
• When this notification is generated, an entry will be added in the diameterdynamic table in
the database configured in MyDatabase profile with the following data:
• The timestamp from the EC will be inserted in the timestamp column in the database table.
• The realm name will be inserted in the realm column in the database table.
• The peer information (hostname, port, protocol) will be inserted in the peers column in
the database table.
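A comma separated peer list like the dynamicPeers field described above could be split into (hostname, port, protocol) entries as sketched below. The "host:port;protocol" layout used here is an assumption for illustration, not the documented format of the field:

```python
def parse_peers(s: str) -> list:
    """Parse a comma separated list of peers in an assumed
    'host:port;protocol' layout into (host, port, protocol) tuples."""
    peers = []
    for item in s.split(","):
        hostport, _, protocol = item.strip().partition(";")
        host, _, port = hostport.partition(":")
        peers.append((host, int(port), protocol or "tcp"))
    return peers
```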
1. Idle - This is the state of a workflow group that is valid but where no workflows are being executed.
2. Invalid - A workflow group will change state from Idle to Invalid if the configuration is made invalid.
Once the configuration is valid again, the workflow group will change back to the Idle state.
3. Hold - A workflow group will change state from Idle to Hold if configurations are being imported
with certain options selected. Once the import is finished, the state will change back to Idle again.
If a default import is made, the workflow group will not change into the Hold state.
4. Running - A workflow group will change state from Idle to Running as soon as it is being executed.
If the execution is allowed to finish, the state will change back to Idle.
5. Suppressed - If configurations are being imported with certain options selected while a workflow
group is in Running state, the state will change to Suppressed. For real-time workflows, the state
will change back to Running again once the import is finished. Batch workflows will remain in
Suppressed state until all members have finished execution and will then change to Idle state. If
the workflow group is manually stopped, the state will change to Stopping. If a default import is
made, the workflow group will not change into the Suppressed state.
6. Stopping - If the execution of a workflow group is manually stopped, the state will change from
Running or Suppressed to Stopping.
7. Aborted - If one of the members of the workflow group aborts, the workflow group will change
state from Running or Suppressed to Aborted once all of its members have finished execution.
See Section 4.2.3, “Workflow Group States” for further information about workflow group
states.
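The state changes in the numbered list above can be summarized as a transition table. This is a hedged model of the documentation's description, not product code, and the trigger names are ours:

```python
# (current state, trigger) -> next state, per the numbered list above.
TRANSITIONS = {
    ("Idle", "config invalidated"): "Invalid",
    ("Invalid", "config valid again"): "Idle",
    ("Idle", "import with hold"): "Hold",
    ("Hold", "import finished"): "Idle",
    ("Idle", "execution starts"): "Running",
    ("Running", "execution finishes"): "Idle",
    ("Running", "import with hold"): "Suppressed",
    ("Suppressed", "import finished (real-time)"): "Running",
    ("Suppressed", "all batch members finished"): "Idle",
    ("Running", "manual stop"): "Stopping",
    ("Suppressed", "manual stop"): "Stopping",
    ("Running", "member aborts"): "Aborted",
    ("Suppressed", "member aborts"): "Aborted",
}

def next_state(state: str, trigger: str) -> str:
    """Return the next workflow group state; unknown triggers leave the state unchanged."""
    return TRANSITIONS.get((state, trigger), state)
```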
5.5.6.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications for all state changes for all workflow groups.
Double-click on the field to open the Match Values dialog where you can click on the Add button to add
which values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard coded string or a regular expression.
The following fields are available for filtering Group State events in the Event Setup tab:
• groupName - This field enables you to select which workflow groups you want Group State event
notifications to be generated for.
• groupState - This field determines for which states you want Group State event notifications to
be generated. If the state for one of the matching workflow groups changes into any of the states
added for this field, a group state event notification will be generated.
The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:
• category - If you have configured any Event Categories, you can select to only generate
notifications for Group State events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.
• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string, e.g. the
state you are interested in (Idle, Running, Stopping, etc.). However, for Group State events, almost
everything in the content is available for filtering by using the other event fields, e.g. groupName,
groupState, etc.
• eventName - This field can be used for specifying which event types you want to generate
notifications for. This may be useful if the selected event type is a parent to other event types. However,
since the Group State event is not a parent to any other event, this field will typically not be used
for this event.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field. However, since
the Group State events are only issued from the Platform, this event field should typically not be
used for filtering.
• receiveTimeStamp - This field contains the date and time when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-06.*" to catch all Group State events from the 1st of June,
2014, to the 30th of June, 2014.
• severity - With this field you can determine to only generate notifications for state changes with
a certain severity; Information, Warning, Error or Disaster. For example, a state change from Idle
to Running will typically be of severity Information, while a state change to Abort state will typically
be of severity Error.
• timeStamp - This field contains the date and time when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" to catch all Group State events from 9:00 to 9:59 on the 15th
of June, 2014.
Note! The values of these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Figure 141.
• When the workflow group MyGroup, located in the Default folder, changes state to either
Aborted or Stopping, a Group State Event notification will be generated.
• When this Group State Event is generated an entry will be added in the groupStates table in
the database configured in MyDatabaseProfile with the following data:
• The workflow group name will be inserted in the wfg column in the database table.
• The state will be inserted in the state column in the database table.
• The timestamp from the EC will be inserted in the timestamp column in the database table.
Figure 142.
• When the workflow groups MyGroup and MySecondGroup, located in the Default folder,
change state between 01:00 and 01:59 on the 21st of June, 2012, a Group State
Event notification will be generated.
• When this Group State Event is generated a mail will be sent to the [email protected]
e-mail address with the following data:
• The subject will contain the following text: "Event Notification: GroupState".
• The message will contain the following text: "An event of type Group State has occurred
with the following contents: <the content of the event>".
• Name - The name of the workflow group that is mentioned in the eventMessage.
• Message - A textual description of the events that take place while systemimport -holdexecution
is executing.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• contents
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• contents
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
Message: Retrying to send event to listener <listener> at <host>. Retry count <attempt number>.
Message: The pico instance <pico instance> at host <host> was disconnected from platform by
<user>.
• When the Platform thread pool size has been set for workflows and workflow groups.
Message: The platform thread pool size for workflows and groups is set to <thread pool size>.
• If the mz.platform.wf.threadpool property cannot be parsed with default thread pool size.
Message: Failed to parse the <mz.platform.wf.threadpool> property. Using default size <default
thread pool size>.
• If the mz.platform.wf.threadpool property has been set to a value outside of valid range.
Message: The property <mz.platform.wf.threadpool> is outside the appropriate range, have set size
to <thread pool size>.
• If configuration data is missing for a workflow in a workflow group that is being loaded to the
GroupServer.
• If the GroupServer cannot register to the EventServer in order for workflow groups to be loaded or
updated.
Message: Group server is unable to register to the event server. Groups will not be loaded or updated.
Message: Group <workflow group> and all its members are stopping.
Message: Failed to load configuration for <workflow>, changing state to invalid state.
Message: Found one old version of the workflow <workflow> with session id <session id> running
on an ec. The workflow have been shut down..
• If a reconnect attempt to an unreachable workflow has failed and a new attempt is made.
• If a workflow that is supposed to be closed and killed is trying to communicate with the Platform.
Message: Warning, a presumed closed and killed workflow <workflow> tried to communicate with
the platform, ignoring the message.
Message: Warning an old workflow where found on ec <ec> when the workflow <workflow> where
to be started, the old one have been forced to stop.
• If a workflow is stopped.
• If trying to retrieve a list with valid workflow configurations, and failing to retrieve any of the
workflows.
Message: Unable to retrive workflows from the configuration <workflow>. Due to <cause>.
• If configurations are missing in a Suspend Execution configuration, see the Suspend Execution
User's Guide for further information.
Message: Warning, The following members of the suspend execution configuration where not found
and they where not <enabled/disabled>. <configurations>.
• If a Code Manager event occurs, see Section 5.5.3, “Code Manager Event” for further information.
• If a Redis HA event occurs, see Section 5.5.27, “Redis HA Event” for further information.
5.5.9.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a System event is generated.
Double-click on the field to open the Match Values dialog where you can click on the Add button to add
which values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard coded string or a regular expression.
The following fields are available for filtering System events in the Event Setup tab:
• systemMessage - This field contains the message appended to the System event, as described
in the previous section.
The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:
• category - If you have configured any Event Categories, you can select to only generate
notifications for System events with the selected categories. See Section 5.6, “Event Category” for further
information about Event Categories.
• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string. For
System events, the content consists of the text "System message:" followed by the system message
itself; see the description of the system message above.
• eventName - This field can be used for specifying which event types you want to generate
notifications for. This may be useful if the selected event type is a parent to other event types. However,
since the System event is not a parent to any other event, this field will typically not be used for this
event.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-04.*" to catch all System events from the 1st of April, 2014,
to the 30th of April, 2014.
• severity - With this field you can determine to only generate notifications for state changes with
a certain severity; Information, Warning, Error or Disaster. This may be useful to filter on if you
only want to view System events that generate Warnings for example.
• timeStamp - This field contains the date and time when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" to catch all System events from 9:00 to 9:59 on the 15th of
June, 2014.
Note! The values of these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Figure 143.
• When this notification is generated, a new log line will be added in the systemevent.txt file
located in the /home/MyDirectory/systemevent/ directory, with the following data:
Figure 144.
• When a System Event with a message containing the text "Warning" is registered, a System
Event notification will be generated.
• The message will contain the following text: "The following warning has been detected:
<the system event message>".
Whenever a notification with notification type Log File that uses external references is generated, the
System External Reference event is triggered.
5.5.10.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a System External Reference event
occurs. Double-click on the field to open the Match Values dialog where you can click on the Add
button to add which values you want to filter on. If there are specific values available, these will appear
in a drop-down list. Alternatively, you can enter a hard coded string or a regular expression.
The following fields are available for filtering System External Reference events in the Event Setup
tab:
Fields inherited from the Base event
The System External Reference event inherits all its fields from the Base event. These fields can be
used for filtering and are described in more detail in Section 5.5.1, “Base Event”:
• category - If you have configured any Event Categories, you can select to only generate
notifications for System External Reference events with the selected categories. See Section 5.6, “Event
Category” for further information about Event Categories.
• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string.
• eventName - This field can be used for specifying which event types you want to generate
notifications for. This may be useful if you have configured notifications for several different events with
notification type Log File using external references, and you only want notifications to be generated
for a specific event type.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-06.*" to catch all events from the 1st of June, 2014, to the
30th of June, 2014.
• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster.
• timeStamp - This field contains the date and time when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" to catch all events from 9:00 to 9:59 on the 15th of June, 2014.
Note! The values of these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Example 28. System External Reference Event Notification for Group State events
Figure 145.
• When a Group State event occurs, a Group State event AND a System External Reference
event notification will be generated
• When these notifications are generated, a log entry will be added in a file with a name coming
from an external reference, located in the /home/mydirectory/groupstate/ directory,
containing the contents of the events.
Figure 146.
• When a Group State event occurs, a Group State event AND a System External Reference
event notification will be generated
• When these notifications are generated, a log entry will be added in a file with a name coming
from an external reference, located in the /home/mydirectory/groupstate/ directory,
containing the following text:
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
Create operation
• Create Started - which is triggered when a workflow calls the tableCreateShared function.
• Create Finished - which is triggered when the shared table has been loaded from the database.
A Create operation will always consist of two actions; either Create Started and Create Finished, or
Create Started and Create Failed.
Refresh operation
• Refresh Started - which is triggered when a workflow calls the tableRefreshShared function or when
the Shared Table profile has been configured with a Refresh Interval.
• Refresh Finished - which is triggered when the shared table has been refreshed.
A Refresh operation will always consist of two actions; either Refresh Started and Refresh Finished,
or Refresh Started and Refresh Failed.
Released operation
The Released operation has only one action, i.e. to release the table when no references to the table
have existed for a certain time interval.
See Section 9.7, “Shared Table Profile” for further information about shared tables.
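The pairing rule for these operations can be sketched as a small model (hypothetical Python, purely illustrative; the product does not expose such an API):

```python
# Hypothetical model of the action pairing described above: every Create or
# Refresh operation is a Started action followed by exactly one terminal
# action; Released is a single standalone action.
OPERATIONS = {
    "Create":   ("Create Started",  {"Create Finished", "Create Failed"}),
    "Refresh":  ("Refresh Started", {"Refresh Finished", "Refresh Failed"}),
    "Released": (None,              {"Released"}),
}

def is_valid_sequence(operation, actions):
    """Return True if `actions` is a legal sequence for `operation`."""
    start, terminals = OPERATIONS[operation]
    if start is None:  # Released: one action, no Started/terminal pair
        return len(actions) == 1 and actions[0] in terminals
    return len(actions) == 2 and actions[0] == start and actions[1] in terminals

assert is_valid_sequence("Create", ["Create Started", "Create Finished"])
assert is_valid_sequence("Create", ["Create Started", "Create Failed"])
assert not is_valid_sequence("Refresh", ["Refresh Finished"])
```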
5.5.12.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a SharedTables event is triggered.
Double-click on the field to open the Match Values dialog where you can click on the Add button to
add the values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard-coded string or a regular expression.
The following fields are available for filtering SharedTables events in the Event Setup tab:
• actionType - With this field you can configure notifications to be sent only for certain actions.
Use regular expressions to filter on this field.
• agentName - This field contains the name of the agent issuing the action. In case a Refresh or
Create action is initiated based on the Refresh Interval setting in the Shared Tables Profile, this
field will be empty. You can use this field to specify notifications to be generated only for certain
agents. Use regular expressions to filter on this field.
• duration - The duration is the amount of time in milliseconds it takes to perform a Create or
Refresh operation, and this field is included in the Create Finished and Refresh Finished actions. If
you select to filter on this field, you can specify to only generate notifications for a certain duration.
This will also mean that notifications will only be generated for Create Finished and Refresh Finished
actions. Use regular expressions to filter on this field.
• errorMessage - In case a Create Failed, or a Refresh Failed action is triggered, this field will
contain an error message. If you select to filter on this field, you can specify to only generate noti-
fications for certain error messages, or just select to have notifications generated for actions containing
error messages. This will also mean that notifications will only be generated for Create Failed and
Refresh Failed actions. Use regular expressions to filter on this field.
• workflowName - This field contains the name of the workflow issuing the action. In case a Refresh
or Create action is initiated based on the Refresh Interval setting in the Shared Tables profile, this
field will be empty. You can use this field to specify notifications to be generated only for certain
workflows. Use regular expressions to filter on this field.
• workflowVersion - This field contains the version number of the workflow issuing the action.
In case a Refresh or Create action is initiated based on the Refresh Interval setting in the Shared
Tables profile, this field will be "0". Use regular expressions to filter on this field.
• rowCount - This indicates the number of rows that were created or refreshed in the database. Use
regular expressions to filter on this field.
• ShareTablesProfileName - This field contains the name of the Shared Tables profile issuing
the action. You can use this field to specify notifications to be generated only for workflows using
a certain SharedTablesProfile. Use regular expressions to filter on this field.
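Numeric fields such as duration can be filtered with a regular expression that acts as a threshold. As a sketch (an assumed example pattern, evaluated here with Python's re module; the product's exact matching semantics may differ):

```python
import re

# Assumed example: durations are reported in milliseconds, so a value with
# five or more digits is 10 seconds or longer. Full-string matching is
# assumed here for clarity.
long_duration = re.compile(r"[0-9]{5,}")

assert long_duration.fullmatch("12500") is not None  # 12.5 s: matches
assert long_duration.fullmatch("800") is None        # 0.8 s: no match
```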
The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:
• category - If you have configured any Event Categories, you can select to only generate notific-
ations for SharedTables events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.
• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string.
• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if the selected event type is a parent to other event types. However,
since the SharedTables event is not a parent to any other event, this field will typically not be used
for this event. However, if you have several different event types configured for generating notific-
ations in the same event notification configuration, it may be useful to include this field in the noti-
fication itself to differentiate between the event types.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to
enter a regular expression, for example, "2014-06.*" for catching all SharedTables events from 1st
of June, 2014, to 30th of June, 2014.
• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster. For the SharedTables event, the actions
Create Started, Create Finished, Refresh Started, Refresh Finished, and Released have severity In-
formation, and the actions Create Failed and Refresh Failed have the severity Error.
• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all SharedTables events from 9:00 to 9:59 on the 15th
of June, 2014.
Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Figure 147.
• When a Create Started, Create Finished, or Create Failed action occurs, a SharedTables event
notification will be generated
• When this notification is generated, an entry will be logged in the sharedtables.txt file located
in the /home/user/sharedtables folder, containing the following data:
Figure 148.
• When a SharedTables action is issued by any workflow using a Shared Tables Profile with a
name containing "SharedTableProfile", a SharedTables event notification will be generated
• When this notification is generated, a mail will be sent to the mail address mymail@my-
company.com, containing the following data:
• A message saying: A SharedTables event for a <action type> action has been issued by a
workflow using the <name of the Shared Table Profile>, at <the timestamp for when the
event was triggered> with the following content:.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
• workflowKey
• workflowName
• workflowGroupName
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
• workflowKey
• workflowName
• workflowGroupName
Not all agents can issue this sort of event. For further information, see the relevant agent user's guide.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
• workflowKey
• workflowName
• workflowGroupName
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
• workflowKey
• workflowName
• workflowGroupName
• agentState - The state of the agent. The following are available: Aborted, Active, Created, Idle,
Stopped.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
• workflowKey
• workflowName
• workflowGroupName
5.5.19.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a Diameter Peer State Changed event is
generated. Double-click on the field to open the Match Values dialog where you can click on the Add
button to add which values you want to filter on. If there are specific values available, these will appear
in a drop-down list. Alternatively, you can enter a hard coded string or a regular expression.
The following fields are available for filtering Diameter Peer State Changed events in the Event Setup
tab:
• peerName - The name of the Diameter peer for which the connection state has changed.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category - If you have configured any Event Categories, you can select to only generate notific-
ations for Diameter peer state changed events with the selected categories. See Section 5.6, “Event
Category” for further information about Event Categories.
• contents - This field contains a string with event specific information. If you want to use this
field for filtering you can enter a part of the contents as a hard coded string.
• eventName - This field can be used to specify which event types you want to generate notifications
for. This may be useful if the selected event type is a parent to other event types. However, since
the Diameter Peer State Changed event is not a parent to any other event, this field will typically
not be used for this event.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to
enter a regular expression, for example, "2014-06.*" for catching all Diameter Peer State Changed
events from 1st of June, 2014, to 30th of June, 2014.
• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster.
• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all Diameter Peer State Changed events from 9:00 to
9:59 on the 15th of June, 2014.
The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
The following fields are inherited from the Agent event, and described in more detail in Section 5.5.14,
“Agent Event”:
• agentName
Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Example 32. Diameter Peer State Changed event notification saved in a Log File
Figure 149.
• When this notification is generated, a new line with information will be logged in the diamet-
er_peer_state.txt file located in the /home/user/diameter folder, containing the following
data:
• The name of the peer for which the connection state has changed.
Example 33. Diameter Peer State Changed event notification saved in a database
Figure 150.
• When this notification is generated, an entry will be added in the diameterdynamic table in
the database configured in MyDatabase profile with the following data:
• The timestamp from the EC will be inserted in the timestamp column in the database table.
• The peer name will be inserted in the peer column in the database table.
• The new state will be inserted in the state column in the database table.
See Section 16.1, “Error Correction System” for further information about how data is inserted into
ECS.
5.5.20.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time data is inserted into ECS. Double-click on
the field to open the Match Values dialog where you can click on the Add button to add the values
you want to filter on. If there are specific values available, these will appear in a drop-down list. Al-
ternatively, you can enter a hard coded string or a regular expression.
The following fields are available for filtering ECS Insert events in the Event Setup tab:
• ecsMessage - With this field you can configure notifications to be sent only for certain messages
associated with the cancelBatch function. If UDRs are inserted, the message will be "None".
Use regular expressions to filter on this field.
• ecsMIM - This field enables you to create a regular expression based filter for specific MIM values,
i.e. notifications will only be generated for data containing the specified MIMs.
• ecsSourceNodeName - This field enables you to configure notifications to be sent only for insertions
made from specified agents. For batches, this will be the agent issuing the cancelBatch, while
for UDRs this will be the ECS Forwarding agent. Use regular expressions to filter on this field.
• ecsType - For this field you can select if you want notifications to be generated for only batches,
only UDRs, or both, i.e. All.
• ecsUDRCount - This field enables you to configure notifications to be sent only for batches
containing a certain amount of UDRs. Use regular expressions to filter on this field.
• agentName - This field enables you to configure notifications to be sent only for events issued from
specified agents. Use regular expressions to filter on this field.
The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:
• category - If you have configured any Event Categories, you can select to only generate
notifications for ECS Insert events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.
• contents - The contents field contains a hard-coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard-coded string. However,
for ECS Insert events, everything in the content is available for filtering by using the other event
fields, i.e. ecsMessage, ecsType, etc.
• eventName - This field can be used for specifying which event types you want to generate
notifications for. This may be useful if the selected event type is a parent to other event types. However,
since the ECS Insert event is not a parent to any other event, this field will typically not be used
for this event.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to
enter a regular expression, for example, "2014-06.*" for catching all ECS Insert events from 1st of
June, 2014, to 30th of June, 2014.
• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster. However, since ECS Insert events only
have severity Information, this field may not be very useful for filtering.
• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all ECS Insert events from 9:00 to 9:59 on the 15th
of June, 2014.
Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Figure 151.
• When a batch is inserted in ECS, an ECS Insert event notification will be generated
• When this notification is generated, a mail will be sent to the mail address my.mail@my-
company.com, containing the following data:
• A message saying: A batch has been inserted into ECS: <the ECS message>.
Figure 152.
• When this notification is generated, an entry will be added in the ecsInsert table in the database
configured in MyDatabase profile with the following data:
• The ECS Type, i.e. Batch or UDR, will be inserted in the tp column in the database table.
• The timestamp from the EC will be inserted in the time column in the database table.
5.5.21.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time the ECS_Maintenance system task is
executed. Double-click on the field to open the Match Values dialog where you can click on the Add
button to add the values you want to filter on. If there are specific values available, these will appear
in a drop-down list. Alternatively, you can enter a hard-coded string or a regular expression.
The following fields are available for filtering ECS Statistics events in the Event Setup tab:
• errorCodeCountNewUDRs - This field enables you to create a regular expression based filter for
UDRs in state New in order to only generate notifications for UDRs passing the filter. This may be
useful for specifying that notifications should only be generated for certain error codes and/or when
a certain amount of UDRs have been registered, for example.
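As an illustration only (the exact rendering of this field is not documented here; the "errorCode=count" format below is an assumption), a regular expression can combine an error code with a count threshold, in the spirit of the CriticalError example later in this section:

```python
import re

# Assumed field rendering "errorCode=count". This pattern matches the
# hypothetical code CriticalError with a count of 100 or more UDRs
# (a leading non-zero digit followed by at least two more digits).
pattern = re.compile(r"CriticalError=[1-9][0-9]{2,}")

assert pattern.search("CriticalError=250") is not None
assert pattern.search("CriticalError=42") is None
```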
The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:
• category - If you have configured any Event Categories, you can select to only generate notific-
ations for ECS Statistics events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.
• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string. However,
for ECS Statistics events, everything in the content is available for filtering by using the other event
fields, i.e. eventName, errorCodeCountForNewUDRs, etc.
• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if the selected event type is a parent to other event types. However,
since the ECS Statistics event is not a parent to any other event, this field will typically not be used
for this event.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to
enter a regular expression, for example, "2014-06.*" for catching all ECS Statistics events from 1st
of June, 2014, to 30th of June, 2014.
• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster. However, since ECS Statistics events only
have severity Information, this field may not be very useful for filtering.
• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all ECS Statistics events from 9:00 to 9:59 on the 15th
of June, 2014.
Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Figure 154.
• When the ECS_Maintenance system task is run, a notification will be generated if any UDRs
in New state with error code myErrorCode, or any UDRs in Reprocessed state, are detected.
• When this notification is generated, a new line with information will be logged in the error-
codenew.txt file located in the /home/myDirectory/ecs folder, containing the following data:
• The number of UDRs registered for each error code for UDRs in Reprocessed state.
Figure 155.
• When the ECS_Maintenance system task is run, a notification will be generated if more than
100 UDRs in New state with error code CriticalError, or any UDRs in Reprocessed state,
are detected.
• When this notification is generated, a mail will be sent to the mail address my.mail@my-
company.com, containing the following data:
• A message saying: At <the timestamp for when the event was generated> more than 100
UDRs with error code CriticalError were detected.
• The entire content of the notification will also be included in the message.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• contents
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
• workflowKey
• workflowName
• workflowGroupName
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• contents
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
• workflowKey
• workflowName
• workflowGroupName
• abortReason - Describes the abort reason for a state event of type aborted.
• workflowState - The new state of the workflow. Valid options are: Aborted, Executed,
Hold, Idle, Invalid, Loading, Running, Unreachable, Waiting.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
• workflowKey
• workflowName
• workflowGroupName
This event inherits all its fields from the Base and Workflow events.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• contents
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:
• workflowKey
• workflowName
• workflowGroupName
For example, you can configure a Supervision event to be generated when the throughput goes above
a certain value, or when the heap size goes above a certain level, etc.
See Section 4.1.8.5.2, “Supervision Service” for further information about configuration of Supervision
events.
5.5.26.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a Supervision event is generated.
Double-click on the field to open the Match Values dialog where you can click on the Add button to
add the values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard-coded string or a regular expression.
The following fields are available for filtering Supervision events in the Event Setup tab:
• action - With this field you can configure notifications to be sent only for certain actions. Actions
are configured in Action Lists for the Decision Tables you have created for the Supervision Service
in the Workflow Properties. Use regular expressions to filter on this field.
• cause - With this field you can specify to generate notifications only for events with certain de-
scriptions. The descriptions are added when configuring your actions for the Supervision Service.
See Section 4.1.8.5.2, “Supervision Service” for further information. Use regular expressions to
filter on this field.
• value - This field enables you to configure notifications to be sent only for events with a certain
content. The content is added when you configure your actions for the Supervision Service. See
Section 4.1.8.5.2, “Supervision Service” for further information. Use regular expressions to filter
on this field.
The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:
• category - If you have configured any Event Categories, you can select to only generate notific-
ations for Supervision events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.
• contents - This field contains the action type configured in the Supervision Service, i.e. Supervision
Event, and the cause, i.e. the name of the action, as well as the value.
• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if the selected event type is a parent to other event types. However,
since the Supervision event is not a parent to any other event, this field will typically not be used
for this event.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to
enter a regular expression, for example, "2014-06.*" for catching all Supervision events from 1st
of June, 2014, to 30th of June, 2014.
• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster. However, since Supervision events only
have severity Information, this field may not be very useful for filtering.
• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all Supervision events from 9:00 to 9:59 on the 15th
of June, 2014.
The following fields are inherited from the workflow event, and can also be used for filtering, described
in more detail in Section 5.5.13, “Workflow Event”:
• workflowGroupName - This field can be used for configuring Supervision event notifications
to be generated only for specific workflow groups. Simply select the workflow groups you want to
generate Supervision events for in the drop-down-list, or enter a regular expression.
• workflowKey - This field can be used for configuring Supervision event notifications to be
generated only for specific workflow keys. You can browse for the workflow keys you want to add, or
enter a regular expression.
• workflowName - This field can be used for configuring Supervision event notifications to be
generated only for specific workflow names. You can browse for the workflow names you want to add,
or enter a regular expression.
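A common pitfall with such name filters is that the pattern is anchored at the start of the name. As a sketch (the workflow name "Billing_" prefix is a hypothetical example, evaluated with Python's re module; the product's exact matching semantics may differ):

```python
import re

# Anchored pattern: only workflow names starting with "Billing_" match.
# "Billing_" is a hypothetical example prefix, not a product name.
prefix = re.compile(r"Billing_.*")

assert prefix.match("Billing_Daily") is not None
# No match: the prefix must appear at the start of the name.
assert prefix.match("Rating_Billing_Daily") is None
```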
Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Figure 156.
• When a set of conditions that has an associated event action in one of the decision tables
configured for the Supervision Service is met, a notification will be generated.
• When this notification is generated, a new line with information will be logged in the super-
vision.txt file located in the /home/user/supervision folder, containing the following data:
Figure 157.
• When a set of conditions that has an associated event action with a description containing
"High" in one of the decision tables configured for the Supervision Service is met, a notification
will be generated.
• When this notification is generated, an e-mail will be sent to the mail address
[email protected], containing the following data:
• A message saying: At <the timestamp for when the event was generated> a High Level
Supervision event was generated with the following contents:.
• The entire content of the notification will also be included in the message.
• eventMessage - Can be used for matching any text within the event message.
• eventType - This field determines which event types of the ones logged in System Log you are in-
terested in.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category
• contents
• eventName
• origin
• receiveTimeStamp
• severity
• timeStamp
For further information about how to configure the HA Redis events, see the System Administration
Guide.
You can also configure space action events to be notified when a space has been created, copied or
removed using Event Notification.
5.5.28.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which generates event notifications every time a Space Action event is generated. Double-click
on a field to open the Match Values dialog, where you can click on the Add button to add the
values you want to filter on. If there are specific values available, these will appear in a drop-down
list. Alternatively, you can enter a hard coded string or a regular expression.
The following fields are available to filter Space Action events in the Event Setup tab:
• actionType - With this field you can configure notifications to be sent for certain space actions.
You select the action type from the drop-down list: spacecreate done, spacecopy done
and spaceremove done.
• destinationSpaceName - The name of the destination space to which the content of the source
space is copied when a spacecopy command has been executed. To choose a specific space or
spaces, select the space from the drop-down box which includes all of your spaces.
• spaceName - The name of the space for which an action has occurred. To choose a specific space
or spaces, select the space from the drop-down box which includes all of your spaces. If the action
is a spacecopy, it is the source space name.
The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:
• category - If you have configured any Space Action categories, you can select to only generate
notifications for Space Action events with the selected categories.
• contents - This field contains a string with event specific information. If you want to use this
field for filtering you can enter a part of the contents as a hard coded string.
• eventName - This field can be used for specifying which event types you want to generate
notifications for. This may be useful if the selected event type is a parent to other event types.
• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.
• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to enter a
regular expression, for example, "2014-11.*" to catch all Space Action events from the 1st of
November, 2014, to the 30th of November, 2014.
• severity - With this field you can choose to only generate notifications for events with a
certain severity: Information, Warning, Error, or Disaster.
• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-11-13 09:.*" for catching all the Space Action events from 9:00 to 9:59 on the
13th of November, 2014.
Note! The values of these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.
Figure 158.
• When a space action is performed on any of your spaces, a notification will be generated.
• When a space action event is generated, an entry is added to the spaceactions table in the
configured database, containing the following data:
• The name of the space for which the action has occurred
event myEvent {
ascii addedField1;
int addedField2;
};
A user-defined event is of workflow type and therefore includes workflow-specific fields.
The basic fields are automatically included in myEvent, along with the typed-in fields. The fields
are populated in an agent that utilizes APL. Example code:
dispatchEvent( theEvent );
• category - A user defined category, as entered in the Event Categories dialog. If utilized, this
field is set manually in the APL code.
• origin - the IP address of the Execution Context the workflow issuing the event is running on.
• receiveTimeStamp - The date and time for when an event is inserted in the platform database.
This is the time used in for example the System Log.
• severity - The severity of the event. May be any of: Information, Warning, Error, or Disaster.
The default value is Information. If another severity is required, this field must be set manually in
APL to one of the strings: "Information", "Warning", "Error", "Disaster".
• timeStamp - The date and time taken from the host where the event is issued.
• The contents field - Workflow name: <Workflow name>, Agent name: <Agent
name>
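Taken together, the definition and the field descriptions above suggest the following minimal sketch of how myEvent might be populated and dispatched from APL. The udrCreate call and the assigned values are assumptions based on common APL conventions, not taken from this guide:

```
// Sketch only: create the user-defined event, populate the
// typed-in fields, and dispatch it. udrCreate is assumed to
// be available for instantiating the event type.
myEvent theEvent = udrCreate( myEvent );
theEvent.addedField1 = "example value"; // ascii field from the definition
theEvent.addedField2 = 42;              // int field from the definition
// severity defaults to "Information"; set it manually if another is needed:
theEvent.severity = "Warning";
dispatchEvent( theEvent );
```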
Figure 159. The Event Category window, Where a User Defined String is Specified to be Used
as Name for the Category Field in an Event.
When the Event Category is defined it is mapped against a Match Value in the Event setup tab. Then
the defined Event Category is used as a parameter in the APL code with the dispatchMessage
function.
6. Inspection
When workflows are executed, the agents may generate various kinds of data, such as logging errors
into the System Log, or sending erroneous data to the Error Correction System (ECS). The Inspectors
allow the user to view such information.
The Desktop standard menus and buttons are described in Section 2.3.2.1, “Desktop Standard Menus”
and Section 2.3.2.2, “Desktop Standard Buttons”.
When workflows are executed, the agents may generate various kinds of data. The following
MediationZone® Inspectors are available to analyze the data:
Apart from simply sending a UDR or batch to ECS, a workflow can be configured to associate
user-defined information with the ECS data. For the UDRs this is Error Code and MIM information. For
cancelled batches, the Error UDR and Cancel Message may contain user-defined information.
Note! Only a reference, not the data itself, is saved in the database. Physically, ECS data is
saved in the directory defined by the property mz.ecs.path found in
$MZ_HOME/etc/platform.xml. The default path is $MZ_HOME/ecs.
If the mz.ecs.path parameter is changed, the changes will take effect the next time data is
inserted into ECS. Existing ECS data is left at its current location and must not be moved. If
you are required to move it anyway, move the content of the old mz.ecs.path directory to the new one,
and create a soft link in the old directory pointing to the new location.
Note! Take special precautions when changing, updating or renaming formats. If the updated
format does not contain all the fields of the historical format, in which UDRs may already reside
in the ECS or Aggregation, data will be lost. When a format is renamed, it will still contain all
the fields. The data, however, cannot be collected.
Initially, the window is empty and must be populated with data using the Search ECS dialog. Depending
on whether the search is made for UDRs or batches, the columns in the table will differ. For further information,
see Section 6.7.5, “Searching the ECS”.
If you want to get quicker and smaller search results, for example, you can set this property to a lower
value.
Note! If the value of the property is set higher than the default value, it may result in poor
performance. Also, note that this property only has effect when UDRs are inspected and not when
batches are inspected.
Note! When increasing the above default limit, you might have to increase the MZ Platform
maximum heap size as well. This is done by changing the -Xmx parameter in the platform.xml
file. As a general rule, ECS requires approximately another 10 MB of Platform heap space for
every additional one million UDRs; for example, raising the limit by five million UDRs calls for
roughly 50 MB of extra heap.
6.7.1.3. Menus
6.7.1.3.1. General
The following menu items apply for both Batches and UDRs.
File menu
Edit menu
Delete Removes selected or (if no entries are selected) all matching entries, provided
that the RP State is set to Reprocessed. The ECS does not have to be purged
manually; the predefined cleanup task ECS_Maintenance handles automatic
purging. For further information, see Section 16.1.4, “ECS_Maintenance System
Task”.
Note! If the maximum number of entries that can be displayed in the table
has been exceeded, the delete operation will still be applied to all
matching entries.
Search... Displays the Search Error Correction System dialog where search criteria
may be defined to limit the entries in the list. For further information, see
Section 6.7.5, “Searching the ECS”.
Matching entries are bundled in groups of 500 in the table. This list shows which
group, out of how many, is currently displayed. An operation targeting all
matching entries will have effect on all groups.
Select All Selects all entries in the ECS Inspector.
Error Codes... Displays the ECS Error Code dialog where the Error Codes in the system may
be configured. For further information, see Section 6.7.7, “Error Codes”.
Reprocessing Groups... Displays the ECS Reprocessing Groups dialog, where reprocessing groups are
managed. For further information, see Section 6.7.8, “Reprocessing Groups”.
Searchable Fields... Opens the Searchable Fields dialog, where you can define specific UDR fields
that you want to add as meta data that can later be used for making searches,
see Section 6.7.2, “Configuring Searchable Fields in the ECS”.
Restricted Fields... Opens the Restricted Fields dialog, where you can specify certain fields within
certain UDR types that should be restricted from being updated in the ECS
Inspector, see Section 6.7.4, “Configuring Restricted Fields in the ECS”.
View menu
Batch/UDR menu
Set State... Defines the state of a selected number of entries or (if no entries are selected) all entries.
Possible states are New or Reprocessed (that is, collected by an ECS Collection
agent and reprocessed with errors). Already processed data can be reset to New to enable
recollection. When the state is changed, the timestamp in the Last RP State Change
column in the ECS Inspector will be updated. See Section 6.7.9, “Changing State” for
more information.
Note! If the number of matches is larger than the maximum number of UDRs to
be displayed, the state change will still be applied for all matching entries.
Assign to RPG... Assigns a selected number of entries or (if no entries are selected) all entries to a
reprocessing group. Grouped entries can be collected simultaneously by an ECS Collection
agent.
Delete Removes selected or (if no entries are selected) all matching entries, provided that the
RP State is set to Reprocessed. The ECS does not have to be purged manually; the
predefined cleanup task ECS_Maintenance handles automatic purging. For further
information, see Section 16.1.4, “ECS_Maintenance System Task”.
Note! If the maximum number of entries that can be displayed in the table has
been exceeded, the delete operation will still be applied for all matching entries.
UDR menu
Explore UDR... Displays the UDR Editor presenting the content of the selected UDR(s). The editor will also
be displayed if you double-click on a cell in the UDR Type column in the table of entries. In
the editor, the content of the UDR can be changed, except for the field Original Data.
For further information, see the MediationZone® Ultra Format Management User's Guide.
Set Tag on UDR(s)... With this option you can set a tag on selected UDRs. When you have selected the UDR(s)
you want to tag and then selected this option, a dialog will open asking you to enter the Tag
Name.
The Tag Name will then be visible in the Tag column in the ECS Inspector.
Clear Tag(s) on UDR(s)... If you select this option, the tags for the selected UDRs will be removed.
Bulk Edit... Several UDRs (selected or matched) may be edited simultaneously with the Bulk Editor.
The editor displayed from ECS differs slightly from the editor displayed from the UDR Editor
window, opened for example from the Explore UDR... dialog. The ECS Bulk Editor has a
Preview option, which makes it possible to preview the changes prior to approving and saving.
When changes have been made, you can select to view only the modified entries, only the
untouched entries, or all entries in the ECS Inspector.
Note! If the number of matches exceeds the maximum number of entries that can be
displayed, it may be a good idea to set up a workflow for editing the entries using APL
instead of performing a bulk edit.
When clicking on the Apply Changes button, the Bulk Edit Result dialog will open, displaying
modified and untouched entries.
Now you can select if you want to view only modified entries, untouched entries, or all entries
in the ECS Inspector by using either of the options Entire Result Set, Modified Only, or
Untouched Only. Click on the View button when you have made your selection.
The selected type of entries will then be displayed in the ECS Inspector. In the top right corner,
above the ECS Inspector table, you will see information about what selection you have made,
e.g. "Modified Entries from Bulk Edit".
Hint! If you want to view the changes before applying them, you can click on the
Preview button instead. The Bulk Edit Result - Preview dialog will then open, giving
you a preview of the changes that are about to be made. If you are satisfied with the
preview, click on the Apply button, and the Bulk Edit Result dialog will open.
For further information about the Bulk Editor, see the MediationZone® Ultra Format
Management User's Guide.
Batch menu
Explore Error UDR... Displays the Error UDR Viewer presenting the content of the Error UDR (if any). The
viewer will also be displayed if you double-click on a cell in the Error UDR column in
the table of entries.
Note! These configurations have to be made before UDRs are sent to the ECS by the ECS
Forwarding Agent.
2. Enter a name for the label in the Label field and click on the Add button.
3. Repeat the previous step for all the labels you want to add, and then click on the Close button to
close the dialog.
4. Click on the Mappings tab to map UDR fields to the different labels.
6. Select the UDR type you want to add and click OK.
The UDR type is added in the UDR Types field and the UDR Browser is closed.
7. Repeat the previous step for all the UDR types you want to add.
8. Select a UDR type in the UDR Types field, and double click on the UDR Field column for a label
you want to associate a UDR field with.
The selected field is listed in the UDR Field column for the label and the dialog is closed.
10. Repeat the previous step for all the UDR Types you want to map fields from.
Note! This enables you to map one field from each UDR type to a certain label.
11. Click on the Save button when you are finished with your configuration.
The configuration is saved, and the next time the ECS receives UDRs from an ECS Forwarding
Agent, the configured UDR fields are added as meta data and can later be used for making searches,
see Section 6.7.5, “Searching the ECS”.
The configuration is available to all users having access to the ECS Inspector in MediationZone®.
However, only users belonging to the administrator group are allowed to change the configuration, i.e.
for all other users the configuration is available in read-only mode.
Restrictions can be set on any UDR type, also within sub-UDRs. The restrictions are applied recursively,
so if you have restrictions on a UDR field of a certain type, all fields below this will be blocked from
editing as well.
The restrictions defined in this configuration are valid only in ECS, i.e. the UDRs can still be
modified outside ECS (unless they have been explicitly defined as read-only in the UDR definition).
It is possible to import and export the restricted fields configuration if needed. The configuration is
located in the configuration tree under System->ECS->Restricted Fields.
Note! The access rights described above apply also to import and export of the configuration.
This means that any user can export the restricted fields configuration (as long as they have ECS
Inspector access), but only members of the administrator group may import the configuration.
Note! Only users belonging to the administrator group are allowed to configure restricted fields.
However, all users can view the configured restrictions.
1. Click on the Add button beneath the Restricted UDR Types section.
2. Select the UDR Type for which you want to restrict fields from being edited and click on the Apply
button.
3. Repeat the previous step for all the UDR Types you want to add, and then click on the OK button
to close the dialog.
4. Select one of the UDR Types that you have added and click on the Add button beneath the Restricted
Fields section.
5. Select a field you want to restrict from being edited in the UDR and click on the Apply button.
6. Repeat the previous step for all the fields you want to restrict from being edited, and then click on
the OK button to close the dialog.
7. When you are finished, click on the Save button in the ECS Restricted Fields dialog to save your
settings.
The configured fields will now be blocked from editing in the ECS Inspector.
The following search options are available for both UDRs and batches in ECS:
Saved filters This field contains any saved filters you may have created. For more
information about how to create a filter, see Section 6.7.5.2, “Saving Search Settings
as a Filter”.
Workflow The name of the workflow that sent the entry to ECS.
Agent The name of the agent that sent the entry to ECS.
Error Code An Error Code that has been defined in ECS Inspector. See Section 6.7.7,
“Error Codes” for further information.
Error Case A list displaying the Error Cases associated with the selected Error Code. If
the entry is too long to fit in the field, the field can be expanded by enlarging
the ECS Inspector in order to display the entire error case text.
An Error Case is a string, associated with a defined Error Code. Error Cases
can only be appended via APL code:
udrAddError( <UDR>,
<Error Code>,
<Error Case> );
Note! When Batch is selected, the <UDR> parameter is the error UDR.
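As a hedged illustration of the call above, the Error Code and Error Case strings below are invented for the example; any Error Code used this way must already be defined in the ECS Error Code dialog:

```
// Sketch: append an Error Case string to the current UDR under
// a previously defined Error Code. "MISSING_B_NUMBER" and the
// case text are hypothetical, not taken from this guide.
udrAddError( input, "MISSING_B_NUMBER",
             "B-number field was empty for this call record" );
```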
Insert Period Use this search option to search for UDRs/batches that were inserted into ECS
during a specific time period, either by specifying a start and end time, or by
using any of the predefined intervals, e.g. today, this week, etc.
Reprocessing Group Contains a list of all reprocessing groups.
Unassigned (UDR/Batch) will list all entries not associated with any
reprocessing group.
Reprocessing State The entry state, which can be New, or Reprocessed. Only entries in state New
may be collected.
The following search options are only available when searching for Batches in ECS:
Cancel Agent The name of the agent that cancelled the batch.
Cancel Message The error message that was sent as an argument with the cancelBatch
function.
Error UDR Type The type of Error UDR that can optionally be sent with a batch, containing
important MIM information (or any other desired information when the UDR is
populated via APL).
The following search options are only available when searching for UDRs in ECS:
Example 41.
Figure 173.
Only UDRs with IMSI 2446888776 will be displayed in the ECS Inspector,
provided that the label IMSI has been mapped to the IMSI field in the UDRs.
Wildcards and intervals can also be used when entering the values for the fields; "*"
can be used to match any or no characters, and intervals can be set by using
brackets "[ ]".
• Only one wildcard and one interval can be used per value.
• If the interval consists of the same number of digits in the start and end value, the
match will be made on that number of digits, e.g. (a[001-002]) will match a001 but
not a01.
• If the interval consists of a different number of digits in the start and end value, the
match will be made on an appropriate number of digits in the UDR, e.g. (a[1-20]) will
match a1 and a20 but not a01 or a020.
• The start value cannot have more digits than the end value, e.g. [0001-3] is not
allowed.
If a setting for a value is not correct, an error message will be displayed as a tooltip.
In order for a UDR to pass the filter, all the defined values have to match.
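To illustrate the rules above, a few hypothetical values and what they would match (the values themselves are invented for this example):

```
a*z         matches az, abz and a123z
a[001-002]  matches a001 and a002, but not a01 (fixed number of digits)
a[1-20]     matches a1 and a20, but not a01 or a020
[0001-3]    not allowed: the start value has more digits than the end value
```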
Example 42.
Figure 174.
will be displayed.
Warning! If you have included search criteria that refer to parameters that have been defined
in your system, such as error codes, tags, search fields, etc., these filters will not work properly
if you delete any of the defined parameters.
To save a filter:
1. Set the search options you want to have and click on the Save... button.
2. Enter a name in the ECS Filter Name field and click OK.
The dialog will close and the new filter will appear in the Saved Filters field.
The next time you want to use the same search settings, click on the filter name in the Saved Filters
field and the saved search settings will be displayed.
Hint! Any saved filters can be renamed or deleted by selecting the filter and then clicking on
the Rename or Delete buttons.
Note! The ECS Inspector caches the result when the user populates a list (for instance the Error
Codes). This is done to avoid unnecessary population of workflow names, agent names and error
codes since it is costly in terms of performance. You have to click on the Refresh button in order
to repopulate the search window.
6.7.6.1. Columns
The following columns are available in the ECS Inspector Table:
Example 43.
cancelBatch("undefined_number_prefixes.");
Example 44.
The error UDR format is defined like any other format in the Ultra Format
Editor.
internal myErrorUDR{
long noOfUDRs;
};
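Building on the cancelBatch call and the error UDR format above, a hedged sketch of cancelling a batch with a populated error UDR. The two-argument cancelBatch variant, the udrCreate call and the counter variable are assumptions, not taken from this guide:

```
// Sketch: populate the error UDR defined above and attach it
// when cancelling the batch.
myErrorUDR errUdr = udrCreate( myErrorUDR );
errUdr.noOfUDRs = udrCounter; // hypothetical counter kept by the agent
cancelBatch( "undefined_number_prefixes.", errUdr );
```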
RP Group Shows the reprocessing group that the entry is assigned to, if any. Assignments can
be made both manually and automatically. In the latter case, an Error Code must be
mapped to a reprocessing group.
RP State Initially, an entry has the reprocessing state New, that is, the entry has not been
reprocessed. In order for it to be collectible, it has to be assigned to a reprocessing group.
When collected by an ECS Collection agent, the state is changed to Reprocessed.
Note! Only entries in state New may be collected by the ECS Collection agent.
The state can manually be changed back to New if this is necessary. Only entries
set to Reprocessed can be removed.
MIM Values Double-clicking this field will display a new window, listing the MIM values. MIM
values to be associated with the entry are configured differently for the two types of
entries:
Tags This column is only available when viewing UDRs and will display any tags that have
been set on the UDRs.
Last RP State Change This column displays the timestamp for when the reprocessing state was last changed.
The first time a UDR is sent to ECS, it will be in reprocessing state New, and this
column will display the timestamp for when the UDR was inserted into ECS. When
the UDR is collected for reprocessing, or if the state is changed manually in the ECS
Inspector, this column will be updated with the current timestamp.
<search field label(s)> The labels for any search fields you may have configured will be displayed as
individual columns. These will only be available when viewing UDRs.
See Section 6.7.2, “Configuring Searchable Fields in the ECS” for further information
about configuring searchable fields.
Warning! If you remove a tag that you have specified in the filter, the filter will not work properly.
1. After having populated the ECS Inspector, select the UDRs you want to tag, click on the UDR
menu and select the Set Tag on UDR(s)... option.
2. Enter the tag name in the Tag Name field and click OK.
Hint! If you change your mind, the set tags can be removed by selecting the option Clear
Tag(s) on UDR(s) in the UDR menu.
4. Select the Tag check box and enter the tag you want to search for in the field to the right of the
check box.
5. Click on the Save As... button beneath the Saved Filters field.
6. Enter a name in the Saved Filter Name field and click OK.
The dialog will close and the new filter will appear in the Saved Filters field.
The next time you want to view the tagged UDRs, select the saved filter setting when making your
search and only the tagged UDRs will be displayed in the ECS Inspector.
There are two predefined Error Codes within the system, AGGR_UNMATCHED_UDR and
DUPLICATE_UDR, which are automatically set by the Aggregation and Duplicate UDR Detection agents when
the corresponding error condition is detected. All other Error Codes are defined by the user.
Apart from being accessible in the ECS Inspector, the error codes will also be used in ECS Statistics,
see Section 6.8, “ECS Statistics” and Section 5.5.21, “ECS Statistics Event”.
Note! Several Error Codes can be attached to the same UDR. This will affect the ECS Statistics
output. For further information, see Section 6.8.1.2, “Error Code Search”.
To create an Error Code, select Error Codes... from the Edit menu. This will display the ECS Error
Code dialog.
Selecting Add will open the Add ECS Error Code dialog. This is where assignments of new Error
Codes are made.
Error Code The Error Code that will be attached to UDRs or batches.
Description A description of the error code.
RP Group The reprocessing group that the Error Code will be assigned to.
A user may send optional information to the ECS from an Analysis or an Aggregation agent, as long
as an Error Code has been defined. To this Error Code, any information may be appended using APL.
See the example below.
Example 45.
In this example the "CALL ID ERROR" is defined in the ECS Error Code dialog, found in
the Edit menu in the ECS Inspector.
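The example code itself is not reproduced here; a minimal sketch of what such APL code might look like, assuming the udrAddError call shown earlier and an invented callId field:

```
// Sketch: attach the "CALL ID ERROR" Error Code with an explanatory
// Error Case string. The callId field and case text are hypothetical.
if ( input.callId == "" ) {
    udrAddError( input, "CALL ID ERROR",
                 "Call id missing in record" );
}
```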
Note! To clear the errors for a UDR, use the udrClearErrors function. For further
information, see Example 153, “Reassigning to a Different Reprocessing Group”.
To create a reprocessing group, select Reprocessing Groups... in the Edit menu of the ECS Inspector.
Click on the Add button to display the Add ECS Reprocessing Group dialog. The reprocessing group
must have a unique name.
The Error UDR Type is only applicable for the Batch Data Type. If no Error UDR is to be used in
reprocessing, this information is not required.
Note! UDRs with several Error Codes mapped to different reprocessing groups cannot be
automatically assigned to a reprocessing group. They must be assigned manually.
Note! If the number of matches is larger than the maximum number of UDRs to be displayed,
see Section 6.7.1.1, “Maximum Number of Displayed UDR Entries”, the state change will still
be applied for all matching entries.
1. If you only want to change the state for a few of the entries, select the entries in the table;
otherwise, leave all entries unselected to apply the state change to all matching entries. Then
select the Set State... option in the UDR menu.
The Set State dialog opens where you can see the total number of entries that will be affected.
Note! If the number of matching entries exceeds the maximum number of entries that can
be displayed in the ECS Inspector, the dialog will only tell you that all matching entries will
be affected. If you proceed, another dialog will open up informing you about the total number
of entries that will be affected, asking you if you want to continue.
2. Select the state you want to set the entries to in the Select state: list and click OK.
If the number of matching entries exceeds the maximum number that can be displayed,
you will be asked if you want to continue.
If the number of matching entries exceeds the maximum number of entries that can be displayed,
a progress bar will show the progress of the state change. This may be useful if you are changing
the state for a large number of entries.
Otherwise, the state will simply be changed in the table and the timestamp in the Last RP State
Change will be updated.
Note! The ECS Statistics data is gathered and calculated by a system task called
ECS_Maintenance, see Section 16.1.4, “ECS_Maintenance System Task” for more information. If you
want to change the scheduling of the task, this is done in the configuration for
ECS_Maintenance_grp.
If the ECS_Maintenance system task is scheduled to be executed with a time interval of less
than an hour, the statistical data will be gathered every hour.
Initially the Error Correction System Statistics window is empty. It is populated by performing a
search.
File menu Export... Shows the Export dialog, allowing the statistics to be exported.
File menu Print... Shows the Print dialog, allowing the statistics to be printed.
Edit menu Search... Shows the Search ECS Statistics dialog. For further information, see
Section 6.8.1, “Searching the ECS Statistics”.
If no limitations are entered in the search dialog, a basic search is performed. For further information,
see Section 6.8.1.1, “Basic Search”.
Data Type Determines whether the search will be made for Batches or UDRs.
Error Code A list of available Error Codes, as defined in ECS. To search for several Error Codes,
select the Add button to append further fields.
Period Refines the search by setting a time period when the data was entered into ECS.
Note! Only 100 000 entries at a time can be browsed. If the search results in more than 100 000
entries, bulk operations must be repeated for each multiple of 100 000.
When selecting one row in the table, the spread of Error Codes is displayed in a pie chart. If up to four
Error Code types are named for that date, these will all be shown in the graph. If five or more Error Code
types are present, the three most common Error Code types will be shown, and the rest will be grouped
in the category "Other".
Date The date and time when the values were calculated.
New The number of new errors currently in ECS on the given date.
Reprocessed The number of reprocessed errors currently in ECS on the given date.
Value Type Enables you to display graphical statistics for either New or Reprocessed
UDRs or batches separately.
Error Code This column is only visible when the search is made on Error Codes.
Newest The last time the error occurred.
Oldest The first time the error occurred.
Error Code Report One line in the graph shows the number of UDRs with the selected
Error Code attached.
7. Tools
MediationZone® provides different Tools to, for example, view logs, statistics, and pico instance
information, and to import and export configurations.
This section describes all MediationZone® Tools, except for the Ultra Format Converter and the
UDR File Editor. For further information about the Ultra-specific tools, see the MediationZone®
Ultra Management User's Guide.
The Desktop standard menus and buttons are described in Section 2.3.2.1, “Desktop Standard Menus”
and Section 2.3.2.2, “Desktop Standard Buttons”.
Note! Only members of the Administrator group have access to the Access Controller; hence
only administrators may add users to the system. Only one user may use the Access Controller
at a time.
To open the Access Controller, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select Access Controller from the menu.
It is recommended that the password for mzadmin be changed and kept in a safe place. Instead,
personal accounts should be created and used for handling the system in order to track changes.
To Add a User:
1. Open the Users tab.
2. From the Access Controller main menu select File and then Add.
For details of how to change your password see Section 2.3.2.1.1, “The File Menu”.
To add a new group to the system, select the Access Groups tab and then select Add from the File
menu or from the toolbar.
Application Category A drop-down menu that allows the user to filter on application type. Options are
All, Configuration, Inspection, Tools, or Web interface.
Select All Enables Write (if applicable) and Execute for all permissions in the chosen
category.
Deselect All Disables Write and Execute for all permissions in the chosen category.
For information about how to modify configuration permissions, see Section 7.3, “Configuration
Browser”.
If the external authentication server returns an error or cannot be accessed, MediationZone® will perform
the authentication internally as a fallback method.
Note! Configuration performed from the Users Tab has no impact on external authentication
servers.
7.2.3.1. Preparations
This section can be ignored if authentication is to be performed by MediationZone®.
The LDAP directory that is used for authentication must conform to the following requirements:
1. The cn attribute of group entries must match an access group defined in MediationZone® .
Note! MediationZone® performs case sensitive comparisons of the cn attributes and access
groups.
2. For each user in a group entry, the memberUid attribute must be set.
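As an illustration, a group entry that meets both requirements might look like the following LDIF sketch. The group name and user IDs are hypothetical; the cn value must match an access group defined in MediationZone®, and each memberUid must match a user name:

```ldif
dn: cn=wfadmins,ou=groups,dc=digitalroute,dc=com
objectClass: posixGroup
cn: wfadmins
memberUid: jdoe
memberUid: asmith
```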
The following steps are required before configuration of authentication with LDAPS or LDAP over
TLS:
1. Obtain the server certificate for the authentication server from your LDAP administrator.
2. Start a command shell and copy the server certificate to the platform host.
7.2.3.2. Configuration
To set up the authentication method, open the Authentication Method tab and fill in the details
according to the description below.
Authentication The authentication method to be used. The following settings are available:
Method
• Default
• LDAP
ldap://ldap.example.com:389
ldaps://ldap.example.com:636
Try Connection Tests the connection to the authentication server. LDAP attributes and settings
other than the URL are not used when testing the connection.
User Base DN The LDAP attributes for user lookups in the external authentication server. The
substring %s in this value will be replaced with the Username entered at login to
produce an identifier that is passed to the LDAP server.
uid=%s,ou=users,dc=digitalroute,dc=com
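The substitution itself is straightforward. As a sketch (the DN template is the example value above; the username is hypothetical):

```python
# Sketch of how a User Base DN template with %s expands at login.
# The template and username below are illustrative, not from a live system.
def expand_user_dn(template: str, username: str) -> str:
    return template.replace("%s", username)

dn = expand_user_dn("uid=%s,ou=users,dc=digitalroute,dc=com", "jdoe")
print(dn)  # uid=jdoe,ou=users,dc=digitalroute,dc=com
```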
Group Base DN The LDAP attributes for group lookups in the external authentication server.
ou=groups,dc=digitalroute,dc=com
The names of the groups must be identical to the names configured in Access
Groups.
TLS Enables Transport Layer Security.
Note!
• The URL must contain a fully qualified DNS name or the authentication
will fail.
The selected authentication method becomes effective when the configuration is saved.
Note! Authentication for the user mzadmin is always performed by MediationZone® regardless
of the selected authentication method.
The default value for allowed password age is set to 30 days for administrators and 90 days for users.
These limits can be modified in the file platform.xml.
mz.security.max.password.age.admin
mz.security.max.password.age.user
The values are set in days and are only valid in combination with the user.control property.
Note! The enhanced user security settings are not applicable when using LDAP as authentication
method.
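The exact XML layout of platform.xml is not shown in this document; as an illustration only, the two properties might be set like this (values in days, matching the defaults stated above):

```xml
<!-- Illustrative sketch only; the element layout may differ per installation. -->
<property name="mz.security.max.password.age.admin" value="30"/>
<property name="mz.security.max.password.age.user" value="90"/>
```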
• Include at least one special character and one that is either a number or a capital letter
• Not be identical to any of the twelve (minimum) most recent passwords used for the user ID
The default password age is 30 days for administrators and 90 days for users.
When an administrator creates a new user, a password should be assigned to the user. When the account
is used for the first time, the user is prompted to change the password.
Note! Three failed login attempts will disable the user account. If this happens contact your
system administrator.
Note! The enhanced security password rules are not applicable when using LDAP as authentication method.
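The visible rules above can be sketched as a simple check. This is an illustration only; the product may enforce additional rules (for example a minimum length) that are not listed here:

```python
import string

# Illustrative sketch of the password rules described above.
# recent: previously used passwords, most recent last (hypothetical input).
def password_ok(password: str, recent: list) -> bool:
    has_special = any(c in string.punctuation for c in password)
    has_digit_or_upper = any(c.isdigit() or c.isupper() for c in password)
    not_reused = password not in recent[-12:]  # twelve most recent passwords
    return has_special and has_digit_or_upper and not_reused

print(password_ok("s3cret!pass", []))    # True
print(password_ok("plainpassword", []))  # False: no special character
```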
From the Configuration Browser you can also open the Configuration Tracer. In the Configuration
Tracer you can see both active configurations and historical ones.
To open the Configuration Browser, click the Tools button in the upper left part of the Medi-
ationZone® Desktop window, and then select Configuration Browser from the menu.
Note! When using the default authentication method, configurations created by LDAP authenticated
users may not appear in the Configuration Browser. To make these configurations visible,
change the owner in Properties under the right-click menu of the Configuration
Navigator. The new owner must be listed in the Users tab of the Access Controller.
7.3.1. Menus
This section describes the different menus in the Configuration Browser and their respective options.
View/Edit Configuration(s)... Available when at least one configuration is selected in the browser.
Select this option to open the selected configuration.
Export Configuration(s) Available when at least one configuration is selected in the browser.
Select this option to export the selected configurations. The System Exporter
window will open with the configurations pre-selected.
Cut Select this option to put one or more configurations on the clipboard for
moving the configuration to another location. Select the menu option Paste
in the folder where the configurations should be stored.
This option is not applicable if the configuration is locked. For further
information see Section 2.1.2, “Locks”.
Copy Select this option to put one or more configurations on the clipboard for
copying the configurations to another location. Select the menu option Paste
in the folder where the copied configurations should be stored.
Paste Select this option to store configurations that have been cut or copied to the
clipboard into a folder.
Delete... Select this option to delete the selected configuration(s). If the configuration
is referenced by another configuration, a warning message will be displayed,
informing you that you cannot remove the configuration. For further information
see Section 7.3.5.3, “The References Tab”.
Rename... Select this option to change the name of the selected configuration. Take
special care when renaming a configuration. If, for example, an APL
script is renamed, workflows that use this script will become invalid.
This is especially important when renaming folders containing many
Ultra format or APL configurations; renaming such a folder will make
all referring configurations invalid.
Encrypt... Select this option to encrypt the selected configurations.
Decrypt... Select this option to decrypt the selected configurations.
Validate... Select this option to validate the configuration. A validation message will
be shown to the user.
Properties Select this option to launch the Properties dialog for the selected
configuration. For further information, see Section 7.3.5, “Properties”.
Configuration Tracer Select this option to launch the Configuration Tracer. For further
information, see Section 7.3.4, “Configuration Tracer”.
Filter Configurations Select this option to open the Filter Configurations dialog:
• From the Types tab you select the configurations that you want to see in
Configuration Browser
• From the Owners tab you select the owners whose configurations you want
to see in Configuration Browser
Configuration Types This menu option contains a sub menu with all the MediationZone® configuration
types and allows the user to filter the current view in the Configuration
Browser to only display configurations of certain types.
Owners This menu option contains a sub menu with all the MediationZone® users
and allows the user to only display configurations that are owned by certain
users.
Each folder listed in the folder pane has a number attached to its name. This number indicates how many
configurations are stored in that folder. The number changes when a filter is used, which makes
it easy to see which folders contain configurations of a specific type.
Column Description
Type Contains an icon representing the application type.
Name Displays the name of the configuration.
Lock Indicates whether the configuration is locked or not.
Perm Displays the permissions granted to the current user of the configuration. Permissions
are shown as R (Read), W (Write) and X (eXecute). If the configuration is encrypted,
an E will also be added. For further information about permissions, see
Section 7.3.5.2, “The Permissions Tab”.
Owner Displays the username of the user that created the configuration. The owner can:
• Modify the permissions of user groups to read, modify, and execute the configur-
ation.
Modified By Displays the username of the user that made the last modifications to the configuration.
Modified Date Displays the date when the configuration was last modified.
The Configuration Tracer also provides you with the unique identification key that the system gives
every configuration.
7.3.4.1. Menus
Edit Menu
View/Edit Configuration... This option is enabled when in Active mode, and when a Configuration
is selected. When you select a Configuration it opens in a tab.
View Menu
Active Active mode will display the same configurations as those displayed in the Configuration
Browser.
Historic Historic mode will display configurations that have been removed from the system. The
user may select to restore such a configuration.
Refresh Select this option to refresh the information in the table.
7.3.4.2. Table
The table in the Configuration Tracer contains the following columns:
7.3.5. Properties
To open the Properties dialog, either right click on a configuration, or select a configuration, click on
the Edit menu and select the Properties option.
This dialog contains four different tabs: Basic, which contains basic information about the
configuration; Permission, where you set permissions for different users; References, where you can see
which other configurations are referenced by the selected configuration, or refer to it;
and History, which displays the revision history for the configuration. The Basic tab is
displayed by default.
• Modify the permissions of user groups to read, modify, and execute the configuration.
Modified by Displays the user name of the user that made the last modifications to the configuration.
Modified Displays the date when the configuration was last modified.
If you want to use the information somewhere else, you can highlight it and press CTRL-C
to copy it to the clipboard.
As access permissions are assigned to user groups, and not individual users, it is important to make
sure that the users are included in the correct user groups to allow access to different configurations.
R W X E Permission Description
R - - - Allowed only to view the configuration, given that the
user is granted access to the application.
- W - - Allowed to edit and delete the configuration.
- - X - Allowed only to execute the configuration.
R W - - Allowed to view, edit and delete the configuration, given
that the user is granted access to the application.
- W X - Allowed to edit, delete and execute the configuration.
R - X - Allowed to view and execute the configuration, given
that the user is granted access to the application.
R W X - Full access.
- - - E Encrypted.
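The table can be read as a simple mapping from flags to capabilities; as a sketch (the function name and string input are illustrative, not part of the product):

```python
# Sketch: interpret a permission string such as "RWX", as shown in the
# Perm column. The rules mirror the table above; "E" only marks encryption.
def describe(perm: str) -> list:
    caps = []
    if "R" in perm:
        caps.append("view")
    if "W" in perm:
        caps += ["edit", "delete"]
    if "X" in perm:
        caps.append("execute")
    if "E" in perm:
        caps.append("encrypted")
    return caps

print(describe("RWX"))  # ['view', 'edit', 'delete', 'execute']
print(describe("E"))    # ['encrypted']
```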
The References tab contains two sub tabs: Used By, which displays all the configurations that use
the current configuration, and Uses, which displays all the configurations that the current configuration
uses.
If you want to edit any of the configurations, you can double click on it and it will be opened in a tab.
If you want to clear the history for the configuration, click on the Clear Configuration History button.
The version number will not be affected by this.
To open the Configuration Monitor, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select Configuration Monitor from the menu.
Delete Select this option to delete the selected configuration(s) from the list.
Details Select this option to display details about warnings that have occurred. For more information
regarding the details, see Section 7.4.3, “Details”
If Show all Operations is selected, operations from all users will be shown.
Columns Description
User Name Specifies which user and desktop host initiated the operation.
Operation Name Specifies the operation that is executing.
Progress The progress of the steps for each operation is shown in this column. For example,
if an Ultra format has been saved and three workflows are dependent on this format,
the save operation would consist of four steps.
7.4.3. Details
To display the details for an operation, select the operation in the Configuration Monitor table and
click on the Details button.
The displayed details are divided into two parts. The first section lists the dependent configurations that
have changed their states between invalid and valid, and the second part contains a selectable list of
exceptions that occurred during compilation. An exception and its stack trace can be viewed by
selecting the exception and clicking the View Trace button.
To generate documentation on the configurations in the system, you must select the Output Target
directory in which you want to generate the documentation. To select a directory click the Browse...
button, select the target directory, and click the Save button. Click the Generate button. You can then
open the generated HTML file (index.html) in your web browser from the selected target directory.
Section Description
Workflow An image of the configuration. This section is only included for workflow configurations.
Globals The variables and constants that are declared globally. This section is only included for
APL Code configurations.
Functions The APL functions. This section is only included for APL Code configurations.
Description The content provided by the user in the configuration profile, using the Documentation
dialog. For example, you can provide a description and the purpose of the configuration
in the dialog.
For further information on how to populate this section, see Section 2.3.3.3, “Documentation”.
Uses A list of all the configurations that the configuration uses, for example, APL code or
Ultra format.
Used By A list of all the configurations that use the configuration.
To open the Execution Manager, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select Execution Manager from the menu.
The Execution Manager view is made up of three tabs: Overview, Running Workflows, and View.
You open a Detail View tab for every workflow group, or groups, that you want to monitor.
You can clear the messages in the Status box by right-clicking in it and then clicking Clear Area.
Column Description
Name The workflow group name
Mode If the workflow group is Enabled it can be activated by its Scheduling Criteria.
If it is Disabled, the workflow group can only be started manually.
State The workflow group current state
Runtime Info Specific information about the workflow group state
Started By The workflow or workflow group could have been started by either one of
the following:
• A user
Entry Description
Open in Detail View Opens the selection in a separate tab.
View Abort Message Opens an error dialog box that specifies the reason for aborting the execution of
the particular workflow group or its workflow member.
Start Triggers the execution of your selection
Stop Stops the execution of your selection
Enable/Disable Select if the workflow group should be enabled. This value overrides the one
in the workflow groups table which is described in Section 7.6.1.2, “The
Workflow Groups Table”
Open in Editor Opens a workflow configuration or a workflow group in a new tab in Desktop.
Search Opens a Search and Filter bar at the bottom of the Execution Manager. Enables
you to search and filter the list of workflows in the workflow groups table.
Using all lower case letters in the search and filter text field will result in case
insensitive search and filtering. If upper case letters are used anywhere in the
text field the search will be case sensitive.
Note: You can open the Search Bar from the View menu as well.
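The case rule described above amounts to what is often called smart-case matching; as a sketch (the workflow names used are hypothetical):

```python
# Sketch of the described behavior: an all-lowercase query matches without
# regard to case; a query containing any uppercase letter matches exactly.
def matches(query: str, name: str) -> bool:
    if query == query.lower():
        return query in name.lower()
    return query in name

print(matches("bill", "Billing_WF"))  # True  (case insensitive)
print(matches("Bill", "billing_wf"))  # False (case sensitive)
```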
Note! On this view you stop a workflow by right-clicking it and then selecting Stop.
The Running Workflows tab table contains the columns that you find on the Overview tab, as well as
the following:
Column Description
EC The IP address of the computer on which the workflow is running
Debug On or Off. See Toggle Debug Mode in Section 4.1.11.3, “Viewing Agent Events”.
Backlog The number of files that are yet to be processed.
The value on Backlog is identical to the Source Files Left MIM value.
Throughput This column displays the throughput for the workflow's collecting agent. The value
shows either number of UDRs or bytes, depending on what the collecting agent produces,
and is updated every five seconds as long as the workflow is being executed.
Note!
• Detail views are saved as part of the user preferences, and therefore enable you to export and
import them along with user information.
• A workflow group on a Detail View that is marked with a yellow warning icon, is invalid.
The table on this tab contains the columns that you see on the Running Workflows tab as well as the
following:
Column Description
Prereq A comma delimited list of the workflow group's prerequisite settings. See
Section 4.2.2.5.1, “Members Execution Order”
Next Suspend Action The suspend action is either the scheduled execution suspension of a
configuration (workflow or workflow group), or the removal of such a suspension
(activation enabling).
Entry Description
Debug On/Off Turns on/off debug information for the selected workflows. See Toggle Debug
Mode in Section 4.1.11.3, “Viewing Agent Events” or Section 4.1.8.4, “Execution
Tab” for more information.
Open in Monitor Applies only to workflows. Opens the selected workflow in the workflow monitor.
3. Enter a unique name for the view and click OK; a new tab opens and displays the Workflow groups
and their members in a separate table.
2. Select the check boxes for the tabs that you want displayed, and clear the check boxes for the tabs that you want hidden.
3. To remove a tab:
• Use the Remove button for your checked entries
• OR, right-click the tab that you want to close and select Close
From the Pico Manager it is possible to deny hosts of pico instances (pico hosts) access to the system.
By default, access is granted to added hosts, until the privilege is manually removed.
Execution Contexts must be registered on a pico host before use. The registration is done from the
Pico Manager. For further information, see Section 7.7.1.1, “Adding Pico Hosts”.
You can also register groups of ECs/ECSAs, which can make configuration easier as the workflows,
or workflow groups, will then be able to address the configured groups instead of specific ECs/ECSAs.
The registration is done from the Pico Manager. For further information, see Section 7.7.2.1, “Adding
EC/ECSA Groups”.
By default, instances of Desktop, mzsh, and Service Contexts always have access to the system and
you do not need to register these on pico hosts. You can change this behavior by setting the property
mz.dynamicconnections in the platform.xml file:
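The property entry itself is not shown in this document; as an illustration only, it might look like the following sketch (the value and element layout are assumptions — consult the System Administrator's Guide for the supported values):

```xml
<!-- Illustrative sketch only; valid values are not stated in this section. -->
<property name="mz.dynamicconnections" value="false"/>
```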
To open the Pico Manager, click the Tools button in the upper left part of the MediationZone® Desktop
window, and then select Pico Manager from the menu.
The Pico Manager configuration contains two different tabs; Pico, which is used for registering pico
clients, and Groups, which is used for registering groups of ECs/ECSAs. The Pico tab is displayed
by default.
IP Address The IP address of the pico client. IPv6 addresses will be displayed with long notation
even if they have been entered with short notation.
Access Shows if the host is authorized to connect to the MediationZone® system.
IP Address Enter the IP address of the pico host. If using an IPv6 address, you can select to enter
the address with either long or short notation, and the system will then display the
addresses with long notation.
For further information about IPv6 addresses, see the System Administrator’s Guide.
Deny Access Indicates if the pico host will have access to connect to MediationZone®.
Instances A list of the Pico instances that you add. See Pico Instance in Terminology document.
Note! You can add more than one instance to a specific host.
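The long-notation display described above corresponds to the "exploded" form of an IPv6 address. For example, using Python's standard ipaddress module (the address is illustrative):

```python
import ipaddress

# An address entered in short notation is displayed in long (exploded) form.
addr = ipaddress.ip_address("2001:db8::1")
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.compressed)  # 2001:db8::1
```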
1. Make sure the new Execution Context is properly installed in the local area network. It has to be
assigned a host name and have the prerequisite software installed according to the MediationZone®
Installation Instructions - User Guide.
2. Make an Execution Context only installation of MediationZone® on the host. Make sure to give
the new Execution Context a unique name.
3. Register the new Execution Context in the Pico Manager. Make sure to enter the name exactly as
entered in the previous step.
4. Start the Execution Context on the new host by entering the command:
1. Make sure the Desktop host is properly installed in the local area network and has the prerequisite
software installed according to the MediationZone® Installation Instructions - User Guide.
From the Start menu select Programs and then select MediationZone® Desktop.
$ mzsh desktop
Group Displays the name of any registered groups. This name will be selectable when configuring
Execution Contexts in the Execution tab in the workflow properties, or in the workflow
group configuration. See Section 4.1.8.4, “Execution Tab” for further information.
Members Displays the names of the ECs/ECSAs that have been included in the group.
2. Select if the group should contain ECs or ECSAs by clicking on the corresponding radio button.
4. Click on the Execution Context drop-down list and select one of the ECs/ECSAs you want to add,
and click on the Add button.
The selected EC/ECSA is now added in the Execution Context section in the Add EC Groups
dialog.
5. Repeat step 4 for all the ECs/ECSAs you want to add, and then close the dialog by clicking on the
Close button.
6. When you are satisfied with your group configuration, click on the Add button in the Add EC
Groups dialog.
The group is now added in the Groups tab in the Pico Manager configuration.
7. If you want to add additional groups, alter the configurations in the Add EC Groups dialog and click on
the Add button for each group.
8. When you have created all the groups you want to have, click on the Close button.
The configured groups are now available when configuring Execution settings in Workflow Prop-
erties. See Section 4.1.8.4, “Execution Tab” for further information.
To open the Pico Viewer, click the Tools button in the upper left part of the MediationZone® Desktop
window, and then select Pico Viewer from the menu.
Pico Instance The name of the MediationZone® pico instance/client. For each and every one of the
pico instances - Platform, Desktop, EC, etc - a JVM (Java Virtual Machine) is
started.
Allows the user to remove a stand-alone Execution Context from the
system in case it is unreachable. The platform will never automatically unregister
such an instance, since it is accepted that it can reside on an unreliable network.
Secure Indicates if the Pico instance is SSL secured or not.
Start Time The time the pico instance was started.
Memory Used, available, and maximum memory on the hosting JVM.
Response [ms] The time it took in milliseconds for the local Desktop to invoke a ping on the pico
instance.
Resting the mouse pointer on any of the objects in the Memory column will display detailed
information on the memory usage of the hosting JVM.
You can send this export data to another MediationZone® system, where you can use the System
Importer to import it and embed its contents locally.
Example 50.
A MediationZone® system can import a tested export ZIP file of configurations from a test
system and use it safely in production.
In System Exporter you can select data from the following folder types:
• Configuration: workflow configurations, agent profiles, workflow groups, Ultra formats, or alarm
detectors.
• System: Other customized parts of MediationZone® such as: ECS, event category, folder (structure),
pico host, Ultra, user, or workflow alarm value.
• Avoid exporting excessive amounts of data. For information about data clean-up see
Section 4.1.1.4, “System Task Workflows”.
• When exporting Event Notifications, these will be disabled on import by default, see
Section 7.10.1.1, “To Import Data:” for further information.
7.9.1. Exporting
To open the System Exporter, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select System Exporter from the menu.
When you select an entry from the Available Entries table, all the dependent
entries are automatically selected as well.
Encryption The export file is a ZIP file that contains a collection of XML files. Select the
Encryption option to make these files password-encrypted.
View Log Select this option to open a log of the export file production.
Note! Since no runtime configuration change is included in the exported data and only the initial
value is exported, you need to take note of information such as file sequence numbers in
Collector agents.
1. In the System Exporter, select options according to your preferences in the Edit menu.
2. Click on the Browse button to select the directory path to where you want to save your selections,
either ZIP-packed or not.
Hint!
In the $MZ_HOME/etc/desktop.xml file, you can configure which default directory you
want to use when clicking on the Browse button.
The Exporter will also remember the last directory to which an export was made, and will
open the file browser in this directory the next time you click on the Browse button. This
directory will be kept in memory until the Desktop is closed.
3. In the Available Entries field, expand the folders and select the check boxes for the entries you
want to export in the Include column.
4. Click on the Save as... button if you want to save data about your export.
Note! After you have selected the entries that you want included in the export ZIP file, you
can save your selection combination before you click Export. This is particularly useful if
you export a certain set of selections regularly.
The saved selection combination is a *.criteria file that contains data only about your
selections. It is not an export ZIP file. The *.criteria file is stored on your local disk
and not in the MediationZone® system.
Either an export ZIP file will be created at the Output Target, or the selected structure will be ex-
ported to the specified directory.
Note! In the export material you will also find three directories: one that includes the Ultra
code that your export involves, one that includes profile-relevant APL code, and another one
that contains workflow-related data. You can use the files that are included in these directories
to compare the export material with the data on the system to which you export.
If the profile or workflow data is password encrypted, it is exported as it is. Otherwise, a directory
named after that export data file, is created. In this directory, the contents of the export data file are
divided into files, as follows:
The tree structure of the exported material is identical to the structure that is displayed on the
System Exporter view. See Figure 208, “The System Exporter View”.
System Importer imports data that has been exported by the System Exporter. Every time you import
data, System Importer will save a backup file that contains all the imported data. This file is stored
on the Platform computer, under $MZ_HOME/backup/yyyy_MM, by the name
import_<date>_<filename>.zip.
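As a sketch, the backup location for a given import could be computed as follows. The MZ_HOME path, the file name, and the exact format of the `<date>` part are illustrative assumptions; only the `yyyy_MM` directory pattern is stated above:

```python
import datetime
import os

# Sketch: where System Importer stores its backup of imported data.
# mz_home, the date, and the export file name are hypothetical values.
mz_home = "/opt/mz"
now = datetime.datetime(2024, 5, 17)
backup_dir = os.path.join(mz_home, "backup", now.strftime("%Y_%m"))
backup_file = "import_%s_myexport.zip" % now.strftime("%Y%m%d")
print(os.path.join(backup_dir, backup_file))
# /opt/mz/backup/2024_05/import_20240517_myexport.zip
```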
The file exported by the System Exporter can contain data from the following folder types:
• Configuration: Workflow configurations, agent profiles, workflow groups, Ultra formats, or alarm
detectors.
• System: Other customized parts of MediationZone® such as: ECS, Event Category, Folder (structure),
pico host, Ultra, user, or workflow alarm value.
• Avoid importing excessive amounts of data. For information about data clean-up see
Section 4.1.1.4, “System Task Workflows”.
• When importing Event Notifications, these will be disabled by default, see Section 7.10.1.1,
“To Import Data:” for further information.
7.10.1. Importing
To open the System Importer, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select System Importer from the menu.
Invalid Ultra and APL definitions are considered erroneous, and result
in aborting the import.
Select Dependencies Select this option to have dependencies follow along with the entries that
you actively select.
Preserve Permissions Select this option to preserve user permissions in the current system when
importing a configuration. Clear this option if it is okay to overwrite user
permissions in the current system when importing a configuration.
Directory Input Select this option to enable the import of unpacked data that has been
exported to a directory, see Section 7.9.1, “Exporting” for further information.
Clear this option to import a ZIP file.
Hold Execution Select this option to prevent scheduled workflow groups from being executed
while importing configurations.
Restart For information, see systemimport in the MediationZone® Command Line
Tool user's manual.
Stop and Restart For information, see systemimport in the MediationZone® Command Line
Tool user's manual.
Stop Immediately and Restart For information, see systemimport in the MediationZone® Command Line
Tool user's manual.
Wait for Completion and Restart For information, see systemimport in the MediationZone® Command Line
Tool user's manual.
The View Menu
View Log Select this option to open a log of the import process.
2. Click on the Browse button to select the directory where the exported data is located.
Hint!
In the $MZ_HOME/etc/desktop.xml file, you can configure which default directory you
want to use when clicking on the Browse button.
The Importer will also remember the last directory from which an import was made, and
will open the file browser in this directory the next time you click on the Browse button.
This directory will be kept in memory until the Desktop is closed.
3. In the Available Entries field, expand the folders and select the check boxes for the entries you
want to import in the Include column.
5. Update the dynamic configuration data in the collectors with the file sequence numbers that you
noted down before performing the export, see Section 7.9.1.1, “To Export Data:” for further
information.
• Prior to importing Inter Workflow and Aggregation profiles, empty the Workflow data stream.
Otherwise, these agent profiles will be overwritten by the profiles that are included in the
imported bundle, and might not recognize or reprocess data.
• Imported workflow groups are disabled by default. You need to activate all the members,
their respective sub-members, and the workflow group itself.
• When you import a User it is disabled by default. A User with Administrator permissions
must enable the user and revise which Access groups the user should be assigned to.
• Imported Alarms are disabled by default. You enable an Alarm from the Alarm Detection.
• Imported Event Notifications are disabled by default. You enable an Event notification from
the Event Notification Configuration.
If, for instance, a workflow aborts, the reason for the abort may be tracked through this utility.
To open the System Log, click the Tools button in the upper left part of the MediationZone® Desktop
window, and then select System Log from the menu.
Initially, the window is empty and must be populated with data using the Search System Log dialog.
For further information, see Section 7.11.1, “Searching the System Log”.
Edit menu  Select All  Selects all entries in the group currently displayed. A group is selected from the Show Entries list.
Edit menu  Search...  Displays the Search System Log dialog where search criteria may be defined to limit the entries in the list. For further information, see Section 7.11.1, “Searching the System Log”.
View menu  Show Trace...  Displays the Stack Trace Viewer window. This information must always be included when contacting DigitalRoute® Support in cases involving error messages.
Show Entries  Entries matching the search criteria are displayed in groups of 500. Show Entries contains a list of all groups of 500, from which one is selected. Note that the full content of the log messages for a group is fetched from the database only once the group is selected. This is done to minimize the impact on the overall performance of the system.
Severity  The severity of the message. It can be any of the following:
• E (Error) - An error is logged when any part of the system fails, for instance, when a workflow aborts. Double-clicking an error message displays the Stack Trace Viewer. This information must always be included when contacting DigitalRoute® Support in cases involving error messages.
• D (Disaster) - Rarely used, other than possibly for user-defined agents.
For an entry to be displayed in the list, it has to pass all of the following filters.
Log Area  The part of the system that reported the entry. At least one must be enabled.
Severity Type  The type of severity. At least one type must be enabled.
Period  The date range within which entries will be viewed. A few predefined options are available. If none is selected, all entries are considered.
From If User Defined is selected in the Period list, all entries reported after
the selected date will match. If not, all entries before the To date will
match.
To If User Defined is selected in the Period list, all entries reported before
the selected date will match. If not, all entries after the From date will
match.
Workflow Group Check to include log messages of the workflow group that you select from the
drop-down list.
Workflow Contains options to filter out specific workflows and/or agent names. If disabled,
all workflows/agents will match.
Agent Check to include log messages of the agent that you select from the drop-down
list.
Note! The System Log presents log messages according to your Log Area selection: User, System, and/or Workflow. To have the System Log present agent-related messages, you need to configure an agent event from the Event Notification Configuration. For further information, see Add Event in Section 5.3.2, “The Event Setup Tab”.
Username If enabled, all activities performed by the selected user will match. If disabled, all
user activities will match.
Log Message Log entries may be scanned for occurrences of specific messages. Using all lower
case letters in the text field will result in case insensitive search. If upper case
letters are used anywhere in the text field the search will be case sensitive.
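The case-sensitivity convention described above can be sketched as follows. This is an illustration of the stated rule only, not the product's actual implementation:

```java
// Sketch of the Log Message search convention: an all-lowercase query
// matches case-insensitively, while any uppercase letter in the query
// makes the search case sensitive. Illustrative only.
class LogMessageSearch {
    static boolean matches(String logMessage, String query) {
        boolean caseInsensitive = query.equals(query.toLowerCase());
        if (caseInsensitive) {
            return logMessage.toLowerCase().contains(query);
        }
        return logMessage.contains(query);
    }
}
```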
• Browse select - A contiguous range of rows is selected by clicking the first row and then, while holding down the <Shift> key, clicking the last row.
• Extended select - Individual rows are selected by clicking them while the <Ctrl> key is held down.
Headers Only Will print only a short summary of each selected entry. The printed information
is the same as displayed on each row in the browser.
Full Details Will print detailed information about each selected entry, one page for each.
The information printed is the same as displayed in the Message Area of the
System Log Inspector.
Include Stack Trace Will include the stack trace for the log entries where available (that is, Error
type messages).
There are three different types of statistics: host, pico instance and workflow.
MediationZone® uses the std UNIX command to collect the information. This binary must be installed for statistics to be collected and for workflow load-balancing to work. The following list holds all values collected from each host. On newer operating systems, some of these may not be available for collection due to changes in the operating system kernel.
• CPU User Time - This value shows how much time was spent in non-kernel-specific code. It is displayed as a percentage, where 100% means that all processing power is spent. See also CPU System Time.
• CPU System Time - This value shows how much time was spent in kernel-specific code, such as scheduling of different processes or network transfer. It is displayed as a percentage, where 100% means that all processing power is spent.
• Context Switches - The number of context switches per second. A context switch occurs when one
process hands over information to another process. The more context switches, the less effective
and scalable the system will be.
• Swapped To Disk - The amount of data that was swapped out. A large value indicates that the
system does not have enough RAM to manage the memory requirements of the different processes.
• Swapped In From Disk - The amount of data that was read from swap.
• Processes Waiting For Run - Shows how many processes are waiting to run. A high number indicates that the machine is not fast enough to manage the load.
• Processes Swapped Out - Processes that have been persisted in swap due to insufficient available
memory, or due to aggressive management of the memory layer.
• Processes In Sleep - The number of processes that are presently not doing anything.
• Used Memory - Shows the amount of memory currently allocated by the running process. As Java
is a language utilizing garbage collection, this number may very well get close to the maximum
memory limit without being a problem for the running process. However, if the amount of used
memory is close to the maximum limit for a long time, the process needs more memory. This value
is displayed in bytes. See the -Xmx and -Xms properties defined in the XML file defining the process.
• Maximum Memory - Shows the amount of memory that the process can use. This value is displayed
in bytes.
• Process CPU Time - Shows the percentage of CPU time that has been used.
• Open File Descriptors - This is a Unix measurement that enables you to create a statistical diagram
over the number of open files during the last minute, hour, or day.
• Garbage Collection Count - Shows the number of times the garbage collector has run since the last time statistics were collected.
• Garbage Collection Time - Shows the amount of time the garbage collector has run since the last time statistics were collected. This value is displayed in milliseconds.
• Queue Throughput - Displays queue throughput per second for real-time queues. Statistics for real-time queues are only available when routing UDRs, not raw data.
Note! To enable convenient delegation to external systems, or to generate an alarm if the throughput falls too low, the throughput is also defined as a MIM value for the workflow. For further information, see Throughput Calculation.
• Queue Size - The size of the queue space that is being used at the time of the sample for each individual queue.
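The -Xms and -Xmx properties mentioned under Used Memory are standard JVM heap arguments. As an illustration only, they could be set with the same <jdkarg> syntax that the process XML files use; the values below are placeholders, not recommendations:

```xml
<!-- Illustrative placeholder values: initial (-Xms) and maximum (-Xmx) heap -->
<jdkarg value="-Xms512m"/>
<jdkarg value="-Xmx2048m"/>
```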
MediationZone® uses Java Management Extensions (JMX) to monitor MIM tree attributes in running
workflows. For more information, refer to Section 8.3, “Workflow Monitoring”.
To display statistics in the System Statistics window, you have to use the Search function, see Section 7.12.4.1, “Searching System Statistics”, or import statistics, see Section 7.12.6, “Importing Statistics”.
In the Search System Statistics dialog, search criteria may be defined in order to single out the statistics of interest.
View Mode  Specifies the type of statistics you want to view: Host, Pico Instance, or Workflow.
Resolution Specifies the time resolution to be used.
There are three different time resolutions on which statistics are collected.
Minute This is the most precise value but requires the most from the server when
locating the statistics. It is saved every minute.
Hour These values are calculated every hour and are a sum of the minute values
for that hour.
Day  Day values are calculated by the corresponding statistics task and are a sum of the minute values for that day.
Criteria The Criteria settings are used for selecting which search criteria you want the displayed
statistics to meet. The following criteria are available:
Host - If this option is selected, the statistics originating from the host selected in the
drop-down list will be displayed.
Pico Instance - If this option is selected, the statistics originating from the pico instance
selected in the drop-down list will be displayed.
Workflow - If this option is selected, the statistics originating from the workflow selected
in the drop-down list will be displayed.
Period - If this option is selected, the statistics from the chosen time interval will be displayed. You can either select one of the predefined time intervals: Today, Yesterday, This Week, Previous Week, Last 7 Days, This Month, or Previous Month, or you can select the option User Defined and enter the start and end date and time of your choice in the From: and To: fields.
Note! If several criteria are enabled, an absolute match will be displayed. For instance, if Host and Workflow are specified as well as Period, only the time for which there are both workflow measures and host measures is displayed.
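The Hour and Day resolutions described earlier are sums of the underlying minute values. A minimal sketch of that roll-up (illustrative, not the product's statistics task):

```java
import java.util.Map;
import java.util.TreeMap;

// Roll up per-minute samples into hourly sums, as the Hour resolution
// described above does. Keys are minute-of-day indices. Illustrative only.
class StatRollup {
    static Map<Integer, Long> hourly(Map<Integer, Long> perMinute) {
        Map<Integer, Long> perHour = new TreeMap<>();
        for (Map.Entry<Integer, Long> e : perMinute.entrySet()) {
            perHour.merge(e.getKey() / 60, e.getValue(), Long::sum);
        }
        return perHour;
    }
}
```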
For each type of statistics you have selected to view, you will see a Value drop-down list displaying the statistical value selected for the statistical type, and one drop-down list displaying the selected criteria. If you select another statistical value or another criteria in one of these drop-down lists, the statistical data in the System Statistics window is updated instantly.
Host View The statistics for each host will be displayed in a separate color. If there are
several matching hosts, all may be displayed at the same time.
Pico Instance View The statistics for each pico instance will be displayed in a separate color. If
there are several matching pico instances, all may be displayed at the same
time.
Workflow View The statistics for each workflow will be displayed in a separate color, and if
you have selected to view queue statistics, each queue will have its own color.
If there are several matching workflows/queues, all may be displayed at the
same time.
The menus in the menu list contain options for printing, exporting, importing, searching, and refreshing the statistics. Searching, printing, and refreshing can also be performed by using the buttons at the top of the window. To the right of the buttons you can see the current date. For further information about the
Export... and Import... options, see Section 7.12.5, “Exporting Statistics” and Section 7.12.6, “Importing Statistics”.
At the bottom of the window, there is a scroll bar and two buttons for zooming in and out. With these, you can focus on a particular time window within the search result. The scroll bar enables you to scroll back and forth in time to see the value changes.
In the bottom right corner of the window, you have a drop-down list called Value. This list contains
three different types of values:
1. In the System Statistics window, click on the File menu and select the Export... option.
2. Browse to the directory where you want to save the file, enter a file name and click on the Save
button.
The statistical information for the selected time period will be saved in *.zip format.
Hint! The export functionality can also be used for saving statistics on a regular basis, e.g. every month or every year, to use for comparison with current statistics.
Note! An import of statistics does not affect the data in the database; it only displays a snapshot of the statistics at the time when it was exported.
1. In the System Statistics window, click on the File menu and select the Import... option.
2. Browse to the directory where the *.zip file you want to import is located, select the file and click
on the Open button.
The statistical information will now be displayed in the System Statistics window. The same search criteria that were set in the Search System Statistics dialog when the statistics were exported will be displayed.
The date information at the top of the window will now display the time interval for the imported statistics, and the text "Imported Statistics" will appear in red beside the date information.
8. Monitoring
MediationZone® uses Java Management Extensions (JMX Beans) to enable external monitoring. A
connector is used to connect a JMX agent to a JMX enabled management application.
The Java Monitoring and Management Console (jconsole) is a JMX client that allows you to monitor
a local or remote JVM process. Currently you can monitor:
• Events
• Workflows
• RCP Latency
• Aggregation
• Couchbase Monitoring
http://docs.oracle.com/javase/8/docs/technotes/guides/management/jconsole.html
2. If you want to monitor a local JVM process, select Local Process, select the process you want to view, and then click on the Connect button.
Note! Which process you should select depends on what you want to monitor. If you want to monitor the Event Server, select the codeserver process. For other monitoring, e.g. Event Sender or workflow, select the picostart process for the Execution Context that the Event Sender or workflow is running on.
3. If you want to be able to monitor a JVM process remotely, you have to add a few JDK properties
in the platform.xml and executioncontext.xml files.
<jdkarg value="-Dcom.sun.management.jmxremote.port=9999"/>
<jdkarg value="-Dcom.sun.management.jmxremote.authenticate=false"/>
<jdkarg value="-Dcom.sun.management.jmxremote.ssl=false"/>
With these properties set, you will be able to connect to port 9999 without having to enter any user name or password, and without using SSL.
Note! Use different ports if you set the remote port in both platform.xml and executioncontext.xml.
For further information about which ports are recommended, how to set up user names and passwords, how to set up SSL, and about remote monitoring and management in general, see the JDK product documentation regarding JConsole Management.
In the New Connection dialog, you can then select the option Remote Process:, enter the hostname
and port along with any username and password that may apply, and click on the Connect button.
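With authentication and SSL disabled as above, a JMX client addresses the process with a standard JMX-over-RMI service URL. A sketch; the host name is a placeholder, and the URL form is the standard one used by jconsole, not something specific to MediationZone®:

```java
import javax.management.remote.JMXServiceURL;

// Build the standard JMX-over-RMI service URL for a given host and port.
// Connecting for real would use JMXConnectorFactory.connect(url), which
// requires a running, reachable JVM with remote JMX enabled.
class JmxAddress {
    static String serviceUrl(String host, int port) {
        try {
            return new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi")
                    .toString();
        } catch (java.net.MalformedURLException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```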
• EventServerQueue - which shows information about all the events in the system
• EventListenerQueue - which shows information about the different listeners in the system
• ECEventSenderQueue - which shows information about the events that the Execution Context
will try to send to the Platform. If the connection with the Platform is broken, the EC/ECSA will
cache the events and then try to send them again once the connection is back up.
1. Select the codeserver process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.
The Java Monitoring & Management Console will open and display the Overview tab.
3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.event, then on
the plus sign for EventServerQueue.
4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.
EventLoad  This attribute value shows the current load of the EventServerQueue, i.e. the proportion of the queue's maximum size that is occupied with events, where 0.75 equals 75%, 0.5 equals 50%, etc.
NoOfListeners This attribute value shows the total number of listeners within the system. If you
want to view information about a specific listener, see Section 8.2.2, “Monitoring
the EventListenerQueue”.
QueueSize This attribute value shows the total number of events in the queue.
TotalEvents This attribute value shows the total number of events logged since the Platform was
started.
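The EventLoad figure above is simply the occupied fraction of the queue's maximum size. A sketch of the calculation (illustrative; the attribute itself is read via JMX, not computed by the client):

```java
// EventLoad as described above: queue occupancy as a fraction of the
// queue's maximum size, so 0.75 corresponds to 75 %. Illustrative only.
class EventLoadCalc {
    static double eventLoad(int queueSize, int maxQueueSize) {
        return (double) queueSize / maxQueueSize;
    }
}
```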
To view the different listeners, follow the same procedure as for the EventServerQueue, see Sec-
tion 8.2.1, “Monitoring the EventServerQueue”, but click on the plus sign for EventListenerQueue
instead.
Expand the tree for the listener that you want to view attributes for by clicking on the plus signs for that EventListener and for Listener. Click on Attributes to display the different attribute values in the right section of the JConsole window.
For each listener in the EventListenerQueue you can see the following information:
EventLoad  This attribute value shows the current load of the EventListenerQueue, i.e. the proportion of the queue's maximum size that is occupied with events, where 0.75 equals 75%, 0.5 equals 50%, etc.
QueueSize This attribute value shows the total number of events in the listener's queue.
TotalEvents This attribute value shows the total number of events logged for the listener since the
Platform was started.
1. Select the picostart process for the EC you want to monitor the EventSender for when starting the
JMX client, see Section 8.1, “Starting the Jconsole Client” for further information.
The Java Monitoring & Management Console will open and display the Overview tab.
3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.event, then on
the plus sign for ECEventSenderQueue.
4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.
ConnectedToPlatform  This attribute value shows whether the Execution Context (ECSA) is connected to the Platform (true) or not (false).
ConnectionDownTime  This attribute value shows how long the connection with the Platform has been down. This value is displayed in seconds.
EventLoad  This attribute value shows the current event load of the EventSender queue, i.e. the proportion of the queue's maximum size that is occupied with events, where 0.75 equals 75%, 0.5 equals 50%, etc.
PersistentQueueSize  This attribute value shows the number of events that the Execution Context has not been able to send to the Platform due to a broken connection.
QueueSize This attribute value shows the total number of events in the queue.
TotalEvents  This attribute value shows the total number of events logged since the Execution Context was started.
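The caching behavior behind ConnectedToPlatform and PersistentQueueSize can be sketched as follows. This models the behavior described above only; it is not the product's implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// While the Platform connection is down, events are cached; once the
// connection is back up, the cached events are sent again. Illustrative.
class EcEventSender {
    private final Deque<String> persistentQueue = new ArrayDeque<>();
    boolean connectedToPlatform = false;
    int sentCount = 0;

    void send(String event) {
        if (connectedToPlatform) {
            sentCount++;                // delivered to the Platform
        } else {
            persistentQueue.add(event); // cached until reconnect
        }
    }

    int persistentQueueSize() {
        return persistentQueue.size();
    }

    // Called when the connection with the Platform is restored.
    void onReconnect() {
        connectedToPlatform = true;
        while (!persistentQueue.isEmpty()) {
            persistentQueue.poll();
            sentCount++;
        }
    }
}
```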
Note! Currently, the MIM monitoring is limited to global MIMs (real-time workflows).
1. Select the picostart process for the Execution Context on which the workflow is running when
starting the JMX client, see Section 8.1, “Starting the Jconsole Client” for further information.
The Java Monitoring & Management Console will open and display the Overview tab.
3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.wf, then on the
plus sign for Workflow and then on the plus sign for the workflow you want to monitor.
4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.
Beneath the workflow in the tree to the left, the MIMTree and Attributes can be expanded to display
more details of the different MIM tree attributes as shown in Figure 221, “Workflow Monitoring using
Jconsole”.
When the Latency Statistics agent is used in the workflow, additional information becomes available in the LatencyInfo structure. For information about the Latency Statistics agent, see Section 10.18, “Latency Statistics”.
• Platform
• Execution Context
• Desktop
• Command Line
Each time a new instance is started, for example, when starting an mzsh shell from the Command
Line, it will be added to the list of monitored Pico Instances.
The latency is the time it takes for a ping request to be sent to another party, for example, from the
Platform to an Execution Context, and for the corresponding ping response to be received.
Note! When starting the Platform, the latency values might become high. To get more realistic
values, do a reset using resetAllValues, as described in Section 8.4.2, “Operations”.
To monitor the latency between the Platform and the Pico Instance communicating with it, do the
following:
1. Select the codeserver process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.
The Java Monitoring & Management Console will open and display the Overview tab.
3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.rcp, then on
the plus sign for Ping and then on the plus sign for the connection you want to monitor.
8.4.1. Attributes
Click on Attributes in the tree to display the different attribute values for a certain Pico Instance in
the right section of the JConsole window.
Counter This attribute value shows the total number of ping requests sent from a Pico Instance.
This value shows "0" when the Platform has been started, or after resetting the RCP
latency values using resetAllValues, as described in Section 8.4.2, “Operations”.
Latency Shows the current latency.
MinLatency This attribute value shows the lowest latency since the Platform was started, or after
resetting the RCP latency values using resetAllValues, as described in Section 8.4.2,
“Operations”.
MaxLatency This attribute value shows the highest latency since the Platform was started, or after
resetting the RCP latency values using resetAllValues, as described in Section 8.4.2,
“Operations”.
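The Counter, Latency, MinLatency, and MaxLatency bookkeeping above, including the resetAllValues operation, can be sketched like this. This is a model of the described behavior, not the product code:

```java
// Track ping round-trip times the way the attributes above describe:
// a request counter, the latest latency, and the min/max since start
// or since the last resetAllValues. Illustrative only.
class LatencyTracker {
    long counter = 0;
    long latency = 0;                  // most recent round-trip, ms
    long minLatency = Long.MAX_VALUE;  // lowest seen since reset
    long maxLatency = 0;               // highest seen since reset

    void recordPing(long millis) {
        counter++;
        latency = millis;
        if (millis < minLatency) minLatency = millis;
        if (millis > maxLatency) maxLatency = millis;
    }

    void resetAllValues() {
        counter = 0;
        latency = 0;
        minLatency = Long.MAX_VALUE;
        maxLatency = 0;
    }
}
```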
8.4.2. Operations
Click on Operations in the tree to display the operation alternatives for a certain Pico Instance in the
right section of the JConsole window.
resetAllValues Click on the resetAllValues operation in the tree to the left and then click this button
to reset all RCP latency values.
The MBean is registered under the com.digitalroute.profile domain, with the key type set to "Aggregation" and the key name set to the Aggregation profile's name.
For information about MIM values published by the Aggregation agent, see Section 11.1.3.9, “Meta Information Model”.
1. Select the picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.
The Java Monitoring & Management Console will open and display the Overview tab.
3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.profile, then
on the plus sign for Aggregation and then on the plus sign for the Aggregation Profile you want
to monitor.
8.5.1.1. Attributes
Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.
Figure 225. JConsole displaying the attributes for Aggregation with file storage
AggregationTime This attribute value shows the time (in milliseconds) that has been spent on
aggregation on the last batch.
CacheHits  This attribute value shows the number of cache hits counted by the Aggregation profile each time session information is read from the cache. CacheHits is reset each time the Execution Context is started, or after using resetCounters, as described in Section 8.5.1.2, “Operations”.
CacheMisses  This attribute value shows the number of cache misses counted by the Aggregation Profile each time session information cannot be read from the cache and is instead read from disk. Note that if a non-existing session is requested, this will not be counted as a cache miss.
8.5.1.2. Operations
Click on Operations in the tree to display the operation alternatives for the Aggregation Profile in the
right section of the JConsole window.
Figure 226. JConsole displaying the operations for Aggregation with file storage
resetCounters Click on the resetCounters operation in the tree to the left and then click this button
to reset the values for CacheHits, CacheMisses and CreatedSessions.
The MBean is registered under the com.digitalroute.workflow domain, with the key type set to
"Workflow" and the key workflow set to the name of the Aggregation workflow.
For information about MIM values published by the Aggregation agent, see Section 11.1.3.9, “Meta Information Model”.
1. Select the picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.
The Java Monitoring & Management Console will open and display the Overview tab.
3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.workflow, then
on the plus sign for Workflow and then on the plus sign for the workflow you want to monitor.
8.5.2.1. Attributes
Click on MIM Tree and then Attributes in the tree to display the different attribute values in the right
section of the JConsole window.
Figure 227. JConsole displaying the attributes for Aggregation with Couchbase storage
<agent name>.Agent Name  This attribute value shows the name of the Aggregation agent.
<agent name>.Created Session Count  This attribute value shows the number of created aggregation sessions. The value of <agent name>.Created Session Count is reset when the workflow is started.
<agent name>.Inbound UDRs  This attribute value shows the number of UDRs routed to the agent. The value of <agent name>.Inbound UDRs is reset when the workflow is started.
<agent name>.Mirror Attempt Count  This attribute value shows the total number of attempts to retrieve a stored mirror session. The value of <agent name>.Mirror Not Found Count is reset when the workflow is started.
<agent name>.Outbound UDRs  This attribute value shows the number of UDRs routed from the agent. The value of <agent name>.Outbound UDRs is reset when the workflow is started.
<agent name>.Session Remove Count  This attribute value shows the number of sessions removed.
Example 53.
• There are 1000 sessions with a timeout latency that is less than
one minute.
• MonitorCoordinator - which shows information about the number of monitored Couchbase Nodes.
• Monitor_<cluster id> - which shows detailed information about the monitored Couchbase cluster.
1. Select a picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.
The Java Monitoring & Management Console will open and display the Overview tab.
3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.couchbase.monitor, then on the plus sign for ConfigCoordinator.
4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.
Unmanaged  This attribute value shows the names of the Couchbase profiles of Couchbase Clusters that are unmanaged, i.e. that do not respond to management requests.
Monitored  This attribute value shows the IP addresses and ports of the configured Couchbase Cluster nodes, and the Couchbase cluster id to which they belong.
Coordinator This attribute value shows if the Execution Context actively performs Couchbase
Monitoring (true) or not (false).
1. Select a picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.
The Java Monitoring & Management Console will open and display the Overview tab.
3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.couchbase.monitor, then on the plus sign for MonitorCoordinator.
4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.
LocalMonitored This attribute value shows the number of Couchbase Nodes that are monitored
by the Execution Context.
AllMonitored  This attribute value shows the total number of monitored Couchbase clusters.
1. Select a picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.
Note! It is not possible to know which Execution Context is associated with the leader in the ZooKeeper cluster. For this reason, you will need to connect to all available picostart processes until you find the one that contains the bean.
The Java Monitoring & Management Console will open and display the Overview tab.
3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.couchbase.monitor, then on the plus sign for Monitor_<cluster id>.
4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.
For the Monitor_<cluster id> you can see the following information:
ClusterAvailable This attribute value shows if the Couchbase cluster is currently available
(true) or not (false).
ClusterDetails  This attribute value shows the IP address and id of the monitored Couchbase cluster.
ClusterUnavailableDuration  This attribute value shows how long the cluster has been unavailable, in seconds. When the cluster is available, this value is set to 0.
Failovers This attribute value shows the number of failovers since monitoring
started.
FailureCountThreshold This attribute value shows the maximum number of failed health checks
before a Couchbase node is automatically failed over.
Frequency This attribute value shows the frequency of health checks in milliseconds.
HealthChecks This attribute value shows the total number of cluster health checks that
have been performed since the Execution Context started.
LastHealthCheckDetails This attribute value shows the result of the last health check, including
node health and cluster membership. The IP addresses in the attribute
value may not be the same as the ones specified in the Couchbase profile.
For example, the IP address in this value can be 127.0.0.1 for a Couchbase
node running on the local host machine, even though an external IP ad-
dress is specified in the profile.
StartTime This attribute value shows the date and time when monitoring started.
9. Appendix I - Profiles
This appendix contains descriptions for the profiles that are not related to specific agents. All the agent
specific profiles are described in connection with each agent in the following appendixes.
The audit table column types are defined in an Audit profile configuration.
The Audit profile is loaded when you start a workflow that depends on it. Changes to the profile become
effective when you restart the workflow.
To create a new Audit profile configuration, click the New Configuration button in the upper left part
of the MediationZone® Desktop window, and then select Audit Profile from the menu.
To open an existing Audit profile configuration, double-click the Configuration in the Configuration
Navigator, or right-click a Configuration and then select Open Configuration(s)....
Database  This is the database the agent will connect to and send data to.
Click the Browse... button to get a list of all the database profiles that are available.
For further information see Section 9.3, “Database Profile”.
The Audit functionality in MediationZone® is supported for use with the following
databases:
• Oracle
• TimesTen
• Derby
• SQL Server
• SAP HANA
Refresh  Select Refresh to reload the meta data for the tables residing in the selected database.
Use Default Database Schema  Check this to use the default database schema that was added in the Username field of the Default Connection Setup in the Database profile configuration. For more details on how to add a default database schema, see Section 9.3, “Database Profile”.
Note! This is not applicable for all database types. Use Default Database
Schema is available for selection only when accessing Oracle or TimesTen
databases.
Tables within the default schema will be listed without schema prefix.
Table A list of selected audit tables. For further information about adding and editing
tables, see Section 9.1.3, “Adding and Editing a Table Mapping”.
• Counter - A built-in sequence which is incremented with the value passed on with
the auditAdd APL function.
• Key - Used to differentiate between several audit inserts. It is possible to use several keys,
where a unique combination of keys will result in one new row in the database.
If the same key combination is used several times within a batch, the existing row will
be overwritten with new audit data. However, if a later batch uses the same key com-
bination, a new row will be created.
If using more than one key, the key values must be passed in Key Sequence order when
calling the auditAdd or auditSet APL functions. The Audit functions are further
described in the APL Reference Guide.
Note that this is not a database key and it must be kept as small as possible. A value
that is static during the whole batch must never be used as a key value.
• Value - A column holding any type of value to be set, except for Counter values. This
is used in combination with the auditSet APL function. Another use is mapping
against existing MIM values in the Workflow Properties window.
• Transaction Id - To make sure entries are transaction safe, each table must contain a
column of type NUMBER with a length of at least twelve (or with no size declared at all).
Do not enter or alter any values in this column; it is handled automatically by
the MediationZone® system. The value -1 indicates that the entry is committed and
safe.
Note! The Transaction Id should be indexed for best performance. The contents
will be of low cardinality and could therefore be compressed if supported.
• Unused - Used in case a column must not be populated, that is, set to null.
Key Sequence A key sequence is a defined way to assign a Key value, to identify in which order you need to send along key values when you use the auditAdd or auditSet APL functions.
Each key in a table must have a sequence number in order to be identified when passed
on as parameters to the APL audit functions. The first key is identified as 1, the second
as 2, and so on.
The key sequence will uniquely identify all audit log entries to be inserted per batch.
9.1.4. An Example
To illustrate how Audit may be used, consider a workflow with an Analysis agent that validates and
routes UDRs. Most of the UDRs will be sent on the "COMPLETE" route, while the incomplete
UDRs will be sent on the "PARTIALS" route. If a considerable number of UDRs are routed
to the latter, the batch is canceled.
The output on each route is to be logged in a concealed audit table, including information on canceled
batches. An entry in the table will be made for each batch, and for each route. Hence two entries per
batch.
In this example only the destination key is needed, which will uniquely identify all rows to be inserted
per batch. The name of the destination agent is therefore selected. Note that it is not possible to update an
existing row in the table, only to add new rows. This is to ensure the traceability of data. In order to output
other information than MIM values (which may be mapped in the Workflow Properties window),
the workflow must contain an Analysis or Aggregation agent.
• One column (of type NUMBER) must be reserved for the MediationZone® transaction handling.
This column should be indexed in order to achieve best performance. The contents will be of
low cardinality and could therefore be compressed if supported.
• Consider which column or columns contain tag information, that is, the key. A key may consist
of one or several columns.
2. Create an Audit profile. For further information, see Section 9.1.4.1.1, “Adding the Table Mapping”.
3. Map parameters in the Audit tab of the Workflow Properties dialog to the Audit profile. For further information,
see Section 9.1.4.2, “Workflow Properties - Audit tab”.
4. Design APL code to populate the tables. For further information, see Section 9.1.4.3, “Populating
Audit Tables”.
In the Add and Edit Audit Table Attributes dialogs, the existing table columns are mapped to
valid MediationZone® types.
The data to insert will be put in the UDRs column. Setting it to type Counter makes it possible to
use the auditAdd function to increment the corresponding column value. If Value is used, the
auditSet function can be used to assign a value.
The DESTINATION and UDRs columns in Figure 235, “The Audit Profile” are populated by using
the APL audit functions. The CANCELED column name might be mapped directly to an existing
MIM value.
Value columns are set with the auditSet function. Note that Counter columns are automatically set to 0 (zero)
when a batch is canceled. This is not the case for Value columns.
In the following subsections, the differences between the cases exemplified in Figure 234, “Audit Information
May Be Concealed” are discussed.
Note! In terms of performance, it does not matter how many times an audit function is called.
Each call is saved in memory and a summary for each key is committed at End Batch.
By using the auditAdd function, the user does not have to keep track of the number to increment a
counter column with. At Cancel Batch, the value is set to 0 (zero).
In the current example, each UDR is validated with respect to the contents of the causeForOutput
field. The statistics table is updated to hold information on the number of UDRs sent on the different
routes.
Example 54.
int noPART;

beginBatch {
    noPART = 0;
}

consume {
    if ( input.causeForOutput == "0" ) {
        udrRoute( input, "COMPLETE" );
        auditAdd( "myFolder.count_PARTIALS",
                  "ADMIN.PARTIALS_AUDIT",
                  "UDRS", 1,
                  mimGet( "TTFILES", "Source Filename" ),
                  mimGet( "COMPLETE", "Agent Name" ) );
    } else {
        noPART = noPART + 1;
        if ( noPART < 300 ) {
            udrRoute( input, "PARTIALS" );
            auditAdd( "myFolder.count_PARTIALS",
                      "ADMIN.PARTIALS_AUDIT",
                      "UDRS", 1,
                      mimGet( "TTFILES", "Source Filename" ),
                      mimGet( "PARTIALS", "Agent Name" ) );
        } else {
            cancelBatch( "Too many partials found." );
        }
    }
}
Using the auditSet function for the same example as discussed in the previous section means that the
user has to keep track of the number of records in the APL code. Note that the profile must be updated;
the Counter column must be redefined to Value.
Value columns are not reset when a batch is canceled. Hence there will be entries made in the table
for the UDRs column for all batches.
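As an illustration, Example 54 could be rewritten to use auditSet. This is a sketch only: it assumes that auditSet takes the same profile, table, column, value, and key arguments as auditAdd, and the counter variables noCOMP and noPART are illustrative names.

```apl
int noCOMP;
int noPART;

beginBatch {
    noCOMP = 0;
    noPART = 0;
}

consume {
    if ( input.causeForOutput == "0" ) {
        udrRoute( input, "COMPLETE" );
        // The record count is tracked in APL and assigned explicitly.
        noCOMP = noCOMP + 1;
        auditSet( "myFolder.count_PARTIALS",
                  "ADMIN.PARTIALS_AUDIT",
                  "UDRS", noCOMP,
                  mimGet( "TTFILES", "Source Filename" ),
                  mimGet( "COMPLETE", "Agent Name" ) );
    } else {
        noPART = noPART + 1;
        if ( noPART < 300 ) {
            udrRoute( input, "PARTIALS" );
            auditSet( "myFolder.count_PARTIALS",
                      "ADMIN.PARTIALS_AUDIT",
                      "UDRS", noPART,
                      mimGet( "TTFILES", "Source Filename" ),
                      mimGet( "PARTIALS", "Agent Name" ) );
        } else {
            cancelBatch( "Too many partials found." );
        }
    }
}
```

Note that this requires the UDRS column to be redefined from Counter to Value in the Audit profile.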
9.2. Couchbase Profile
As a client to Couchbase, the profile operates in synchronous mode. When sending a request to
Couchbase, the profile expects a server response, indicating success or failure, before proceeding to
send the next one in queue.
When using the Couchbase profile for Aggregation, it is possible to enable asynchronous mode. In
this mode the Couchbase profile does not wait for a response from Couchbase before sending the next
request in queue. For more information about using the Couchbase profile in Aggregation, see Sec-
tion 11.1.3.13, “Performance Tuning with Couchbase Storage”.
The Couchbase profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow.
Note! Created or updated Couchbase profiles that are used for PCC do not become effective until you
restart the Execution Contexts.
To create a new Couchbase profile, click on the New Configuration button in the upper left part of
the MediationZone® Desktop window, and then select Couchbase Profile from the menu.
To open an existing Couchbase profile, double-click on the configuration in the Configuration Nav-
igator, or right-click on the configuration, and then select Open Configuration(s)....
In a Couchbase profile, there are three tabs: Connectivity, Management, and Advanced.
Bucket Name Enter the bucket that you want to access in Couchbase in this field.
Bucket Password Enter an optional password for the bucket in this field.
Connections Enter, in this field, the number of connections that you want to have between the nodes in your cluster.
Operation Timeout (ms) Enter the number of milliseconds after which database operations should time out.
Operation Queue Max Block Time (ms) Enter the maximum time interval, in milliseconds, that a client will wait to add a new item to a queue.
Retry Interval Time (ms) Enter the time interval, in milliseconds, that you want to wait before trying to read the cluster configuration again after a failed attempt.
Max Number Of Retries Enter the maximum number of retries in this field.
Cluster Nodes In this section, add IP addresses/hostnames and ports of at least one of the
nodes in the cluster. This address information is used by the Couchbase
profile to connect to the cluster at workflow start, and to retrieve the IP
addresses and ports of the other nodes in the cluster.
If the first node in the list cannot be accessed, the Couchbase profile will
attempt to connect to the next one in order. This is repeated until a successful
connection can be established. Hence it is not necessary to add all the nodes,
but it is good practice to do so for a small cluster. For example, if there are
just three nodes, you should add all of them.
You should also add all nodes if Monitoring in the Management Settings
is Active. The specified nodes are used by the Couchbase Monitoring Service
to check the health of the cluster. If none of these nodes are available, the
monitoring will stop.
Admin User Name If you want to create a new bucket that does not exist in your Couchbase
cluster, enter in this field the user name that you stated when installing Couchbase.
Admin Password If you want to create a new bucket that does not exist in your Couchbase
cluster, enter in this field the password that you stated when installing Couchbase.
Bucket Size (MB) Enter the size, in MB, of the bucket you want to create. Once the
bucket is created, you cannot change the size by updating this field.
Number of Replicas Enter the number of replicas you want to have in this field.
Monitoring - Active Select this check box if you want to activate the Couchbase Monitoring Ser-
vice. This service is suitable for High Availability installations, since it will
allow you to detect failing nodes earlier than the monitoring built into
Couchbase itself, and perform automatic failover of nodes.
Note! You must install and configure the Couchbase Monitoring
Service to use this functionality. For more information, see the
Installation Instructions.
For information about how to access the current status of a cluster, see Sec-
tion 4.1.8.5.1, “Couchbase Monitor Service”.
Frequency (ms) Enter the frequency, in milliseconds, with which you want to perform monitoring.
Failure Count Enter the number of failures before performing a failover of a node.
Note! If you have several Couchbase profiles that have Monitoring activated, it is important
that the monitoring configurations for Frequency and Failure Count are the same in all the profiles,
as there is no guarantee which profile these settings are read from.
If the bucket that you specify in the Couchbase profile does not exist, it is created at runtime, i.e. when
accessed in a workflow. This is provided that Admin User Name and Admin Password have been
stated in the Management tab. If the bucket you want to access already exists in your cluster, these
two fields do not have to be filled in.
It is recommended to change these properties when using the Couchbase profile in Aggregation. For
more information about using the Couchbase profile in Aggregation, see Section 11.1.3.13, “Performance
Tuning with Couchbase Storage”.
See the text in the Properties field for further information about the other properties that you can set,
or see the official Couchbase documentation at http://docs.couchbase.com for more detailed descriptions
of the different parameters.
9.3. Database Profile
The Database profile can be used with the following functionality:
• Audit Profile
• Event Notification
What a profile can be used for depends on the selected database type. The supported usage for each
database type is described in Section 9.3.5, “Database Types”.
The Database profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow.
To create a new Database profile configuration, click the New Configuration button in the upper left
part of the MediationZone® Desktop window, and then select Database Profile from the menu.
To open an existing Database profile configuration, double-click the Configuration in the Configuration
Navigator, or right-click a Configuration and then select Open Configuration(s)....
There is one menu item that is specific to Database profile configurations, and it is described in the
following section:
Item Description
External References Select this menu item to Enable External References in an Agent Profile Field.
Please refer to Section 9.5.3, “Enabling External References in an Agent
Profile Field” for further information.
Default Connection Setup Select to configure a default connection. For further information, see
Section 9.3.3.1, “Default Connection Setup”.
Advanced Connection Setup Select to configure the data source connection using a connection
string.
Database Type Select any of the available database types. You may need
to perform some preparations before attempting to connect to the
database for the first time. For information about required preparations,
see Section 9.3.5, “Database Types”.
Connection String Enter a connection string containing information about the database
and the means of connecting to it.
Notification Service This field is used when the selected Database Type is Oracle. For
more information, see Section 9.3.5.4, “Oracle”
Username Enter the database user name.
Password Enter the database password.
Try Connection Click to try the connection to the database, using the configured values.
• Derby
• MySQL
• Netezza
• Oracle
• PostgreSQL
• SAP HANA
• SQL Server
• Sybase IQ
• TimesTen
9.3.5.1. Derby
This section contains information that is specific to the database type Derby.
The Derby database can be used with the following functionality:
• Audit Profile
• Event Notification
9.3.5.1.2. Preparations
The drivers that are required to use the Derby database are bundled with the MediationZone® software
and no additional preparations are required.
9.3.5.2. MySQL
This section contains information that is specific to the database type MySQL.
The MySQL database can be used with the following functionality:
• Event Notification
9.3.5.2.2. Preparations
This section describes preparations that you must perform before attempting to connect to a MySQL
database.
When performing table lookups to a MySQL database, the result may not be updated unless the Exe-
cution Context is restarted. Use the following statement to avoid this issue:
To decrease re-connection overhead, database connections are saved in a connection pool. To set the
connection pool size, open the executioncontext.xml file and edit its value:
9.3.5.3. Netezza
This section contains information that is specific to the database type Netezza.
The Netezza database can be used with the following functionality:
• sqlExec (APL)
For data loading, it is recommended that you use the SQL Loader Agent. You can also use the APL
function sqlExec for both data loading and unloading. For information on how to use sqlExec with
Netezza see Section 9.3.5.3.3, “APL Examples”. For further details, see also the IBM Netezza Data
Loading Guide.
9.3.5.3.2. Preparations
This section describes preparations that you must perform before attempting to connect to a Netezza
database.
A driver, provided by your IBM contact, is required to access the Netezza database. This driver must
be stored on each host (Platform or Execution Context) that will connect to a Netezza database.
For MediationZone® to access the Netezza database, the classpath must be specified. Edit the classpath
in the files platform.xml and executioncontext.xml for each Execution Context. For ex-
ample:
<classpath path="/opt/netezza/nzjdbc.jar"/>
After the classpath has been set, copy the jar file to the specified path.
The Platform and the Execution Contexts must be restarted for the changes in platform.xml and
executioncontext.xml to become effective.
This section gives examples of how to use the APL function sqlExec with a Netezza Database profile.
Example 55. Load data from an external table on the Netezza database host
initialize {
int rowcount = sqlExec("NETEZZA.NetezzaProfile",
"INSERT INTO mytable
SELECT * FROM EXTERNAL '/tmp/test.csv' USING (delim ',')");
}
Example 56. Load data from an external table on the Execution Context host
initialize {
int rowcount = sqlExec("NETEZZA.NetezzaProfile",
"INSERT INTO mytable SELECT * FROM EXTERNAL '/tmp/test.csv'
USING (delim ',' REMOTESOURCE 'JDBC')");
}
Example 57. Unload data to an external table on the Execution Context host
initialize {
int rowcount = sqlExec("NETEZZA.NetezzaProfile",
"CREATE EXTERNAL TABLE '/tmp/test.csv'
USING (DELIM ',' REMOTESOURCE 'JDBC')
AS SELECT * FROM mytable");
}
9.3.5.4. Oracle
This section contains information that is specific to the database type Oracle.
The Oracle database can be used with the following functionality:
• Audit Profile
• Event Notification
9.3.5.4.2. Preparations
If Oracle was not set up during installation of MediationZone® , you must perform additional installation
steps before attempting to connect to an Oracle database. For information about enabling client access to
Oracle, see the Installation Instructions.
To make the Connection String text area and the Notification Service text field appear, select the
Advanced Connection Setup radio button. The Username, Password and Database Type fields will
remain.
If MediationZone® is installed with the Oracle database, the Oracle RAC functionality Fast Connection
Failover (FCF) is available. MediationZone® supports FCF; from the MediationZone® perspective,
some exceptions will normally be generated during a RAC instance failover.
When FCF is configured, MediationZone® detects a lost connection, clears the database connection
pool, and reinitializes the connection pool.
During a RAC instance failover you might experience exceptions, for example when database transactions
such as updates and inserts are performed. Database exceptions are logged in the MediationZone®
system.
The Platform and Execution Contexts support the failover behavior. However, note that neither
database collection nor forwarding agents support FCF. These agents have a different type of database
connection pool implementation.
Connection String A connection string can be entered in this text field. The connection string can contain a SID or a service name. The string added will not be modified by the underlying system. If a connection string is longer than the text area, a vertical scroll bar will be displayed to enable viewing and editing of the connection string.
Notification Service Enter the Configuration that enables the Oracle Notification Service daemon (ONS) to establish Fast Connection Failover (FCF). The ONS string that you enter should at least specify the agent ONS configuration attribute, which is made up of a comma-separated list of host:port pairs. The hosts and ports represent the remote ONS daemons that are available on the RAC agents. For further information, see the installation guidelines for MediationZone® Oracle RAC in the MediationZone® Installation User Guide.
9.3.5.5. PostgreSQL
This section contains information that is specific to the database type PostgreSQL.
The PostgreSQL database can be used with the following functionality:
• Event Notification
9.3.5.5.2. Preparations
The drivers that are required to use the PostgreSQL database are bundled with the MediationZone®
software and no additional preparations are required.
9.3.5.6. SAP HANA
This section contains information that is specific to the database type SAP HANA.
The SAP HANA database can be used with the following functionality:
• Audit Profile
• Event Notification
Note! The SAP HANA database does not guarantee 99.999% availability. Therefore, it is not
recommended to use MediationZone® with a JDBC connection to SAP HANA for real-time
applications that require 99.999% availability.
9.3.5.6.2. Preparations
A driver, provided by your SAP contact, is required to connect to a SAP HANA database from
MediationZone® . This driver must be stored on each host (Platform or Execution Context) that will connect
to a SAP HANA database.
The classpath must also be specified. Edit the classpath in the files platform.xml and
executioncontext.xml for each Execution Context. For example:
<classpath path="/opt/sapHana/ngdbc/ngdbc.jar"/>
After the classpath has been set, copy the jar file to the specified path.
The Platform and the Execution Contexts must be restarted for the changes in platform.xml and
executioncontext.xml to become effective.
9.3.5.7. SQL Server
This section contains information that is specific to the database type SQL Server.
The SQL Server database can be used with the following functionality:
• Audit Profile
• Event Notification
Note! For SQL Server, the column type timestamp is not supported in tables accessed by
MediationZone® . Use column type datetime instead. See also the System Administration
Guide for information about time zone settings.
9.3.5.7.2. Preparations
The drivers that are required to use the SQL Server database are bundled with the MediationZone® software
and no additional preparations are required.
9.3.5.8. Sybase IQ
This section contains information that is specific to the database type Sybase IQ.
The Sybase IQ database can be used with the following functionality:
• Event Notification
9.3.5.8.2. Preparations
The Sybase JDBC driver has to be downloaded to the Platform in order to connect to a Sybase IQ
database from MediationZone® .
1. Go to the Sybase web page and download jConnect for JDBC from the "Product Download Center":
http://www.sybase.com/download
<classpath path="3pp/jconn4.jar"/>
The APL function closePooledConnections enables you to close a pooled connection with the
Sybase IQ server. This feature helps you eliminate invalid connections.
Note! This function only closes inactive connections, regardless of how long the connections
have been idle.
int closePooledConnections(string dbProfile)
Parameters:
Example 58.
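The following is a sketch of how closePooledConnections might be called from APL, based on the signature above. The profile name "Default.SybaseProfile" is an illustrative assumption.

```apl
deinitialize {
    // Close inactive pooled connections to the Sybase IQ server.
    // "Default.SybaseProfile" is an illustrative Database profile name.
    int closed = closePooledConnections( "Default.SybaseProfile" );
    debug( closed );
}
```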
The default maximum number of connections on an Execution Context is five. You can tune this
number by setting the property sybase.iq.pool.maxlimit in the executioncontext.xml file.
Example 59.
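A sketch of the property setting in executioncontext.xml, following the property element format used elsewhere in this guide; the value 10 is illustrative.

```xml
<!-- Maximum number of pooled Sybase IQ connections (illustrative value) -->
<property name="sybase.iq.pool.maxlimit" value="10"/>
```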
By default there is no timeout value defined for the socket tied to a database connection. This means
that a running query could get stuck if the database suddenly becomes unreachable. To specify
a timeout value, in milliseconds, set the property sybase.jdbc.socketread.timeout in the
executioncontext.xml file.
Example 60.
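A sketch of the socket timeout property in executioncontext.xml; the value 60000 (60 seconds) is illustrative.

```xml
<!-- Socket read timeout in milliseconds (illustrative value) -->
<property name="sybase.jdbc.socketread.timeout" value="60000"/>
```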
Note! When using the timeout property, you must ensure that you set a limit that exceeds your longest-running
query; otherwise you might terminate a connection while it is executing a query.
9.3.5.9. TimesTen
This section contains information that is specific to the database type TimesTen.
The TimesTen database can be used with the following functionality:
• Audit Profile
• Event Notification
Note! When storing a date MIM value in TimesTen, do not use the DATE column type. Instead, use
the TIMESTAMP type.
9.3.5.9.2. Preparations
The TimesTen Client must be installed on every host (Platform or Execution Context) that is connected
to a TimesTen data source through MediationZone® .
Edit the files platform.xml and executioncontext.xml on the hosts that run TimesTen.
Assuming that TimesTen is installed at /opt/TimesTen, add the following line:
<classpath path="/opt/TimesTen/tt70/lib/ttjdbc6.jar"/>
Additionally, the LD_LIBRARY_PATH variable in the shell from which you launch the Platform or
Execution Context should include the path to the TimesTen Client native library.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TT_HOME/tt70/lib
The Platform and the Execution Contexts must be restarted for the changes in platform.xml and
executioncontext.xml to become effective.
The direct driver requires that TimesTen is installed on the Execution Context hosts. Using the direct
driver improves performance.
To decrease re-connection overhead, database connections are saved in a connection pool. To configure
the connection pool size, set the property timesten.connectionpool.maxlimit in the
executioncontext.xml file.
Example 62.
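A sketch of the connection pool property in executioncontext.xml, following the property element format used elsewhere in this guide; the value 10 is illustrative.

```xml
<!-- Maximum number of pooled TimesTen connections (illustrative value) -->
<property name="timesten.connectionpool.maxlimit" value="10"/>
```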
9.4. Distributed Storage Profile
The use of the Distributed Storage profile and profiles for specific distributed storage types, like
Couchbase and Redis, makes it easy to change the database setup with a minimum of impact on the
configured business logic. This simplifies the process of creating flexible real-time solutions with high
availability and performance.
APL provides functions to read, store and remove data in one or multiple distributed storage instances
within the same workflow. It also provides functions for transaction management and bulk processing.
For information about which APL functions are applicable for the Distributed Storage profile, see
the APL Reference Guide.
Note! When using Redis for the Distributed Storage, the following functions cannot be used:
beginTransaction
commitTransaction
rollbackTransaction
dsCreateKeyIterator
destroyKeyIterator
getNextKey
In the current version of MediationZone® , the Couchbase profile and the Redis profile are available
for use with the Distributed Storage profile. For further information about these profiles, see Section 9.2,
“Couchbase Profile” and Section 9.6, “Redis Profile”.
The Distributed Storage profile is loaded when you start a workflow that depends on it. Changes to
the profile become effective when you restart the workflow.
9.4.2. Configuration
1. To open the Distributed Storage profile configuration, click the New Configuration button in the
upper left part of the MediationZone® Desktop window, and then select Distributed Storage
Profile from the menu.
3. Click Browse... and select the storage profile you want to apply.
9.5. External Reference Profile
The External Reference values are read during runtime when needed by the workflow or a profile.
The External Reference profile is loaded when you start a workflow that depends on it. Changes to
the profile become effective when you restart the workflow.
Note! The properties file should reside on the MediationZone® platform host.
A properties file contains Key-Value pairs. The typical format of a properties file is:
Key1=Value1
Key2=Value2
Key3=Value3
The Value data type can be: a string, a boolean, a password or a numeric value.
Boolean values can be represented by true, false, yes, or no, and are not case sensitive.
Password values must be represented by a string that has been encrypted by the encryptpassword
command in mzsh.
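For example, a password entry in a properties file might look as follows. The key name dbPassword is illustrative, and the value placeholder stands for the encrypted string returned by the mzsh encryptpassword command.

```
dbPassword=<string returned by mzsh encryptpassword>
```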
Note! If you are using characters encoded with something other than iso-8859-1 in your property
file for External References, the property file has to be converted to ASCII by using the Java
tool native2ascii. See the JDK product documentation for further information about using native2ascii.
In Figure 245, “The External Reference Profile Configuration”, the extRef.prop file contains the
following data:
cd1=/mnt/storage/col1
cd2=/mnt/storage/col2
cd3=/mnt/storage/col3
Note! If the file contains two or more identical keys with different values, the last value is the
one that is applied.
Add a backslash ("\") to continue the value on the next line. If the value is a multi-line string, use
("\n\") to separate the rows.
key1=PrettyLongValueThat\
ContinuesOneTheSecondLine
key2=north\n\
center\n\
south
To create a new External Reference profile configuration, click the New Configuration button in the
upper left part of the MediationZone® Desktop window, and then select External Reference Profile
from the menu.
To open an existing External Reference Profile Configuration, double-click the configuration in the
Configuration Navigator, or right-click a configuration and then select Open Configuration(s)....
External Reference Type From the drop-down list select the External Reference source type.
Properties File Enter the path and the name of the Properties file.
Local Key The name of the External Reference in MediationZone® .
Properties File Key The name of the External Reference in the Properties file.
1. To create an External Reference profile configuration, click the New Configuration button in the
upper left part of the MediationZone® Desktop window, and then select External Reference
Profile from the menu.
9.5.2. Configuration
You enable external referencing in MediationZone® by:
3. From the button panel, click Workflow Properties to open the Workflow Properties dialog.
4. On the Workflow Table tab, check the Per Workflow or Default check-boxes for the fields that
you want displayed as columns in the Workflow Table.
5. Check Enable External Reference, and click Browse to select your External Reference profile.
See Section 9.5.1.3.1, “To create an External Reference profile:”.
6. Click OK. You have now enabled the access to external references for selected workflow table
fields.
2. To enter a value either double-click the field, or right-click it and then select Edit Cell.
3. Enter the name of the reference, Local Key, whose value you want applied to the field during the
workflow run-time, and then press Enter.
Note! External referencing is applicable only from the following agent profiles:
• Inter Workflow
• Database
• Archiving
• Aggregation
• Duplicate UDR
• Workflow Bridge
For further information on the agents listed above, see the relevant appendix sections.
2. Select External References from the Edit menu to open the external references view.
Enable External Reference Check to enable external referencing of the agent profile fields.
3. Check Enable External Reference, and click Browse to select your External Reference profile.
4. In the Selected Agent profile fields table, select the external reference keys to use by checking
Enable and filling in the External Reference Key field.
5. Click OK; you have now enabled external references for the selected profile field.
The password values must be represented by a string that has been encrypted with the mzsh encryptpassword command.
When using the mzsh encryptpassword command you can select to use keys that have been
generated using the Java standard tool keytool. The keys to be used are determined by using aliases,
and if no alias is used, the default key will be used for the encryption. See the JDK product documentation
for further information about using keytool in different scenarios.
If aliases are to be used, the full path and password of the keystore have to be indicated by including
the mz.cryptoservice.keystore.path and mz.cryptoservice.keystore.password
properties in the platform.xml file. See the section describing System Properties in the System
Administration Guide for further information about these properties. The keystore must also contain
keys for all the aliases you want to use.
Note! The same keytool can be used for generating keys for RCP encryption. However, these
keys are of a different type and cannot be used for External References.
This is an example of how passwords can be encrypted with crypto service keystore keys:
Note!
• If you enter a -keysize that is larger than 128, you may get a message saying
that the JCE Unlimited Strength Jurisdiction Policy Files need to be installed. See the
Oracle product documentation for further information about this.
• The -storepass flag is optional. If you do not enter a -storepass you will
be prompted for a password.
• You will be asked whether you want to use the same password for the key as for the
keystore; MediationZone® requires that the same password is used.
3. Encrypt the password to the keystore using the mzsh encryptpassword command with
the default key:
<property name="mz.cryptoservice.keystore.path"
value="<suitable directory>/myKeystore.jks"/>
<property name="mz.cryptoservice.keystore.password"
value="<the encrypted password>"/>
5. Encrypt the passwords with aliases that you want to use in your external references:
The returned password string can now be pasted into your External References properties file
and then be used by either the Database profile, or any of the agents where passwords are
available via External References.
The Redis profile is loaded when you start a workflow that depends on it. Changes to the profile become
effective when you restart the workflow.
To create a new Redis profile configuration, click the New Configuration button in the upper left part
of the MediationZone® Desktop window, and then select Redis Profile from the menu.
To open an existing Redis Profile Configuration, double-click the Configuration in the Configuration
Navigator, or right-click a Configuration and then select Open Configuration(s)....
In a Redis Profile Configuration, there are two tabs: General and Advanced.
To edit any Configurations in the selected profile, the Active check box has to be cleared and
the profile saved.
Identity The Identity of the Redis profile is used as a lookup key when referencing the
profile from a workflow, or other context, and has to be unique.
Type In this list you select the type of Redis profile you want to use: HA or Simple.
Active When this check box is selected and the profile is saved, the monitoring function
will be activated.
Redis Instances In this table you add all the redis instances you want to include in the profile. Each
instance is configured with:
Use password authentication If you want to use password authentication, select the Use password
authentication check box and enter the password you want to use to log in to the Redis
database. The password has to match the value set for the requirepass property
in the config files described in the System Administration Guide.
Pico Hosts By default the Redis profile is used for all pico instances, but if you want to restrict
usage to specific picos, you can configure this section with:
• Restrict usage to selected pico instances - select this check box in order to select
which picos you want to be able to use this Redis profile.
• Pico - add all the picos that you want to be able to use this Redis profile in this
table.
See the text in the Properties field for further information about the properties you can set.
Using the Table Lookup Service instead of adding tableCreate in each workflow instance
increases throughput through fewer duplicated tables, fewer lookups, and reduced memory consumption.
The Table Lookup Service comprises a profile in which SQL queries are defined, and two APL
functions: one that references the profile and creates a shared table, and one that can be used
for refreshing the table data from the APL code.
The Shared Table profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow and each time you save the profile.
The type of memory allocation used for the shared tables is configured in the Shared Table profile
by selecting a Table Storage parameter and, if relevant, an Index Storage parameter, with the
option to select variable width varchar columns. For further information, see Section 9.7.2,
"The Shared Table Profile Configuration".
For more information regarding memory allocation, see the System Administration Guide.
9.7.2.2. Configuration
The Shared Table profile configuration contains the following settings:
Database Click on the Browse... button and select the Database profile you want to use.
Any type of database that has been configured in a database profile can be used.
See Section 9.3, “Database Profile” for further information.
Release Timeout (seconds) If this check box is selected, the table will be released when the entered
number of seconds has passed since the workflows accessing the table were stopped.
The entered number of seconds must be larger than 0.
If this check box is not selected, the table will stay available until the execution
context is restarted.
Refresh Interval (seconds) Select this check box to refresh the data in the table at the interval
entered. The entered number of seconds must be larger than 0.
If this check box is not selected, the table will only be refreshed when the APL function
tableRefreshShared is used. For more information regarding the function,
see Section 9.7.3.2, "tableRefreshShared".
In case a refresh fails, a new refresh is initiated every 10 seconds, until
a refresh has finished successfully.
Object Select this option to set the Table Storage to Object. If you select this option,
the shared tables are stored as Java objects on the JVM heap.
On Heap Select this option to set the Table Storage to On Heap. If you select this option,
the shared tables are stored in a compact format on the JVM heap. If you select
On Heap, you must select an option for the Index Storage.
Off Heap Select this option to set the Table Storage to Off Heap. If you select this option,
the shared tables are stored in a compact format outside the JVM heap.
Note! You are required to set the jdk parameter in the executioncontext.xml,
for example:
<jdkarg value="-XX:MaxDirectMemorySize=4096M"/>
If you select Off Heap, you must select an option for the Index Storage.
Unsafe Select this option to set the Table Storage to Unsafe. If you select this option,
the shared tables are stored in a compact format. If you select Unsafe, you must
select an option for the Index Storage.
Primitive Lookup Select this option to set the Table Storage to Primitive Lookup. This provides
simple lookup tables with a fast lookup function but they are limited to two
columns of type Int/Long for the key (column 1) and type Short/Int/Long for the
value (column 2). Lookup operations on Primitive Lookup tables are limited
to the equals operation on column 1.
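As an illustration of the idea behind Primitive Lookup, and not of MediationZone's internal storage format, a minimal Java sketch of a two-column primitive table with an equals-only lookup on the key column might look like this (all names are invented for illustration):

```java
// Illustrative sketch of the Primitive Lookup idea: two primitive columns,
// key (column 1) and value (column 2), supporting only equals lookups on
// the key. This is NOT MediationZone's internal representation.
public class PrimitiveLookup {
    private final long[] keys;   // column 1: Int/Long keys
    private final long[] values; // column 2: Short/Int/Long values

    public PrimitiveLookup(long[] keys, long[] values) {
        if (keys.length != values.length) {
            throw new IllegalArgumentException("column length mismatch");
        }
        this.keys = keys;
        this.values = values;
    }

    // Equals-only lookup on column 1; returns defaultValue when absent.
    public long lookup(long key, long defaultValue) {
        for (int i = 0; i < keys.length; i++) {
            if (keys[i] == key) {
                return values[i];
            }
        }
        return defaultValue;
    }
}
```

Because both columns are primitive arrays, there is no per-row object overhead, which is the trade-off this storage option makes in exchange for its restricted query capability.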
Object Select this option to set the Index Storage to Object. If you select this option,
the index is stored as Java objects on the JVM heap. This option is only available
if you have selected On Heap, Off Heap or Unsafe for Table Storage.
Pointer Select this option to set the Index Storage to Pointer. If you select this option,
the index is stored as pointers to the table data. This option is only available if
you have selected On Heap, Off Heap or Unsafe for Table Storage.
Cached Long/Int Pointer Select this option to set the Index Storage to Cached Long/Int Pointer.
This option is only available if you have selected On Heap, Off Heap or Unsafe for
Table Storage. For numeric index columns, the Cached Long/Int Pointer can
be used for faster lookups, but at the cost of slightly higher memory consumption.
Variable Width Varchar Columns Select this check box to enable variable width storage of varchar
columns. This reduces memory usage for columns that are wide and of varying width.
SQL Load Statement In this field, an SQL SELECT statement should be entered in order to create the
contents of the table returned by the tableCreateShared APL function.
Example 64.
For example,
will return a table named MyTable with the columns key and value when
the tableCreateShared function is used together with this profile.
If no data has been fetched from the database, SQL errors in the table lookup will
cause runtime errors (workflow aborts). However, if data has already been fetched
from the database then this data will be used. This will also be logged in the System
Log.
Table Indices If you want to create an index for one or several columns of the shared table, click
the Add... button and add the columns for which you want to create an index.
The index will start with 0 for the first column.
Note! An index will not be created unless there are at least five rows in the
table.
9.7.3. APL
The following functions are included for the Table Lookup Service:
• tableCreateShared
• tableRefreshShared
9.7.3.1. tableCreateShared
Returns a shared table that holds the result of the database query entered in the Shared Table profile.
table tableCreateShared
( string profileName )
Parameters:
Example 65.
initialize {
table myTable = tableCreateShared("Folder.mySharedProfile");
}
will create a shared table called myTable with the columns returned by the SQL query in the
mySharedProfile Shared Table profile.
9.7.3.2. tableRefreshShared
This function can be used for refreshing the data for a shared table configured with a Shared Table
profile. The table will be updated for all workflow instances that are using the table and are running
on the same EC.
table tableRefreshShared
( string profileName )
Parameters:
profileName Name of the Shared Table profile you want to refresh data for.
Returns A refreshed shared table.
Example 66.
table myTable = tableRefreshShared("Folder.mySharedProfile");
will return the shared table called myTable, which uses the mySharedProfile, with refreshed
data.
10.1.1.1. Prerequisites
The reader of this information should be familiar with:
• Standard TCP/IP
When the workflow is activated, the agent connects to the EIU service and waits for data packets
from the DMS-GSP switch to arrive on a predefined port. The AFT/TCP agent may not be combined with
other collectors in the same workflow.
10.1.2.1. Configuration
The AFT/TCP agent configuration window is displayed when you double-click the agent in a workflow,
or right-click the agent and select Configuration...
Keep Alive If enabled, the agent tells the system to perform a continuous test that the remote host
is up and running. The keep-alive functionality will make sure that bad connections
are discovered.
The keep-alive interval is system dependent, and can be displayed with the following
command on Sun Solaris:
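The Solaris command itself is not reproduced in this excerpt. As a hedged illustration of what the Keep Alive option maps to at the socket level, the following Java sketch enables SO_KEEPALIVE on a socket; the probe interval itself remains governed by the operating system, not by the application.

```java
import java.net.Socket;
import java.net.SocketException;

// Illustrative only: enabling SO_KEEPALIVE, which is what a Keep Alive
// option conceptually corresponds to in Java socket code. The keep-alive
// probe interval is configured in the operating system, not here.
public class KeepAliveSketch {
    public static boolean enableKeepAlive(Socket socket) throws SocketException {
        socket.setKeepAlive(true);
        return socket.getKeepAlive();
    }
}
```

With the option enabled, the TCP stack periodically probes an idle connection, so a dead remote host is eventually detected instead of leaving the connection hanging.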
10.1.2.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.
10.1.2.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS. The batch is
then handled through the ECS Inspector or ECS Collection agent.
10.1.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
10.1.2.4.1. Publishes
10.1.2.4.2. Accesses
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported along with the name of the source file that has been collected and inserted into the workflow.
Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; refer to Section 10.1.2.2, “Transaction Behavior” for
further information.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
Reported along with the number set in EIU Port, when a connection to the EIU Host has been
established. The message is only displayed if debug is activated on workflow level.
Reported when the sequence number of the last block, acknowledged by the downstream collector
in the switch start file output request, is not equal to 0 (zero). The switch will then try to send the
whole file again. The message is only displayed if debug is activated on workflow level.
10.2.1.1. Prerequisites
The reader of this information should be familiar with:
• FTAM
10.2.1.2. Documentation
• Lucent Data Link Interface Specification
The agent does not communicate directly with the 5ESS switch. Instead it connects via the FTAM
Interface service, which must be running on a host in the MediationZone® network. The advantage of
this implementation is that only one host has to be equipped with FTAM software. For further
information about how the FTAM Interface service is operated, see Section 10.2.3, "FTAM Interface
Service".
When activated, the agent connects to the FTAM Interface service and requests to retrieve the next
file, based on the file prefix name and the file generation number from the named host.
The FTAM/5ESS agent may not be combined with other collectors in the same workflow.
10.2.2.1. Configuration
The FTAM/5ESS agent configuration window is displayed when you double-click the agent in a workflow,
or right-click the agent and select Configuration...
Example 67.
The configuration in Figure 252, “FTAM/5ESS agent configuration window, Switch tab.” can
be used to retrieve the following switch files in SAFE state:
06091C.2216
06092C.2217
06093C.2218
06094C.2219
06094D.2220
...
When the agent is asking for the first file (06091C.2216) the corresponding file name pattern
will be *[A-Z].2216;STATE=SAFE.
Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.
10.2.2.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.
10.2.2.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
10.2.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
10.2.2.4.1. Publishes
10.2.2.4.2. Accesses
For further information about the agent message event type, see the MediationZone® Desktop user's
guide.
Reported along with the name of the source file, that has been collected and inserted into the work-
flow.
Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted. For further information, see Section 10.2.2.2, “Transaction
Behavior”.
Figure 255. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.
It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:
The root directory contains internal state information for recovery and log files.
10.3.1.1. Prerequisites
The reader of this information should be familiar with:
• FTAM
The FTAM/EWSD agent usually collects a cyclic file, since that is how Siemens EWSD switches generate
traffic data. When activated, the agent connects to the FTAM Interface service and requests the new
data. The switch keeps track of the data to be collected by using two parameters: the begin and end
copy area pointers.
After the FTAM/EWSD agent has safely collected the data, a delete request is issued, resulting in
the begin copy area pointer being moved to the end. The collected data is saved in files,
each containing one copy area (all data from one(1) activation).
The FTAM/EWSD agent may not be combined with other collectors in the same workflow.
Since the FTAM/EWSD agent is the active part, it has to be scheduled to be invoked periodically.
10.3.2.1. Configuration
The FTAM/EWSD agent configuration window is displayed when you double-click the agent in a workflow,
or right-click the agent and select Configuration...
Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.
10.3.2.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.
10.3.2.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
10.3.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
10.3.2.4.1. Publishes
10.3.2.4.2. Accesses
For further information about the agent message event type, see the MediationZone® Desktop user's
guide.
Reported along with the name of the source file, when the file given in Filename has been collected
and inserted into the workflow.
Reported along with the name of the source file, each time a Cancel Batch message is received. This
assumes the workflow is not aborted. For further information, see Section 10.3.2.2, “Transaction
Behavior”.
Figure 259. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.
It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:
The root directory contains internal state information for recovery and log files.
10.4.1.1. Prerequisites
The reader of this information should be familiar with:
• FTAM
10.4.1.2. Documentation
• FTAM Responder Application, 56/155 17-ANZ 216 01 Uen, Ericsson
The FTAM/IOG agent collects subfiles that are part of a composite main file on the IOG. When
activated, the agent connects to the FTAM Interface service and requests the new data. The FTAM/IOG
agent keeps track of the data to be collected by reading the contents of one or more directory control
files, which maintain the status of the subfiles.
The FTAM/IOG agent may not be combined with other collectors in the same workflow.
10.4.2.1. Configuration
The FTAM/IOG agent configuration window is displayed when you double-click the agent in a workflow,
or right-click the agent and select Configuration...
Directory Control File 1 The name of the first directory control file as specified in the IOG.
Additional directory control files can be specified in the Advanced tab.
Main Filename The name of the main file as defined in the IOG.
Stop at Subfile A subfile sequence number in the range of 0001-9999.
Regular Expression A regular expression according to Java syntax, using the subfile name as
input. The result is the names of the subfiles to be collected.
Remove After Collection If enabled, the source files will be removed from the IOG after the
collection.
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.
Directory Control File [2-4] The names of up to three additional directory control files as defined
in the IOG.
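A minimal sketch of how such a Regular Expression is evaluated against a subfile name may help; the pattern and file names below are invented for illustration and are not taken from any IOG configuration:

```java
import java.util.regex.Pattern;

// Hypothetical example: the agent applies the configured Java regular
// expression to each subfile name, and names that match (as a full match)
// are collected. Pattern and file names here are invented for illustration.
public class SubfileFilter {
    public static boolean matches(String regex, String subfileName) {
        // Pattern.matches requires the whole name to match the expression.
        return Pattern.matches(regex, subfileName);
    }
}
```

For instance, with the invented pattern TT.*[02468], only subfile names whose last digit is even would be selected.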
10.4.2.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.
10.4.2.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
10.4.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
10.4.2.4.1. Publishes
10.4.2.4.2. Accesses
For further information about the agent message event type, see the MediationZone® Desktop user's
guide.
Reported along with the name of the source file, when the file given in Filename has been collected
and inserted into the workflow.
Reported along with the name of the source file, each time a Cancel Batch message is received. This
assumes the workflow is not aborted. For further information, see Section 10.4.2.2, “Transaction
Behavior”.
Figure 263. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.
It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:
The root directory contains internal state information for recovery and log files.
10.5.1.1. Prerequisites
The reader of this information should be familiar with:
• FTAM
10.5.1.2. Documentation
• Nokia Data File Transfer from VDS Device to Postprocessing System
The FTAM/Nokia agent collects files stored in a circular buffer in the Virtual Storing Device (VDS)
of the DX200 switch. When activated, the agent first reads the transfer control file, TTTCOF, to check
the validity of its timestamps. The agent then reads the storage control file, TTSCOF, to check the
number of data files that can be collected. Once the files are safely transferred, the agent updates
TTTCOF, enabling the VDS to overwrite the collected data.
The FTAM/Nokia agent may not be combined with other collectors in the same workflow.
10.5.2.1. Configuration
The FTAM/Nokia agent configuration window is displayed when you double-click the agent in a workflow,
or right-click the agent and select Configuration...
Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.
10.5.2.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.
10.5.2.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
10.5.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
10.5.2.4.1. Publishes
Source User Name This MIM parameter contains the User Name.
10.5.2.4.2. Accesses
For further information about the agent message event type, see the MediationZone® Desktop user's
guide.
Reported along with the name of the source file, when the file given in Filename has been collected
and inserted into the workflow.
Reported along with the name of the source file, each time a Cancel Batch message is received. This
assumes the workflow is not aborted. For further information, see Section 10.5.2.2, “Transaction
Behavior”.
Figure 267. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.
It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:
The root directory contains internal state information for recovery and log files.
10.6.1.1. Prerequisites
The reader of this information should be familiar with:
• FTAM
10.6.1.2. Documentation
• Alcatel STR-FTAM-SERVICES 214 7944 AAAA
The agent does not communicate directly with the S12 switch. Instead it connects via the FTAM
Interface service, which must be running on a host in the MediationZone® network. The advantage of
this implementation is that only one host has to be equipped with FTAM software. For further
information about how the FTAM Interface service is operated, see Section 10.6.3, "FTAM Interface Service".
When activated, the agent connects to the FTAM Interface service and requests all available data from
the specified cyclic file.
The FTAM/S12 agent may not be combined with other collectors in the same workflow.
10.6.2.1. Configuration
The FTAM/S12 agent configuration window is displayed when you double-click the agent in a workflow,
or right-click the agent and select Configuration...
Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.
10.6.2.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.
10.6.2.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
10.6.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
10.6.2.4.1. Publishes
10.6.2.4.2. Accesses
For further information about the agent message event type, see the MediationZone® Desktop user's
guide.
Reported along with the name of the source file, that has been collected and inserted into the work-
flow.
Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted. For further information, see Section 10.6.2.2, “Transaction
Behavior”.
Figure 271. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.
It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:
The root directory contains internal state information for recovery and log files.
10.7.1.1. Prerequisites
The reader of this information should be familiar with the:
• MediationZone® Platform
10.7.2. Overview
The FTP/DX200 collection agent collects data files from a DX200 network element and inserts them
into a MediationZone® workflow by using the FTP or SFTP protocol. To do this, the agent:
• Reads the storage control file TTSCOFyy.IMG, which specifies what to collect.
• Registers every file that has been successfully collected in the transaction control file
TTTCOFyy.IMG.
Note! By default, the agent will skip files if the sequential order has been lost, and files that
have been overwritten (reaching FULL state before being set to OPEN state) will not be collected.
See the System Administration Guide for further information about these properties.
10.7.3. Preparations
Prior to configuring a DX200 agent to use SFTP, consider the following preparation notes:
• Server Identification
• Attributes
• Authentication
• Server Keys
mz.ssh.known_hosts_file
This property is set in executioncontext.xml and controls where the file is saved. The default
value is ${mz.home}/etc/ssh/known_hosts.
The SSH implementation uses JCE (Java Cryptography Extension), which means that there may be
limitations on key sizes for your Java distribution. This is usually not a problem, but there are
cases where the unlimited strength cryptography policy is needed, for instance if the host
RSA keys are larger than 2048 bits (depending on the SSH server configuration). This may require
that you update the Java Platform that runs the Execution Context.
For unlimited strength cryptography on the Oracle JRE, download the JCE Unlimited Strength
Jurisdiction Policy Files from http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
Replace the jar files in $JAVA_HOME/jre/lib/security with the files in this
package. The OpenJDK JRE does not require special handling of the JCE policy files for unlimited
strength cryptography.
10.7.3.2. Attributes
The DX200 agent supports the following SFTP algorithms:
10.7.3.3. Authentication
The DX200 agent supports authentication through either username/password or private key. Private
keys can optionally be protected by a key password. Most commonly used private key files can be
imported into MediationZone®.
keyType The type of key to be generated. Both RSA and DSA key types are supported.
directoryPath The directory in which you want to save the generated keys.
Example 68.
The private key may be created using the following command line:
When the keys are created the private key may be imported to the DX200 agent:
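The command lines of Example 68 are not reproduced in this excerpt. As a sketch of the generation step only, the following Java code creates a key pair of one of the supported key types using the JDK's standard API; this is not the vendor tool, and exporting the key to a file and importing it into the agent are omitted.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;

// Sketch of the key generation step using the JDK's KeyPairGenerator.
// Both RSA and DSA key types are supported by the agent; "RSA" below is
// one example, and "DSA" works the same way.
public class KeyGenSketch {
    public static KeyPair generate(String keyType, int bits) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance(keyType);
        kpg.initialize(bits);
        return kpg.generateKeyPair();
    }
}
```

In practice the private key would then be written to a file in a supported format and imported into the DX200 agent, as described above.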
The agent uses a file with the known hosts and keys. It will accept the key supplied by the server if
either of the following is fulfilled:
1. The host is previously unknown. In this case the public key will be registered in the file.
2. The host is known and the public key matches the old data.
3. The host is known but has a new key, and the user has been configured to accept the new key.
For further information, see the Advanced tab.
If the host key changes for some reason, the file will have to be removed (or edited) in order for the
new key to be accepted.
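The three acceptance rules above can be sketched as a decision function. The names and parameters below are invented for illustration; the agent performs the actual check against the known_hosts file.

```java
// Sketch of the host-key acceptance rules, expressed as a decision
// function. Names are invented for illustration only.
public class HostKeyPolicy {
    /**
     * @param knownKey      the key stored for this host, or null if the host is unknown
     * @param presentedKey  the key supplied by the server
     * @param acceptChanged whether acceptance of changed keys is configured (Advanced tab)
     */
    public static boolean accept(String knownKey, String presentedKey, boolean acceptChanged) {
        if (knownKey == null) {
            return true;                // 1. unknown host: register the key and accept
        }
        if (knownKey.equals(presentedKey)) {
            return true;                // 2. known host with a matching key
        }
        return acceptChanged;           // 3. changed key: only if configured to accept
    }
}
```

The third branch is why a changed host key is rejected by default and the known_hosts file must be removed or edited, unless acceptance of new keys has been configured.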
10.7.4. Configuration
To configure the FTP/DX200 collection agent, in the Workflow configuration, either double-click on
the agent's icon, or right-click on the agent and then select the Configuration option in the popup
menu. The agent's configuration dialog box will then open. The dialog contains three tabs: Switch,
Advanced and TTSCOF Settings.
The Switch tab includes configuration settings that are related to the remote host and the directory
where the control files are located. In this tab you specify from which VDS device the control files
are retrieved, and the time zone location of the VDS device.
Host Name Enter the name of the Host or the IP address of the switch that is to be connected.
Transfer Protocol Choose transfer protocol.
Authenticate With Choice of authentication mechanism. Both password and private key authentic-
ation are supported. When you select Private Key, a Select... button will appear,
which opens a window where the private key may be inserted. If the private
key is protected by a passphrase, the passphrase must be provided as well. For
further information about private keys, see Section 10.7.3.3, “Authentication”.
User Name Enter the name of the user from whose account on the remote Switch the FTP
session is created.
Password Enter the user password.
Root Directory Enter the physical path of the source directory on the remote Host, where the
control files are saved.
Switch Time Zone Select the time zone location. The time zone is used when updating the transaction
control file.
VDS Device No. Enter the network element device from where the control files are retrieved.
The Advanced tab includes configuration settings that are related to more specific use of the FTP
service.
For example: If you select 0001, data file number 99 will include the following four digits: 0099.
Ends With VDS Device No. Select this check box to create data file names that end with VDS device No.
Server Port Enter the port number for the server to connect to, on the remote Switch.
Note! Make sure to update the Server Port when changing the
Transfer Protocol.
Number Of Retries Enter the number of attempts to reconnect after temporary communication
errors.
Retry Interval (ms) Enter the time interval, in milliseconds, between connection attempts.
Local Data Port Enter the local port number that the agent will listen to for incoming data
connections.
Transfer Type Select either Binary or ASCII transfer of the data files.
FTP Command Trace Select this check box to generate a printout of the FTP commands and responses. This printout is logged in the Event Area of the Workflow Monitor.
The TTSCOF Settings tab includes configuration settings that allow you to adjust the default settings for the FTP DX200 agent.
Collect when file is present on only one WDU With this setting you can select to allow files to be collected, even though they are only present on one WDU. This may be useful if one of the WDUs cannot be reached for some reason. Default is No, which means that files will only be collected if they are present on both WDUs.
Note! WDU is short for Winchester Drive Unit, and each VDS
(Virtual Data Storage) has two WDUs.
WDU0 Path In this field you can specify the path to WDU0. This setting is optional.
WDU1 Path In this field you can specify the path to WDU1. This setting is optional.
Select default collection WDU Select the WDU you want to use as default in this list. WDU 1 is default.
Collect files with bit 5 In this list you can select if you only want to collect files where bit 5 IS NOT set (Must not be set), or where bit 5 IS set (Must be set), or if you always want to collect files regardless of whether bit 5 is set or not (May be set).
Collect files with bit 6 In this list you can select if you only want to collect files where bit 6 IS
NOT set (Must not be set), or where bit 6 IS set (Must be set), or if you
always want to collect files regardless of whether bit 6 is set or not (May
be set).
Collect files with bit 7 In this list you can select if you only want to collect files where bit 7 IS
NOT set (Must not be set), or where bit 7 IS set (Must be set), or if you
always want to collect files regardless of whether bit 7 is set or not (May
be set).
10.7.4.4.1. Emits
The agent emits commands that change the state of the file currently being processed.
Command Description
Begin Batch Will be emitted right before the first byte of each collected file is fed into a workflow.
End Batch Will be emitted just after the last byte of each collected file has been fed into the system.
10.7.4.4.2. Retrieves
The agent retrieves commands from other agents and, based on them, changes the state of the file currently being processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
10.7.4.5. Introspection
The introspection is the type of data an agent expects and delivers.
10.7.4.6.1. Publishes
10.7.4.6.2. Accesses
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported together with the name of the control (TTTCOFxx.IMG) and data file that have been collected and inserted into the workflow.
Reported together with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; see Transaction Behavior, Cancel Batch.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
• Command trace
A printout of the control channel trace. This is only valid if FTP command trace in the Advanced
tab is selected.
10.8.1.1. Prerequisites
The reader of this information should be familiar with:
10.8.2. Overview
The FTP/EWSD agent enables collection of cyclic files from Siemens EWSD switches into the MediationZone® workflow, by using the FTP protocol.
When the workflow is activated, the FTP/EWSD agent connects to the configured FTP service and
requests information about the cyclic file by using the LIST FTP command. The switch returns inform-
ation about the cyclic file. The agent compares the returned information field values COPY-DATA-
BEGIN and LAST-RELEASE with its internal state. If LAST-RELEASE has the same value as the one the
agent holds, the agent deletes the cyclic file with the FTP delete command DELE, and thereby releases
the currently established copy space.
The agent then retrieves the file contents with RETR and inserts it into the workflow. The RETR
command establishes a new copy space. When all the data is successfully collected into the workflow,
the agent generates another LIST command in order to retrieve the value of COPY-DATA-BEGIN
and of LAST-RELEASE. The agent saves these values and then generates the DELE command to
release the space of the collected data.
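The collection cycle described above can be sketched as follows. This is a hedged simulation: the FakeSwitch class stands in for a real FTP session to an EWSD switch, and only mirrors the LIST, RETR, and DELE commands named in the text.

```python
# Simulated sketch of the FTP/EWSD collection cycle described above.
# FakeSwitch is a hypothetical stand-in for an FTP session; in the real
# agent these methods correspond to FTP commands sent over the wire.

class FakeSwitch:
    def __init__(self, data, last_release):
        self.data = data
        self.last_release = last_release
        self.deleted = 0

    def list(self):
        # Info fields the switch reports for the cyclic file (LIST).
        return {"COPY-DATA-BEGIN": 0, "LAST-RELEASE": self.last_release}

    def retr(self):
        # Establishes a new copy space and returns the file contents (RETR).
        return self.data

    def dele(self):
        # Releases the currently established copy space (DELE).
        self.deleted += 1


def collect_cyclic_file(switch, saved_last_release):
    info = switch.list()
    if info["LAST-RELEASE"] == saved_last_release:
        # The agent already holds this value: delete the cyclic file to
        # release the previously established copy space.
        switch.dele()
    contents = switch.retr()   # establishes a new copy space
    info = switch.list()       # re-read COPY-DATA-BEGIN and LAST-RELEASE
    switch.dele()              # release the space of the collected data
    return contents, info["LAST-RELEASE"]
```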
By checking Automatic Seq No Assignment you enable the agent to automatically set these sequence
numbers. The agent connects to the configured FTP service and searches from the ACTIVE file slice,
backwards, to the oldest file slice in the FILLED state. The agent then collects all the file slices, from
the oldest to the most recent file slice.
When collection is complete, the agent releases all the file slices and continues to collect the currently ACTIVE file slice.
10.8.3. Configuration
You open the FTP/EWSD agent configuration view from the workflow editor. In the workflow template,
either double-click the agent icon, or right-click it and select Configuration.
Hostname Enter the host name or the IP-address of the switch that you want the agent
to connect to.
Username Enter the username of the account on the remote switch, to enable the FTP
session to login.
Password Enter the password for the user specified in Username.
Filename Enter the name of the cyclic file that the agent should collect.
Remote File is Cyclic Check to enable a transaction safe retrieval of data from the switch. This is
useful when you want to retrieve statistical switch data.
Multiple File View Check to enable sectioning and thereby a more effective data management
of the file. Clear to stay in single file view. For further information see Section 10.8.2.1, “Multiple File View Option”.
Command Port Enter a number between 1 and 65535 to define the port that the FTP service on the remote switch will use to communicate with the agent.
Timeout (sec) Enter the maximum length of time, in seconds, that the agent should wait for a reply after sending a command before a timeout occurs. 0 (zero) means "wait forever".
Number of Retries The number of times to retry upon TCP communication failure. This applies to the FTP command channel only. An IO failure during the file transfer will not trigger retries. If the value is set to 0 (zero), no retries will be performed.
Delay (sec) Enter the length of the delay period between each connection attempt.
Local Data Port Enter the number of the port through which the agent should expect input data connections (FTP PORT command). Enter 0 (zero) to have the operating system select a currently unused port number, according to the system specifications, for each data connection.
Enter a non-zero value to have the agent use the same local port for all the
data connections.
Passive Mode (PASV) Check when using an FTP passive mode connection.
Currently, Siemens does not support this option. However, some firewalls
require passive mode.
Binary Transfer Check to enable binary transfer. Clear to enable ASCII transfer.
FTP Command Trace Check to debug the communication with the remote switch. See a log of
the commands and responses in the workflow editor Event Area. The
LIST command results are traced as well.
Release File Slice After Retrieval Check to release all the file slices after successful retrieval. File release is initiated by the FTP delete command.
Automatic Seq No Assignment Check to enable automatic numbering of the file slices in Multiple File View mode. If this option is checked, the number management is done in SAMAR.
Don't Collect The Active File Slice Check to have the agent collect only file slices that are prior to the active one.
10.8.4.1. Emits
The agent emits commands that change the state of the file that is currently being processed.
Command Description
Begin Batch Invoked right before the first byte of each file is collected by a workflow.
End Batch Invoked right after the last byte of each file is collected by a workflow.
10.8.4.2. Retrieves
The agent retrieves commands from other agents and, based on them, changes the state of the file currently being processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
If the Cancel Batch behavior, defined for a workflow, is configured to abort the workflow, the agent will not receive the last Cancel Batch message. In such a case, ECS is not involved, and the established copy is not deleted.
10.8.5. Introspection
This section includes information about the data type that the agent expects and delivers.
10.8.6.1. Publishes
10.8.6.2. Accesses
The agent itself does not access any MIM resources.
This section includes the event messages that can be configured for the FTP/EWSD agent.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported along with the name of the source file that has been collected and inserted into the workflow.
Reported along with the name of the current file, every time a Cancel Batch message is received.
If the workflow is aborted no such messages are received. For further information see Section 10.8.4,
“Transaction Behavior”.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
• Command trace
A printout of the control channel trace. Valid only if FTP Command Trace is enabled in the Ad-
vanced tab.
10.9.1.1. Prerequisites
The reader of this information should be familiar with the:
• MediationZone® Platform
10.9.2.1. Configuration
To configure the FTP/NMSC collection agent, in the workflow editor either double-click the agent icon, or right-click it and then select Configuration. The agent configuration dialog box opens.
The Switch tab consists of configuration settings that are related to the remote host and directory,
where the control and data files are located, and of timezone location specification.
Host Name Enter the name of the Host or the IP-address of the switch that is to be connected
User Name Enter the name of the user whose account on the remote Switch will enable the
FTP session to be created
Password Enter the user password
File Information Detailed specification about control files that are going to be collected by the
agent.
Data Type Select from the drop down list the data file format, either SMS or MMS, that
the agent should collect.
File Directory Enter the physical path to the source directory on the remote Host, where the
control and data files are saved.
Switch Time Zone Select the timezone location. Timezone is used when updating the transaction
control file.
The Advanced tab includes configuration settings that are related to more specific use of the FTP
service.
File Name Detailed specification about the data file that is to be collected by the agent
Prefix Enter the data file name prefix.
Number Positions Select the length of the number-part in the data file name as follows:
For example: If you select 0001, data file number 99 will include the following four digits: 0099.
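The padding in the example corresponds to fixed-width, zero-filled numbering; a minimal sketch (the helper name is illustrative, not part of the product):

```python
# Zero-pad a data file sequence number to the selected number of
# positions, as in the 0001 -> 0099 example above.

def number_part(seq_no, positions=4):
    """Return the number part of a data file name, zero-filled."""
    return str(seq_no).zfill(positions)
```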
Settings Detailed specification of specific use of the FTP service.
Command Port Enter the port number for the FTP server to connect to, on the remote
Switch.
Local Data Port Enter the local port number that the agent will listen on for incoming data
connections.
Setting Transfer Type to the wrong type might corrupt the transferred
data files.
FTP Command Trace Check to generate a printout of the FTP commands and responses. This
printout is logged in the Event Area of the Workflow Monitor.
Command Description
Begin Batch Will be emitted right before the first byte of each collected file is fed into a workflow.
End Batch Will be emitted right after the last byte of each collected file has been fed into the
system.
Cancel Batch Never.
10.9.2.2.2. Retrieves
Command Description
Begin Batch Nothing.
End Batch Nothing.
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
10.9.2.3. Introspection
The agent consumes bytearray types.
File Creation Timestamp A parameter that contains a time stamp that indicates when the file was created. The value originates from the Data Storage Control File and is expressed in local time.
None.
Reported together with the name of the file that has been collected and inserted into the workflow.
File cancelled:name
Reported together with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; see Transaction Behavior, Cancel Batch.
Command trace:trace
A printout of the control channel trace. This is only valid if FTP command trace in the Advanced
tab is selected.
10.10.1.1. Prerequisites
The reader of this information should be familiar with:
• MediationZone® Platform
• GPRS Tunneling Protocol (GTP) across the Gn and Gp Interface [3GPP TS 29.060 V4.2.0]: http://www.3gpp.org/ftp/Specs/archive/29_series/29.060/29060-420.zip
• Call and event data for the Packet Switched (PS) domain [3GPP TS 32.015 V3.11.0]: http://www.3gpp.org/ftp/Specs/archive/32_series/32.015/32015-3b0.zip
10.10.2. Overview
The GTP' agent collects GTP' charging protocol messages and datagrams from GSN nodes. By collecting this information, the GTP' agent enables MediationZone® to act as a Charging Gateway device, providing Charging Gateway Functionality (CGF) within UMTS/GPRS networks.
The GTP' agent awaits initialization from the GSN nodes of the types SGSN and GGSN. When initiated, there are two protocols with which the agent can interact with the nodes:
• User Datagram Protocol (UDP)
• Transmission Control Protocol (TCP)
A GTP' workflow can use both protocols by including two GTP' agents, one for each protocol.
In case of failure, the GTP' agent can be configured to notify the GSN nodes to route the incoming
data to another host. An alternative configuration is to set up a second and identical workflow, on a
separate MediationZone® Execution Context.
The agent counts the received requests and publishes those values as MIM values. Those MIM values can also be viewed from the command line with the wfcommand printcounters command.
1. When started, the GTP' agent sends a Node Alive Request message to all configured GSN
nodes.
2. The GTP' agent awaits a Node Alive Response and will transmit Node Alive Request
repeatedly, according to the Advanced tab settings. For further information see Section 10.10.3.3,
“Advanced Tab”
3. After a successful Node Alive Response, the GSN node starts to transmit Data Record Transfer Requests to the agent. When safely collected, the agent replies with a Data Record Transfer Response.
4. When the workflow is stopped, the message Redirection Request is automatically sent to
all configured GSN nodes. The workflow will not stop immediately, but waits for a Redirection
Response from each of the GSN nodes. If the Max Wait for a Response (sec) value is exceeded,
the workflow stops, regardless of whether Redirection Response from the GSN nodes has
been received or not.
Note! When using TCP, the behavior is different. For further information see Section 10.10.9,
“Limitations - GTP' Transported Over TCP ”.
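The retry behavior in steps 1-2 above can be sketched as follows. The message names come from the text; the transport functions are hypothetical stand-ins for the agent's actual UDP/TCP handling:

```python
# Hedged sketch of the GTP' startup handshake: send a Node Alive
# Request and retry until a Node Alive Response arrives or the
# configured maximum number of attempts is exceeded.

def await_node_alive(send_request, receive_response, max_attempts, timeout_sec):
    """Return True when a Node Alive Response is received in time."""
    for attempt in range(max_attempts):
        send_request("Node Alive Request")
        reply = receive_response(timeout_sec)
        if reply == "Node Alive Response":
            return True
    # Both the response wait and the attempt limit were exceeded; in the
    # real agent a message would appear in the System Log at this point.
    return False
```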
10.10.3. Configuration
You open the GTP' agent configuration view from the Workflow Editor either by double-clicking
the agent icon, or by right-clicking it and then selecting Configuration.
Protocol Enter the protocol type that you want the agent to use: either TCP or UDP. For
further information see Section 10.10.9, “Limitations - GTP' Transported Over
TCP ”.
Port Enter the port through which the agent should await incoming data packages. This
port must be located on the host where the Execution Context is running.
Note! Two workflows that are running on the same Execution Context,
can subscribe to the same Port and GSNs, if they use different Protocol
settings.
GSN IP Address The IPv4 or IPv6 addresses of the GSN nodes that provide the data.
Server Port Enter the server port number for each node. The agent will detect GGSN source
port changes via the Node Alive Request or Echo Request messages. If a change
is detected, it is registered in the System Log. The agent's internal configuration
is updated.
The Miscellaneous tab includes format and storage settings for the data that is collected by the agent.
Note! GTP' formats containing fields of the types IPv4 and IPv6 are supported.
Format Must match the Data Record Format in Data Record Packet IE.
This is applicable if the Packet Transfer Command is either Send Data
Record Packet or Send possibly duplicated Data Record
Packet.
Perform Format Version Check When checked, the Data Record Format Version in Data Record Packet IE must be identical to the setting in Format Version.
Format Version Should match the Data Record Format version in Data Record
Packet IE. This is applicable if the Packet Transfer Command is Send
Data Record Packet or Send possibly duplicated Data
Record Packet.
Directory Enter either the relative pathname to the home directory of the user account,
or an absolute pathname of the target directory in the file system on the local
host, where the intermediate data for the collection is stored.
Note! When using several Execution Contexts, make sure that the file
system that contains the GTP' information is mounted on all ECs.
Acknowledgement from APL Check to enable acknowledgement from the APL module that follows the GTP' agent. This way the GTP' agent will expect a feedback route from the APL module as well. No GTP packets will be acknowledged before all the data that is emitted into the workflow is routed back to the collector. Controlling acknowledgement from APL enables you to make sure that data is transmitted in full.
Clear this check-box if you want the agent to acknowledge incoming packets before any data is routed into the workflow.
No Private Extension Check to remove the private extension from any of the agent's output messages.
Use seq num of Cancel/Release req Check to change the type of the sequence numbers that are populated in a Data Record Transfer Response to either Release Data Record Packet, or Cancel Data Record Packet, in the Requests Responded field. Otherwise, the agent applies the sequence number of the released, or cancelled, Data Record Packet.
Max Wait for a Response (sec) The maximum period during which the GTP' agent waits for a Node Alive Response message.
If both this value and the Max Number of Request Attempts value are exceeded, a message appears in the System Log.
The value also indicates the maximum period during which the GTP' agent awaits a Redirection Response. This period begins right after the agent sends a Redirection Request to the agents that it is configured to receive data from.
Max Number of Request Attempts Enter the maximum number of attempts to perform in order to receive a Node Alive Response and Redirection Response.
Max Outstanding Numbers Enter the maximum number of packages that you want kept in memory for sequence number checking.
Max Drift Between Two Numbers Enter the maximum number of values that can be skipped between two sequence numbers.
Clear Checking Check to avoid saving the last sequence number when the workflow has been stopped.
Clear to have the agent save the sequence number of the last collected package when the workflow is stopped. This way, as soon as the workflow is restarted, a package with the subsequent number is expected and the workflow continues processing from where it stopped.
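The sequence-number settings above can be illustrated with a small sketch; the class, field names, and acceptance rule are hypothetical stand-ins, not the agent's actual implementation:

```python
# Illustrative sketch of sequence-number checking controlled by the
# Max Drift and Clear Checking settings described above.

class SequenceChecker:
    def __init__(self, max_drift, last_seen=None):
        # last_seen would be restored from persisted state on restart,
        # unless Clear Checking is set.
        self.max_drift = max_drift
        self.last_seen = last_seen

    def accept(self, seq_no):
        """Return True if the packet's sequence number is acceptable."""
        if self.last_seen is None:
            self.last_seen = seq_no
            return True
        drift = seq_no - self.last_seen - 1   # numbers skipped in between
        if drift < 0 or drift > self.max_drift:
            return False                      # duplicate or too large a gap
        self.last_seen = seq_no
        return True
```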
Agent Handles Duplicates Select this option to store duplicates in a persistent data directory that you specify on the Miscellaneous tab. The packet will remain in the directory until the agent receives a request to release or to cancel it.
Route Duplicates to Select this option to route duplicates to a link that you select from the drop-
down list.
Alternate Node Enter the IP-address of a host that runs an alternate Charging Gateway
device as a backup.
If you enter an IP address, the GTP' agent will include it in the redirection
request that it sends to the GSN.
Note! The GTP' agent does not backup any of the data that it man-
ages. Make sure that the GSN node takes care of backup.
Decoder Click Browse and select a pre-defined decoder. These decoders are defined in the
Ultra Format Editor, and are named according to the following syntax:
<decoder> (<module>)
The option MZ Format Tagged UDRs indicates that the expected UDRs are stored
in one of the built-in MediationZone® formats. If the compressed format is used,
the decoder will automatically detect this. Select this option to make the Tagged
UDR Type list accessible for configuration.
Tagged UDR Click Browse and select a pre-defined Tagged UDR Type. These UDR types are
Type stored in the Ultra and Code servers. The naming format is:
<internal>(<module>)
• Clear to minimize decoding work in the agent. By clearing this check-box you postpone decoding, and the discovery of corrupt data, to a later phase in the workflow.
10.10.4. Introspection
The introspection is the type of data that an agent expects and delivers.
The agent produces UDR types in accordance with the Decoder tab settings.
10.10.5.1. Publishes
10.10.5.2. Accesses
The agent does not access any MIM parameters.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
Note! The HiCAP agent requires OS-dependent third-party software, and it is currently supported on Red Hat Enterprise Linux only.
10.11.1.1. Prerequisites
The reader of this information should be familiar with:
• MediationZone® Platform
10.11.2. Overview
The HiCAP agent collects data from an Automatic Message Account Transmitter (AMAT) using the
High Capacity AMATPS (Automatic Message Account Teleprocessing) API over an RPC interface.
The data is collected via primary or secondary AMA data files. A primary AMA data file contains
AMA blocks not previously sent and acknowledged by the collector. A secondary AMA data file
contains AMA data previously sent by the AMAT with receipt acknowledged by the collector. Upon
activation, the collector binds to the pre-defined RPC port and waits for connections to be accepted.
For further information about the HiCAP AMATPS API, see the Flexent/AUTOPLEX, Wireless
Networks, High Capacity AMATPS API documentation.
The UDRs generated by the HiCAP agent include decoded connection information. The original data from the AMAT is stored in a bytearray that you can decode in an Analysis agent, e.g. by using the
APL function udrDecode. Alternatively, you can route the bytearray to a batch workflow that performs
the decoding.
10.11.3. Configuration
You open the HiCAP agent configuration view from the Workflow Editor either by double-clicking
the agent icon, or by right-clicking it and then selecting Configuration.
Figure 288. The HiCAP Agent Configuration View - File Polling Tab
Poll Function Sets the collector to poll for either Primary or Secondary data. For information about Primary and Secondary data, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity AMATPS API documentation.
Poll Sleep Time (ms) The interval between polls for Primary or Secondary data.
Remove Invalid Trailing Blocks The header in the AMA files contains the number of data blocks. Select this checkbox to remove any trailing blocks that exceed the number of blocks specified in the header.
Starting Optional setting of start block in sequence. This setting is available when the Poll Function is set to Secondary. For information about block sequence numbers, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity AMATPS API documentation.
Ending Optional setting of end block in sequence. This setting is available for secondary AMA data blocks. For information about block sequence numbers, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity AMATPS API documentation.
Class Specifies the type of logging that shall be performed. For information about classes
and log messages, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity
AMATPS API documentation.
Level Specifies the level on which trace messages are recorded for the selected set of
classes. For information about trace levels, see the Flexent/AUTOPLEX, Wireless
Networks, High Capacity AMATPS API documentation.
File Prefix The trace function appends each message to a primary log file named
TRACE_DIR/<file_prefix>.pid on the AMAT. For information about
trace files, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity
AMATPS API documentation.
File Size The maximum file size in bytes of the primary file. For information about trace
files, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity AMATPS
API documentation.
Enable Debug Log Select this checkbox to enable debugging of the AMATPS RPC Client of the HiCAP agent. The debug information is stored in the Execution Context log.
10.11.4. Introspection
The introspection is the type of data that an agent expects and delivers.
Depending on the settings in the File Polling tab, the agent may produce one of the following UDR
types:
• PrimaryUDR (HiCAP)
• SecondaryUDR (HiCAP)
The agent does not publish nor access any MIM parameters.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
It also includes APL functions for connecting as a client to an external HTTP server.
10.12.1.1. Prerequisites
The reader of this information should be familiar with:
When a workflow acting as a web server is started, the HTTPD agent opens a port for listening and
awaits a request. The workflow remains active until it is manually stopped. In addition, the agent offers the possibility of using encrypted communication through SSL.
Note! To fully support HTTP pipelining, you must add the property ec.httpd.ordered.response with value true in the executioncontext.xml file. If this property is set to true, responses will be guaranteed to be sent in the same order as the pipelined requests were received.
To ensure that a request does not block responses from being sent for too long, the Server Timeout (sec) setting should be configured. If a response is not sent for a request within the specified time, the response for the next request will be sent.
This property should not be set unless support for pipelining is required!
Setting this property to true will also have some effect on the performance since the requests
will be cached until the responses have been sent.
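As a hedged illustration only (the element syntax of executioncontext.xml is assumed here, not taken from this manual; verify against your installation), the property described in the note might be declared as:

```xml
<!-- Assumed property syntax; check your executioncontext.xml format. -->
<property name="ec.httpd.ordered.response" value="true"/>
```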
10.12.2.2. Configuration
The HTTPD agent configuration window is displayed when double-clicking on the HTTPD agent in
a workflow, or right-clicking and selecting Configuration...
The last alternative is preferable if the disks on the execution host normally cannot be reached for updates.
Password The password for the keystore file.
Local Address The local address that the server will bind to. If the field is left empty, the
server will bind to the default address.
Port The port the server will listen on. The default port for non-encrypted communication is 80, and for encrypted communication 443.
Content Type The UDR type, extending HttpdUDR, that the collector will emit. Refer to Section 10.12.2.3.1, “Format” for an example.
Client Timeout (sec) Number of seconds a client can be idle while sending the request, before
the connection is closed. If the timeout is set to 0 (zero) no timeout will
occur. This is not recommended. Default value is 10.
Server Timeout (sec) The number of seconds before the server closes a request and a 500 Server
Error is sent back to the client. If the timeout is set to 0 (zero) no timeout
will occur. Default value is 0.
Character Encoding List of encoding options to use for handling of responses.
GZIP Compression Level Regulates compression data size. Valid levels are from 1 to 9, where 9 is the slowest but provides optimal compression.
10.12.2.3.1. Format
The built-in HTTP format definition must be extended prior to usage of the HTTPD format.
1. Open the Ultra Format Editor by clicking the New Configuration button in the upper left part
of the MediationZone® Desktop window, and then selecting Ultra Format from the menu.
2. Enter:
Field Description
accept(string) Media types that are acceptable in the response, e.g. text/plain. This field is included in the request header.
acceptEncoding(string) Restricts the content encodings that are acceptable in the response, e.g. gzip. This field is included in the request header.
clientHost(string) The host.
content(string) The content itself.
contentEncoding(string) Additional content encodings that have been applied to the entity-body, e.g. gzip. This field is included in the response header.
contentLength(int) The length of the content in bytes.
contentType(string) The type of the content, e.g. "image/gif".
errorMessage(string) This field is populated by the HTTPD agent when an error
occurs. It is not included in the request or response.
query(string) The query.
redirectURL(string) In case you want to redirect a request, this field should
contain the URL to which you want to redirect.
requestMethod(string) The request method, e.g. GET, POST.
response(string) The response which will be returned to the requesting user.
responseBinary(bytearray) The response in binary format.
responseStatusCode(string) HTTP header response code.
responseType(string) The type of the response, e.g. "text/html".
userAgent(string) Shows user information, e.g. the browser type and browser version used.
Additional fields may be entered. This is useful mainly for transportation of variable values to
subsequent agents.
3. Save your Ultra by clicking on the Save button and entering the name of the Ultra.
10.12.2.4. Introspection
The introspection is the type of data an agent expects and delivers.
The agent consumes and produces UDR types extended with the built-in HTTP format.
The agent does not publish nor access any MIM parameters.
For information about HTTP client functions that are available in APL, see the APL Reference Guide.
For information about HTTP server functions that are available in APL, see the APL Reference Guide.
MediationZone® supports both HTTP and HTTPS. The filename of the file to be collected must be
known before collection.
10.13.1.1. Prerequisites
The reader of this information should be familiar with:
• HTTP/HTTPS protocol
The agent will download the files from the web server as a byte stream and route the content of the
file into the workflow in parts of up to 32768 bytes.
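The chunked collection described above can be sketched as follows. The 32768-byte part size is from the text, while the in-memory stream and routing callback are stand-ins for the agent's HTTP(S) response body and workflow routing:

```python
# Sketch of chunked collection: read a byte stream in parts of up to
# 32768 bytes and hand each part onward, as the agent does when routing
# file content into the workflow.

import io

CHUNK_SIZE = 32768

def route_in_chunks(stream, route):
    """Feed the stream to `route` in parts of up to CHUNK_SIZE bytes."""
    while True:
        part = stream.read(CHUNK_SIZE)
        if not part:
            break
        route(part)
```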
10.13.2.1. Configuration
The HTTP Batch Collection agent configuration window is displayed when the agent in a workflow is double-clicked, or when it is right-clicked and Configuration... is selected.
10.13.2.1.1. Connection
Figure 291. HTTP Batch Collection agent configuration window, Connection Tab
URL The full URL of the file to be collected. If the collected file contains links
to other pages, these will only be followed if Enable Index Based Collection
is checked. Refer to Enable Index Based Collection in
the Section 10.13.2.1.2, “Source” tab.
10.13.2.1.2. Source
Figure 292. HTTP Batch Collection agent configuration window, Source Tab
Compression Select if the agent should try to decompress the data collected before routing it
into the workflow. The options are 'No Compression' and 'Gzip'.
If Enable Index Based Collection is selected, only the links in the given
URL will be decompressed upon collection.
Enable Index Based Collection Select to enable Index Based Collection. All linked-to URLs found in the
HTML-formatted document will be collected. The URL is pointed out in the URL field
in the Section 10.13.2.1.1, “Connection” tab.
URL Pattern Either leave empty or enter a regular expression filtering the full URL. If empty,
all files are collected; otherwise only files matching the URL Pattern will be
collected. For example, the expression .*\.dat matches only URLs ending in .dat.
Control File Extension The text entered in this field is the expected extension to the shared filename.
The Control File Extension will be attached to the shared filename depending
on the setting made in the Position field, refer to Example 69, “Control File
Extensions” for more information.
Data File Extension The Data File Extension is an optional field that is used when a stricter definition
of files to be collected is needed, refer to Example 69, “Control File Extensions”
for more information. It is only applicable if the Position is set to Suffix.
• FILE1.dat
• FILE2.dat
• FILE1.ok
• ok.FILE1
• FILE1
1. The Position field is set to Prefix and the Control File Extension
field is set to .ok.
The control file is ok.FILE1 and FILE1 will be the file collected.
2. The Position field is set to Suffix and the Control File Extension
field is set to .ok.
The control file is FILE1.ok and FILE1 will be the file collected.
3. The Position field is set to Suffix and the Control File Extension
field is set to .ok and the Data File Extension field is set to .dat.
The control file is FILE1.ok and FILE1.dat will be the file col-
lected.
Enable HTTP DELETE Selecting this instructs the web server to delete the file and the control file after
the file has been successfully collected. If unchecked, the file will be ignored
after collection, that is, the file will be left on the web server.
10.13.2.1.3. Advanced
Figure 293. HTTP Batch Collection agent configuration window, Advanced Tab
Keystore Name of the keystore file that has been imported and will be used by the agent.
Select Import Keystore and select the file to be used by the agent.
Keystore Password Password to be used on the selected keystore file.
Read Timeout (ms) The maximum time, in milliseconds, to wait for a response from the server. 0
(zero) means wait forever.
10.13.2.1.4. Duplicate Check
Figure 294. HTTP Batch Collection agent configuration window, Duplicate Check Tab
The Duplicate Check feature is only used when Enable Index Based Collection, found in the Sec-
tion 10.13.2.1.2, “Source” tab, is enabled.
Enable Duplicate Check When selected, the agent will store every collected URL for a (configurable)
number of days. The storage will be checked to make sure that no URL is
collected again as long as it remains in the storage.
Database Profile Each collected URL will be stored in the database defined in the selected profile.
The schema must contain a table called "duplicate_check". For more information
about this table, refer to Section 10.13.3, “Appendix”.
Max Cache Age (Days) The number of days to keep collected URLs in the database. When the workflow
starts, it will delete entries that are older than this number of days.
10.13.2.2. Transaction Behavior
10.13.2.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch The agent will emit beginBatch before the first content of the file is routed into the
workflow. The agent will also use the ECS batch service and route the data to it.
End Batch The agent will emit endBatch after the final part of the file has been routed into the
workflow.
10.13.2.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Hint End Batch When hintEndBatch is called the agent will call endBatch as soon as the current data
block has been routed from the agent. If more data is available from the web server
the agent will call beginBatch and then continue to process the rest of the file.
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
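As an illustration of Hint End Batch, an Analysis agent could request an early batch split along these lines. This is a sketch only: the record counter and threshold are illustrative, and hintEndBatch is assumed to be available as an APL function as described in the APL Reference Guide:

```apl
// Sketch: ask the collecting agent to close the current batch after a
// (hypothetical) number of records. The collector will call endBatch as
// soon as the current data block has been routed from the agent.
int recordCount = 0;

consume {
    recordCount = recordCount + 1;
    udrRoute(input);
    if (recordCount >= 1000) {
        hintEndBatch();
        recordCount = 0;
    }
}
```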
10.13.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
10.13.2.4.1. Publishes
10.13.2.4.2. Accesses
Ready with file Reported, along with the name of the URL, when the file is collected
and inserted into the workflow.
Failed to collect file Reported, along with the name of the URL, when the file failed to
be collected.
URL cancelled Reported, along with the name of the current URL, when a cancelBatch
message is received. This assumes the workflow is not
aborted; refer to Section 10.13.2.2, “Transaction Behavior” for
further information.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
10.13.3. Appendix
10.13.3.1. Database Requirements for Duplicate Check
The Duplicate Check feature stores the collected URLs in an external database pointed out by a Database
profile. The schema of this database must contain a table definition that matches the needs of the agent.
The schema table name must be "duplicate_check". It must contain all the columns from this table:
The column types are defined by how the specific JDBC driver converts JDBC types to the database.
10.14.1.1. Prerequisites
The reader of this information should be familiar with:
10.14.2.1.1. Connection
At startup, a connection towards a Queue Manager is set up to listen to a number of queues, topics
or durable subscriptions. This can be configured either directly in the IBM MQ Collection agent or
dynamically set within an Analysis agent.
If the agent fails to connect to all configured queues, topics or durable subscriptions, the workflow
will abort.
Message queues are used for storing messages in WebSphere MQ Server. The messages consist of two
parts: the binary data used by the application and the delivery information handled by the Queue
Manager. The Queue Manager provides a logical container for message queues and is responsible for
transferring the data between local and remote queues.
The IBM MQ agent will read the messages in the configured local message queues, and each message's
data will be transferred as a UDR into the workflow. Depending on the agent's configuration, the Queue
Manager will remove the message from the queue directly, or it will wait until the message has been processed.
New messages can also be sent to the Queue Manager with the IBM MQ APL commands.
As opposed to point-to-point communication, IBM WebSphere offers the possibility to publish and
subscribe to topics. Neither the publisher nor the subscriber needs to know where the other party is located.
All interaction between publishers and subscribers is controlled by the Queue Manager.
The IBM MQ agent acts as a subscriber and will register at the Queue Manager which topics or durable
subscriptions to listen for. The Queue Manager will then examine every incoming publication and
place matching messages on the subscriber's queue, from which they will be read by the IBM MQ agent and
transferred as UDRs into the workflow.
10.14.2.2. Preparations
The following jar files are required by the IBM MQ Collection agent:
com.ibm.mq.jar
com.ibm.mq.jmqi.jar
connector.jar
com.ibm.mq.headers.jar
com.ibm.mq.commonservices.jar
The classpath for the jar files is specified in the executioncontext.xml file for each execution
context. For example:
<classpath path="/opt/mqm/java/lib/com.ibm.mq.jar"/>
After the classpath has been set, the jar files should be manually distributed so that they are in place
when the Execution Context is started.
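Assuming the jar files reside in /opt/mqm/java/lib (an assumed default installation path; adjust it to the actual location), the complete set of entries in executioncontext.xml might look as follows:

```xml
<!-- Hypothetical classpath entries for the IBM MQ jar files listed
     above; the install path /opt/mqm/java/lib is an assumption. -->
<classpath path="/opt/mqm/java/lib/com.ibm.mq.jar"/>
<classpath path="/opt/mqm/java/lib/com.ibm.mq.jmqi.jar"/>
<classpath path="/opt/mqm/java/lib/connector.jar"/>
<classpath path="/opt/mqm/java/lib/com.ibm.mq.headers.jar"/>
<classpath path="/opt/mqm/java/lib/com.ibm.mq.commonservices.jar"/>
```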
10.14.2.3. Configuration
The IBM MQ Collection agent configuration window is displayed when double-clicking on the agent
in a workflow, or right-clicking on the agent and selecting Configuration...
Depending on the selected Connection Mode, different configuration fields are available.
Dynamic Initialization When this option is set, the configuration of the IBM MQ Collection agent
will not be made from the configuration window. The agent will instead
send a connection UDR to an Analysis agent which will populate the UDR
and send it back to the IBM MQ Collection agent. See Section 10.14.3.1,
“Connection UDRs” for more information regarding the connection UDRs.
MQ Host The host name of the queue manager host.
Port The port for the queue manager.
Channel The name of the MQ channel.
Queue Manager The name of the queue manager.
Connection Mode Select which Connection Mode the agent should use. The possible values
are Queues, Topics and Durable Subscriptions.
Auto Remove This check box is only available if you have selected Queues as Connection Mode.
Select this check box if the message should be removed from the queue without
requiring that the MQMessage UDR is routed back to the agent.
Queues List the queues that the agent should listen to.
Topics List the topics that the agent should listen to.
Durable Subscriptions List the subscriptions that the agent should listen to.
10.14.2.4. Introspection
If the IBM MQ Collection agent is configured to read connection parameters dynamically, it will
deliver and expect a connection UDR during initialization. Depending on the configuration, the connection
UDR can be of the following types:
If the IBM MQ agent is configured for Queues, messages are delivered as MQMessage UDRs, while
Topics and Durable Subscriptions will deliver messages as MQMessageTopic UDRs. In both cases,
the agent expects the same UDR type back.
If the agent is using dynamic initialization, the connection UDRs are used for setting up the connection.
The MQMessage and MQMessageTopic UDR types are used for handling the messages.
APL commands are used for producing outgoing messages and the UDR types used for this are
MQQueueManagerInfo, MQQueue and MQMessage.
10.14.3.1. Connection UDRs
10.14.3.1.1. MQConnectionInfo
If the connection mode is set to Queues, MQConnectionInfo will be used as the connection UDR.
Field Description
ChannelName (string) The name of the MQ channel.
Host (string) The host name of the queue manager host.
Port (integer) The port for the queue manager.
Properties (map<any,any>) (optional) A map of optional properties to be set, for
example, user name.
QueueManager (string) The name of the queue manager.
Queues (list <string>) A list of queues to listen to.
10.14.3.1.2. MQConnectionInfoTopic
If the connection mode is set to Topics, MQConnectionInfoTopic will be used as the connection
UDR.
Field Description
ChannelName (string) The name of the MQ channel.
Host (string) The host name of the queue manager host.
Port (integer) The port for the queue manager.
Properties (map<any,any>) (optional) A map of optional properties to be set, for
example, user name.
QueueManager (string) The name of the queue manager.
TopicNames (list <string>) A list of topics to subscribe to.
10.14.3.1.3. MQConnectionInfoDurableTopic
If the connection mode is set to Durable Subscriptions, MQConnectionInfoDurableTopic will be
used as the connection UDR.
Field Description
ChannelName (string) The name of the MQ channel.
DurableSubscriptions (list <string>) A list of subscriptions to listen to.
Host (string) The host name of the queue manager host.
Port (integer) The port for the queue manager.
Properties (map<any,any>) (optional) A map of optional properties to be set, for
example, user name.
QueueManager (string) The name of the queue manager.
10.14.3.2. MQMessage
For each message in the MQ message queue, a UDR is created and sent into the workflow. When the
IBM MQ agent receives the MQMessage in return it will remove the message from the queue.
Field Description
CorrelationID (bytearray) This ID can be used for correlating messages
that are related in some way or another, e.g.
requests and answers. The length of this field
will always be 24, meaning that fillers will be
added to IDs that are shorter, and IDs that are
longer will be cut off.
Id (bytearray) The message id.
Message (bytearray) The message.
Persistent (boolean) If set to "true", the message will be sent as
a persistent message, otherwise the queue
default persistence will be used.
ReplyToQueue (string) The name of the queue to reply to.
ReplyToQueueManager (string) The name of the queue manager to reply
to.
SourceQueueName (string) The name of the source queue.
10.14.3.3. MQMessageTopic
For each topic message, a UDR is created and sent into the workflow.
Field Description
DataMessage (bytearray) The message data.
10.14.3.4. MQQueue
The MQQueue UDR is a reference to an IBM MQ queue when using APL commands. The UDR is
created by the mqConnect function and all fields are read-only.
Field Description
CurrentDepth (integer) The number of messages currently in the
queue.
ErrorDescription (string) A textual description of an error.
IsError (boolean) Returns true if the UDR contains an error
message.
IsOpen (boolean) Returns true if the connection was successfully
opened.
MaxDepth (integer) The maximum number of messages allowed
in the queue.
MqError (string) The error code provided by IBM MQ when
a connection attempt fails, or when an error
related to the mqPut or mqClose commands
occurs.
QueueManager (string) The name of the queue manager.
QueueName (string) The name of the queue to connect to.
10.14.3.5. MQQueueManagerInfo
The MQQueueManagerInfo UDR type is used by the APL functions when establishing a connection
towards a queue on the Queue Manager for outgoing messages.
Field Description
ChannelName (string) The name of the MQ channel.
Host (string) The host name of the queue manager host.
Port (integer) The port for the queue manager.
Properties (map<any,any>) A map of optional properties to be set, for
example, user name.
QueueManager (string) The name of the queue manager.
The agent does not publish nor access any MIM parameters.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
Depending on the selected Connection Mode, different debug events are sent.
Reported if the queue manager has not been opened by the IBM MQ Collection agent.
Reported after a successful connection to a queue from the IBM MQ Collection agent.
Reported when the MQ Collection Agent failed to subscribe to all configured topics.
Reported when the MQ Collection Agent failed to connect to all configured durable subscriptions.
10.14.7.2. mqConnect
This function will open a connection to a queue and queue manager.
Parameters:
Returns Returns an MQQueue UDR. For further information about the MQQueue UDR type
see Section 10.14.3.4, “MQQueue”
Note! If there is no available queue status for some reason, the MaxDepth and CurrentDepth
fields will be assigned the value "-1" and the mqConnect function will still be able to connect.
10.14.7.3. mqPut
This function will put a message on a queue.
If the function fails, it will populate the ErrorDescription field with a description and set
isError to true. If the error was generated from an MQ exception it will also update the MqError
field in the MQQueue UDR.
Parameters:
queue The MQQueue UDR that is the result from the mqConnect function. For further inform-
ation about the MQQueue UDR type see Section 10.14.3.4, “MQQueue”
message The message to add to the queue. For further information about the MQMessage UDR
type see Section 10.14.3.2, “MQMessage”
Returns Returns null if the function was successful and an error message if it failed.
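Putting the description above together, a minimal error-handling sketch might look like this (queue and msg are assumed to have been created as shown in the mqConnect and MQMessage sections):

```apl
// Sketch: check the result of mqPut and log the error details
// from the MQQueue UDR if the call failed.
string err = mqPut(queue, msg);
if (err != null) {
    debug("mqPut failed: " + queue.ErrorDescription
          + " (MQ error: " + queue.MqError + ")");
}
```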
10.14.7.4. mqClose
This function closes the connection to the queue manager.
If the function fails, it will populate the ErrorDescription field with a description and set
isError to true. If the error was generated from an MQ exception it will also update the MqError
field in the MQQueue UDR.
Parameters:
queue The MQQueue UDR that is the result from the mqConnect function. For further information
about the MQQueue UDR type see Section 10.14.3.4, “MQQueue”
Returns Returns null if the function was successful and an error message if it failed.
10.14.7.5. mqStatus
This function will query the queue for MaxDepth and CurrentDepth and populate the corresponding
fields in the MQQueue UDR.
If the function fails, it will populate the ErrorDescription field with a description and set
isError to true. If the error was generated from an MQ exception it will also update the MqError
field in the MQQueue UDR.
Parameters:
queue The MQQueue UDR that is the result from the mqConnect function. For further information
about the MQQueue UDR type see Section 10.14.3.4, “MQQueue”
Returns Returns an error message describing the problem, or null if the function was successful.
10.14.8. Examples
In an IBM MQ Collection agent workflow, four different UDRs are created and sent. This requires
one or more agents containing APL code (Analysis or Aggregation) to be part of the workflow.
For each collected message, the IBM MQ Collection agent sends an InputMessage including an
MQMessage UDR to the Analysis agent. If "Auto Remove" has been configured, the message is
removed from the queue and a remMessage is sent to the agent.
Figure 295. Realtime workflow example - a message is collected from the IBM MQ Collection
agent
The following APL code example shows how to use dynamic initialization.
consume {
    if (instanceOf(input, mq.MQConnectionInfo)) {
        mq.MQConnectionInfo info = (mq.MQConnectionInfo) input;
        info.Host = "mymqhost";
        info.Port = 1415;
        info.ChannelName = "CHANNEL2";
        info.QueueManager = "mgr2.queue.manager";
        // 'queues' is assumed to be a previously populated
        // list<string> of queue names
        info.Queues = queues;
        udrRoute(info);
    }
}
The following APL code example shows how to process the IBM MQ message and remove it from
the queue.
consume {
    if (instanceOf(input, mq.MQMessage)) {
        mq.MQMessage msg = (mq.MQMessage) input;
        // Process the MQ Message
        handleResponse(msg);
        // Remove the message from the queue
        udrRoute(msg, "remMessage");
    }
}
The following APL code example shows how to send an IBM MQ message to a queue.
mq.MQQueue queue;

initialize {
    mq.MQQueueManagerInfo conUDR = udrCreate(mq.MQQueueManagerInfo);
    conUDR.ChannelName = "CHANNEL1";
    conUDR.Host = "10.46.100.86";
    conUDR.Port = 1414;
    conUDR.QueueManager = "mgr1.queue.manager";
    queue = mqConnect(conUDR, "Q1.QUEUE");
}

consume {
    mqStatus(queue);
    debug("Queue Depth: " + queue.CurrentDepth);
    debug("Queue Max Depth: " + queue.MaxDepth);
    mq.MQMessage msg = udrCreate(mq.MQMessage);
    msg.Message = input;
    mqPut(queue, msg);
}

deinitialize {
    mqClose(queue);
}
10.15.1.1. Prerequisites
The reader of this information should be familiar with:
• The Cisco NetFlow export formats, see Cisco's web site for further information
10.15.1.2. Terminology
For information about Terms and Abbreviations used in this document, see the Terminology document.
Each router can potentially be identified through several IP addresses (interfaces) and, if so, it may
send UDP packets to the agent on any of these interfaces. The agent offers the possibility of mapping
all these IP addresses into one, which makes it possible to detect that the packets originated from the
same router.
When activated, the agent will open the configured port and start listening for incoming packets
from the routers. Each received packet will be unpacked into one or several flow records. Based on
the information in the flow record, the agent will create and populate one of the standard NetFlow
UDR types available in MediationZone® and forward the UDR into the workflow. If the agent fails
to unpack or read a packet or flow record, it will silently be removed from the stream.
Since Cisco routers do not offer the possibility of re-requesting historic data, the agent will lose all
data delivered from the router while the agent is not active.
Note! The real-time job queue may fill up, in which case a warning will be raised in the System
Log stating that the job queue is full. Records arriving to a full queue will be thrown away. A
message in the System Log will state when the queue status is back to normal.
The UDR types created by default in the NetFlow agent can be viewed in the UDR Internal Format
Browser in the netflow folder. To open the browser open an APL Editor, right-click in the editing
area, and select the UDR Assistance... option in the pop-up menu.
10.15.2.2. Configuration
The NetFlow agent configuration window is displayed when double-clicking on the agent, or right-
clicking on the agent and selecting Configuration...
Port The port number where the NetFlow agent will listen for packets from the routers.
Two NetFlow agents may not be configured to listen on the same port on
the same host.
Only from Pre- If enabled, the agent will only accept packets from hosts specified in the Interface
defined Hosts Mapping tab. Data from other hosts will be discarded.
If disabled, all arriving data will be accepted. This may be suitable if a combination
of routers is used, where the majority of the routers each send from one interface
(IP address) only, and some are configured according to the Interface Mapping tab.
Hence, when this option is disabled, the one-interface routers do not have to be
added to the interface mapping list.
Warn on Sequence Determines if a warning will be raised in the System Log when the sequence
Gap number gap between two sequential PDUs from the same router is equal to or
larger than specified in the Minimum Gap field.
Minimum Gap The minimum sequence number gap between two flow records that will cause
a warning in the System Log.
UDR Type UDR Types expected to be delivered to this agent. If other types arrive, the NetFlow
agent will abort.
Maps several interface IP addresses to one main IP address. Each router using more than one interface
IP address when sending data to the agent must be registered here. One of the IP addresses supported
by the router must be registered as the Main IP Address. The others are configured in the IP Address
list.
If a packet arrives from an IP address configured in the IP Address list, it will be mapped to the
corresponding Main IP Address. This way it will appear as if all packets originate from the same IP
address.
Main IP Address Each router that supports multiple interfaces must have one address added to
this list. When an existing row is selected, the content in the IP Address table
will reflect the slave IP addresses for the selected Main IP Address.
IP Addresses Additional IP addresses mapped to their corresponding main IP address by the
agent.
10.15.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
Depending on the incoming flow records, the agent may produce one of the following UDR types.
Their names reflect the NetFlow versions:
• V1UDR (netflow)
• V5UDR (netflow)
• V7UDR (netflow)
• V8ASMatrixUDR (netflow)
• V8DestinationPrefixMatrixUDR (netflow)
• V8PrefixMatrixUDR (netflow)
• V8ProtocolPortMatrixUDR (netflow)
• V8SourcePrefixMatrixUDR (netflow)
• V9UDR (netflow)
10.15.2.4.1. Publishes
Incoming PDUs is of the long type and is defined as a global MIM context
type.
10.15.2.4.2. Accesses
The NetFlow agent does not itself detect templates, map incoming data to the corresponding template,
or create UDRs from the incoming data. This functionality must be implemented in APL, as described
in Section 10.15.3.2, “Workflow Design for V9UDR”. The agent will forward the NetFlow
data to the workflow in the rawData field.
Since the V9UDR format is dynamic, the workflow may not have access to the template when the first
UDRs arrive, or the template may have changed and not yet been sent to the workflow.
For this reason, it is recommended to let the real-time workflow with the NetFlow collection agent(s)
forward the UDRs via Inter Workflow or Workflow Bridge agents to a batch workflow that stores
them on disk.
A third workflow may then collect, decode and aggregate the UDRs.
In order to decode the UDRs, you first have to decode the template, and this has to be done by defining
an Ultra format for the template. The template should then be sent to an Aggregation agent to start a
session, which will correlate all the UDRs that use the template. An Ultra defining the aggregation
session handling will also have to be created.
Since the aggregation has to be based on a template-specific field, the templates have to be routed one
at a time to the Aggregation agent.
The APL code in the Aggregation agent will then have to handle the decoding of the actual UDRs.
10.16.1.1. Prerequisites
The reader of this document should be familiar with:
When a workflow is activated, the NokiaIACC agent opens a port and waits for requests. The workflow
remains active until manually stopped.
The orbd daemon supplied with the JDK can be used. The Object Request Broker Daemon, orbd, is used as
the naming service to enable clients to transparently locate and invoke objects on the Nokia IACC
agent in the CORBA environment.
The ORBInitialPort argument is a required argument for orbd, and is used to set the port number
on which the Naming Service will run.
When orbd starts up, it also starts a Naming Service. A Naming Service is a CORBA service that
allows CORBA objects to be named by means of binding a name to an object reference. The name
binding may be stored in the naming service, and a client may supply the name to obtain the desired
object reference.
When using Solaris software, the root user must be used in order to start a process on a port under
1024. For this reason, it is recommended to use a port number greater than or equal to 1024. A different
port can be substituted if necessary.
The fourth IACC method, isServerUp, is not supplied to the workflow. This method returns the
string "running" to the calling client if the agent is up, and it does not have a UDR data representation.
10.16.2.2.1. IACC_UDR
The NokiaIACC agent retrieves data via the IACC method calls and produces one type of UDR: the
IACC_UDR. The IACC_UDR contains a request and a response field. These two fields are of
type Subscribers_UDR.
10.16.2.2.3. Subscribers_UDR
Each of these IACC method UDRs has a list of Subscriber UDRs called subscribers. This list
needs to be created in the APL code using listCreate. The Subscriber UDR has a list of
OneAttribute_UDRs called attributes. This list also needs to be created in the APL code using
listCreate.
The UDR types created by default in the NokiaIACC agent can be viewed in the UDR Internal Format
Browser in the IACC folder. To open the browser, open an APL Editor, right-click in the editing area
and select UDR Assistance....
10.16.2.3. Configuration
The NokiaIACC agent configuration window is displayed when you double-click the agent in a
workflow, or right-click it and select Configuration...
Figure 304. Nokia IACC agent configuration window, Nokia IACC tab.
Host The host defines the IP-address or hostname where the Naming Service is to be found.
Port The port defines the port to be used for the Nameserver.
Name The service name the agent is to be connected with.
Timeout The maximum time to wait for an answer, in seconds. If the timeout is zero, no time
limit applies and the agent simply waits until it is notified with an answer.
Server Host The Server Host defines the IP-address or hostname where the Nokia IACC server is
running. If the Server Host is empty, the local host will be used.
For the communication with the Nokia IACC agent to work, each Nokia network element needs to be
configured with the same Naming Service host, port and name.
If one of the configuration fields is incorrectly populated, the workflow will abort with a communication
failure.
10.16.2.4.1. Emits
None.
10.16.2.4.2. Retrieves
None.
The agent does not publish nor access any MIM parameters.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
• Timeout on a response.
The timeout occurs when an expected answer is not received within 1000 milliseconds.
The message is sent when a response UDR is sent back via CORBA.
For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.
10.16.3. An Example
Here is an example of what a workflow design using the Nokia IACC agent could look like. A workflow
containing a Nokia IACC agent can be set up to receive requests and send responses. This requires an
Analysis agent to be part of the workflow.
Figure 305. An example workflow with a Nokia IACC agent sending an updated IACC_UDR
back to the source.
To keep the example as simple as possible, the valid records are not processed. To illustrate how the
workflow is defined, an example is given where an incoming UDR is validated, resulting in the field
response being updated and sent back as a reply to the source. Usually, no reply is sent back until
the UDRs are fully validated and processed. The example aims to focus on the request and response
handling only.
Drag and drop a Nokia_IACC collection agent into the workflow. To be able to receive and send requests
and responses, the Nokia IACC agent needs to connect with the CORBA Naming Service. Therefore,
the configuration window of the Nokia IACC agent needs to be updated with the appropriate values
for the Host, Port and service Name.
The Analysis agent handles both the validation of the incoming request and the sending of the response.
Connect an Analysis agent to the Nokia IACC agent. Drag and release in the opposite direction to
create a response route in the workflow.
Note the use of the instanceOf function. It is used to verify the type of the request so that it can be
handled accordingly. This example assumes a request to the hasCredit2 method. Therefore, this will be
the only response populated and sent back with an updated response field.
consume {
if(instanceOf(input.request, HasCredit2)){
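The snippet above is truncated; a completed version might look roughly as follows. This is a sketch only: the validation is omitted, and the way the response field is populated is illustrative, not the documented method:

```apl
// Sketch of a full consume block: verify the request type, update the
// response field and route the UDR back to the source. The response
// value assigned here is illustrative only.
consume {
    if (instanceOf(input.request, HasCredit2)) {
        input.response = input.request;
        udrRoute(input);
    }
}
```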
10.17.1.1. Prerequisites
The reader of this information should be familiar with:
When a file has been successfully processed, the agent offers the possibility of moving, renaming,
removing or ignoring the original file. The agent can also be configured to keep files for a set number
of days. When all files in the batch are successfully processed, the agent stops, awaiting the next
activation, scheduled or manually initiated.
When Force Single UDR on the Merge Files tab is checked, the agent will try to read the complete
file into one UDR. The agent will, however, only be able to handle files with a file size that is smaller
than Integer.MAX_VALUE. While reading a file, if an exception such as OutOfMemoryError
or ArrayIndexOutOfBounds occurs, the workflow aborts and a message is logged indicating the
name of the file that caused the exception. For information about Integer.MAX_VALUE, see the
Java documentation.
10.17.2.1. Configuration
The Merge Files collection agent configuration window is displayed when you double-click the agent
in a workflow, or right-click it and select Configuration.... Parts of the configuration may be done
in the Sort Order tab. For further information, see Section 4.1.6.2.3, “Sort Order Tab”.
The Merge Files tab contains configurations related to the location and handling of the source files
collected by the agent.
Figure 306. Merge Files Collection agent configuration, Merge Files tab.
Base Directory: Pathname of the source base directory on the local file system of the execution
context, where the source files reside.

Filename: Name of the source files collected from the sub directory. Regular expressions according
to Java syntax apply. For further information, see
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Example 70.

Sub Directory: Name of the sub directory from where files will be collected (the Base Directory
will always be a match).

Compression: Compression type of the source files. Determines if the agent will decompress the
files before passing them on into the workflow.
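The Filename pattern follows java.util.regex syntax. As a hypothetical illustration (the pattern and filenames below are invented, and Python's re engine is used here because this pattern string is valid in both engines), a pattern matching daily CDR files might look like:

```python
import re

# Hypothetical filename pattern: "cdr_" followed by 8 digits and ".dat".
# The same pattern string is valid java.util.regex syntax.
PATTERN = re.compile(r"cdr_\d{8}\.dat")

def is_collected(filename: str) -> bool:
    """True if the whole filename matches the configured pattern."""
    return PATTERN.fullmatch(filename) is not None
```
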
Note that limits are set per directory, that is, the batch will be closed when the last
file of a sub directory has been processed even if the File Limit or Byte Limit
closing condition has not been reached.
Move Before Collecting: If enabled, the source files are moved to the automatically created
subdirectory DR_TMP_DIR under the directory from which they originated, prior to collection. This
option supports safe collection of source files.

Inactive Source Warning (hours): If the specified value is greater than zero, and no file has been
collected during the specified number of hours, the following message is logged: The source has
been idle for more than <n> hours, the last inserted file is <file>.

Move to Directory: If enabled, the files will, after collection, be moved to the sub directory specified
in the field. If Move Before Collecting is selected, the files will be moved from the DR_TMP_DIR
directory to a sub directory relative to the files' original location. The fields Prefix, Suffix and
Keep (days) are enabled when Move to is selected. Information about them follows below.

Rename: If enabled, the source files will, after collection, be renamed and kept in the source
directory from which they were collected. If Move Before Collecting is used, the files will, after
the renaming, be moved from the DR_TMP_DIR directory back to the original location.

Note! If Rename is enabled, the source files are renamed in the current directory
(source or DR_TMP_DIR). Make sure that the new name does not match the regular
expression, or the file will be collected over and over again.

Remove: If enabled, the source files will, after successfully being processed, be removed from the
source directory (or from the DR_TMP_DIR directory if Move Before Collecting is used).

Ignore: If enabled, the source files will remain in the source directory.

Directory: Pathname, relative to the current position of a file, where the source files will be moved.
Note that a date tag is added to the filename, determining when the file may be removed. This field
is only enabled if Move to or Rename is selected.

After each successful execution of the workflow, the agent searches recursively under Base
Directory for files to remove.

Force Single UDR: If this is disabled, the output files will automatically be divided into multiple
UDRs per file, in suitable block sizes.
Begin Batch: Emitted right before the first file in a Merged File group batch.
End Batch: Emitted when the last file of a sub directory has been processed, or when a Merge
Closing Condition is reached.
10.17.2.2.2. Retrieves
Cancel Batch: If a cancelBatch is generated, all files in that merged set are canceled as one batch.
Depending on the workflow configuration, the batch (consisting of several input files) will either
be stored in ECS, or the workflow will abort and the files belonging to the batch will be left
untouched.

Hint End Batch: If a Hint End Batch message is received, the collector splits the batch after the
current file has been processed.

After a batch split, the collector emits an End Batch message, followed by a Begin Batch message
(provided that there is data in the subsequent block).
10.17.2.3. Introspection
The agent produces CollectedFileUDR types.
10.17.2.4.1. Publishes
Source File Count is of the long type and is defined as a global MIM context type.

Batch Retrieval Timestamp: This MIM parameter contains a timestamp, indicating when the batch
was read in the beginBatch block.

Source Files Left is of the long type and is defined as a header MIM context type.
10.17.2.4.2. Accesses
For a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.
10.17.2.7. CollectedFileUDR
The agent produces and routes CollectedFileUDR types with a structure described in the following
example.
Example 71.
CollectedFileUDR:
internal CollectedFileUDR {
    string fileName;
    string baseDirectoryPath;
    string subDirectory;        // relative to base directory
    int sizeOnDisk;             // will differ if file was compressed
    boolean wasCompressed;      // true if file was decompressed on collection
    date fileModifiedTimestamp;
    int fileIndex;              // index number within the current merged
                                // batch, starts with 1
    bytearray content;
    boolean isLastPartial;      // true if last UDR of the input file
    int partialNumber;          // sequence number of the UDR within the
                                // file: 1 for first, 2 for second, and so on
};
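To make the partial fields concrete, here is a small Python model (illustrative only, not APL; the field names mirror the UDR above, and the reassembly logic is an assumption about how a consumer might use them) that rebuilds the content of one file from its partial UDRs:

```python
from dataclasses import dataclass

@dataclass
class CollectedFileUDR:
    """Minimal stand-in for the fields used when merging partials."""
    fileName: str
    content: bytes
    isLastPartial: bool
    partialNumber: int  # 1 for the first partial, 2 for the second, and so on

def reassemble(partials):
    """Concatenate the content of in-order partial UDRs of one file."""
    buf = bytearray()
    for expected, udr in enumerate(partials, start=1):
        assert udr.partialNumber == expected, "partials out of order"
        buf += udr.content
    assert partials[-1].isLastPartial, "stream ended before last partial"
    return bytes(buf)
```
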
10.18.1.1. Prerequisites
The reader of this information should be familiar with:
The Ultra Format Definition Language is described in the MediationZone® Ultra Reference Guide.
The latency information is presented as an agent-specific UDR type. Depending on customer-specific
needs, the information can be converted in a number of different ways.
10.18.2.1. Overview
The Latency Statistics agent collects latency information on workflow level. Only one independent
latency measurement agent can be used per workflow, since there is only one histogram measurement
collection point per workflow.
Note! Latency measurement is enabled in a workflow that contains a Latency collector agent.
10.18.2.2. Configuration
The Latency Statistics agent configuration window is displayed when the agent in a workflow is
double-clicked, or when it is right-clicked and Configuration... is selected.

Granularity (ms): Specifies the number of millisecond increments used when measuring latency.
Default value is 5 milliseconds.

Bucket Count: Specifies the number of buckets used to record latency history. Each bucket is an
increment of granularity. The maximum latency recorded will be: Granularity x Number of buckets.
Default value is 400.

Timeout (ms): Number of milliseconds before MediationZone® assumes that a response from a
latency request will not arrive. Default value is 5000 milliseconds.

Duration (s): The interval, in seconds, at which the agent emits all latency histograms. The agent
emits a LatencyHistogramList UDR on each of its output routes after every reached interval.
Default value is 10 seconds.
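The relationship between granularity and bucket count can be sketched as follows (a Python illustration, not part of the product; it assumes a measurement is counted in the bucket its latency falls into, or outside all buckets once it exceeds Granularity x Bucket Count):

```python
def bucket_index(latency_ms: int, granularity_ms: int = 5, bucket_count: int = 400):
    """Return the histogram bucket for a latency, or None if it falls
    beyond the maximum recordable latency (granularity * bucket_count)."""
    max_recorded = granularity_ms * bucket_count  # 2000 ms with the defaults
    if latency_ms >= max_recorded:
        return None  # counted in outsideBucketCount instead
    return latency_ms // granularity_ms
```

With the default settings, a 7 ms latency lands in bucket 1 and anything at or above 2000 ms falls outside the recorded range.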
10.18.2.3.2. Retrieves
10.18.2.4. Introspection
The agent emits UDRs of LatencyHistogramList type.
APL offers the possibility of both publishing and accessing MIM resources and values. For a
list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
10.18.3.1. latencyStart
Starts a latency measurement and (unless it already exists) creates its associated histogram.
any latencyStart
( string key1,
string key2
[,long startTime] //Optional
);
Parameters:
key1: Primary measurement identifier. This denotes the end-points between which to measure the
latency. It is an arbitrary string such as “CCR_CCA” (which could indicate that the latency between
reception of a CCR and response of a CCA is measured).

key2: Traffic case discriminator. Arbitrary string to further classify the histogram. key2 combined
with key1 identifies a unique set of buckets that can be added together to a latency histogram. This
key could be constructed from usage data in the request, for example source, event type, etc. A
null value indicates no further classification.

startTime: The time from which the latency should be measured. If the startTime parameter is not
supplied, the APL function dateNanoseconds() assigns the startTime on entry.
Note! If the Latency Statistics agent is not present in the workflow, the function always returns
null.
10.18.3.2. latencyStop
The function stops a latency timer. When the latency value has been determined, it will increment the
corresponding bucket in the appropriate latency histogram (identified by key1 and key2 of latencyStart).
long latencyStop
( any statID,
[long stopTime] //Optional
);
Parameters:
• -1: ID mismatch; the statID was not found, either because the argument was not returned
from latencyStart, or because the timeout value of the Latency Collector was exceeded. The
timeout is measured from the point where latencyStart was called.
10.18.3.3. latencyAdd
This function combines latencyStart and latencyStop in one call.
long latencyAdd
( string key1,
string key2,
long startTime
[, long stopTime] //Optional
);
Parameters:
10.18.3.4. isLatencyEnabled
boolean isLatencyEnabled;
internal LatencyHistogramList {
    long startTime;            // Time when measurement collection started, as
                               // java.lang.System.currentTimeMillis()
    int stopMismatchCount;     // Number of calls to latencyStop that did not
                               // match an id, i.e. where -1 was returned (see
                               // description for latencyStop)
    long stopTime;             // Time when measurement collection stopped, as
                               // java.lang.System.currentTimeMillis()
};

internal LatencyHistogram {
    any key1;                  // e.g. CCR_CCA
    any key2;                  // e.g. source, event_type, etc
    int granularity;           // as defined in workflow
    int bucketCount;           // as defined in workflow
    long timeout;              // as defined in workflow
    list<int> buckets;         // measurement counts assigned to appropriate buckets
    int totalCount;            // Sum of measurements in buckets[0..n] cells +
                               // outsideBucketCount
    int outsideBucketCount;
    int notStoppedCount;       // Number of measurements that fell outside the
                               // timeout duration. These were either lost or
                               // exposed to a configuration error (i.e.
                               // latencyStart was called without a
                               // corresponding latencyStop)
    int negativeLatencyCount;  // Number of calls to latencyStop that resulted in
                               // a negative latency (see description for latencyStop)
};
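As a rough Python model of how latencyStart/latencyStop feed the structure above (illustrative only, not the product's implementation; timeouts and nanosecond clocks are simplified away, and stopMismatchCount is tracked per histogram here for brevity):

```python
import itertools

class LatencyHistogram:
    """Toy model of one latency histogram: start a timer, stop it, and
    count the measured latency in the bucket it falls into."""
    def __init__(self, granularity, bucket_count):
        self.granularity = granularity
        self.bucket_count = bucket_count
        self.buckets = [0] * bucket_count
        self.outsideBucketCount = 0
        self.totalCount = 0
        self.stopMismatchCount = 0
        self._open = {}                  # statID -> start time
        self._ids = itertools.count(1)

    def latency_start(self, start_time):
        stat_id = next(self._ids)
        self._open[stat_id] = start_time
        return stat_id

    def latency_stop(self, stat_id, stop_time):
        if stat_id not in self._open:
            self.stopMismatchCount += 1  # unknown or timed-out id
            return -1
        latency = stop_time - self._open.pop(stat_id)
        bucket = latency // self.granularity
        if bucket < self.bucket_count:
            self.buckets[bucket] += 1
        else:
            self.outsideBucketCount += 1
        self.totalCount += 1
        return latency
```
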
11.1.1.1. Prerequisites
The reader of this information should be familiar with:
• Couchbase database
11.1.2. Overview
The Aggregation agent enables you to consolidate related UDRs that originate from a single source,
or from several sources, into a single UDR. Related UDRs are grouped into sessions according to
configurable data fields in each of the UDRs. The agent uses a series of APL-coded criteria (if
statements) to associate a specific partial UDR with another, or with a session that already includes
a matching UDR.
The agent stores the session in a file system or in a Couchbase database. When the agent is about to
save a group of UDRs, it creates a UDR list by using the APL code.
To ensure the integrity of the session's data in the storage, the Aggregation agent may use read and
write locks. When using file storage and an active agent has write access, no other agent can read or
write to the same storage. It is possible to grant read-only access for multiple agents, provided that the
storage is not locked by an agent with write access. When using Couchbase storage, multiple
Aggregation agents can be configured to read and write to the same storage. In this case, write locks
are only enforced for sessions that are currently updated, and not for the entire storage. For information
about how to configure read-only access, see Section 11.1.3.4, “Agent Configuration - Batch” or
Section 11.1.3.5, “Agent Configuration - Real-Time”.
In a batch workflow, the aggregation agent receives collected and decoded UDRs one by one.
In a real-time workflow, the aggregation agent may receive UDRs from several different agents
simultaneously.
Figure 310, “The Aggregation Flow Chart” illustrates how an incoming UDR is treated when it is
handled by the Aggregation agent. If the UDR leaves the workflow without having called any APL
code, it is handed over to error handling. For detailed information about handling unmatched UDRs
see Section 11.1.3.4.1, “General Tab” and Section 11.1.3.5.1, “General Tab”.
When several matching sessions are found, the first one is updated. If this occurs, redesign the workflow;
there must always be zero or one matching session for each UDR.
11.1.3. Configuration
You configure the Aggregation agent with these steps:
11.1.3.1. SessionUDRType
Each Aggregation profile stores sessions of a specific Session UDR Type, defined in Ultra. For further
information about Ultra formats, see the MediationZone® Ultra Reference Guide.
You define a Session UDR Type in the same way as you define internal Ultra types, with only one
difference: replace the keyword internal with session.
Example 72.
session SessionUDRType {
int intField;
string strField;
list<drudr> udrList;
};
Note! Take special care when changing, updating or renaming formats. If the updated
format does not contain all the fields of the historical format, in which UDRs may already reside
within the ECS or Aggregation storage, data will be lost. When a format is renamed, it will still
contain all the fields; the data, however, cannot be collected.
Note! In general, the session UDR should be kept as small as possible. A larger UDR decreases
performance compared to a small one.
When using file storage and sharing an Aggregation profile across several workflow configurations,
the read and write lock mechanisms that are applied to the stored sessions must be considered. For
information about read and write locks, see Section 11.1.3.4, “Agent Configuration - Batch” or Sec-
tion 11.1.3.5, “Agent Configuration - Real-Time”.
The Aggregation profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow.
To open the configuration, click the New Configuration button in the upper left part of the
MediationZone® Desktop window, and then select Aggregation Profile from the menu.
The main menu changes depending on which configuration type is open in the currently active tab.
There is a set of standard menu items that are visible for all configurations; these are described in
Section 3.1.1, “Configuration Menus”.
External References: Enables External References in an agent profile field. For detailed
instructions, see Section 9.5.3, “Enabling External References in an Agent Profile Field”.
The toolbar changes depending on which configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all configurations; these buttons are described
in Section 3.1.2, “Configuration Buttons”.
In the Session tab you can browse and select Session UDR Type and configure the Storage selection
settings.
Session UDR Type: Click on the Browse... button and select the Session UDR Type, defined in
Ultra, that you want to apply; see Section 11.1.3.1, “SessionUDRType”.

Storage: Select the type of storage for aggregation sessions. The available settings are File Storage
and Couchbase.

File Storage can be used for both batch and real-time workflows.

Couchbase can only be used for real-time workflows, for which it is the preferred setting.
You use the Association tab to configure rules that are used to match an incoming UDR with a session.
Every UDR type requires a set of rules that are processed in a certain order. In most cases only one
rule per incoming UDR type is defined.
You can use a primary expression to filter out UDRs that are candidates for a specific rule. If the UDR
is selected by the primary expression, it is matched with the existing sessions by using one or several
ID Fields as a key.
For UDRs with ID Fields matching an existing session, an additional expression may be used to specify
additional matching criteria. For example: If dynamic IP addresses are provided to customers based
on time intervals, the field that contains the IP address could be used in ID Fields while the actual
time could be compared in the Additional Expression.
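The matching order (primary expression, then ID Fields, then additional expression) can be modeled roughly in Python (an illustrative sketch only; the rule structure and field names are invented, not the product's API):

```python
def find_session(udr, rules, sessions):
    """Return the session matching a UDR, or None.

    rules: list of (primary, id_fields, additional) where
      primary(udr) -> bool filters candidate UDRs,
      id_fields    -> field names combined into the session key,
      additional(udr, session) -> bool adds extra matching criteria.
    sessions: dict mapping key tuples to session objects.
    """
    for primary, id_fields, additional in rules:
        if not primary(udr):
            continue                     # rule ignored, try the next one
        key = tuple(udr[f] for f in id_fields)
        session = sessions.get(key)
        if session is not None and additional(udr, session):
            return session
    return None
```
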
UDR Types: Click on the Add button to select a UDR Type in the UDR Internal Format dialog.
The selected UDR type will then appear in this field. Each UDR type may have a list of rules attached
to it. Selecting the UDR type will display its rules as separate tabs to the right in the Aggregation
profile configuration.

Primary Expression: (Optional) Enter an APL code expression that is evaluated before the ID
Fields are evaluated. If the evaluation result is false, the rule is ignored and the evaluation
continues with the next rule.
Note! Make sure that the selected fields are of the same type and appear in
the same order for all the rules that are defined for the agent.
Additional Expression: (Optional) Enter an APL code expression that is evaluated along with the
ID Fields.

The Additional Expression is useful when you have several UDR types, with a varying number of
ID Fields, that are about to be consolidated. Having several UDR types requires the ID fields to be
equal in number and type. If one of the types requires additional fields that do not have any
counterpart in the other type or types, these must be evaluated in the Additional Expression field.
Save the field contents as a session variable, and compare the new UDRs with it. For an example,
see Section 11.1.5.2, “Association - Radius UDRs”.
Create Session on Failure: Select this check box to create a new session if no matching session is
found. If the check box is not selected, no new session is created when no matching session is found.

Note! If you provide a primary expression, and it evaluates to false, the rule
is ignored and no new session is created.

If the order of the input UDRs is not important, all the rules should have this check box checked.
This means that the session object is created regardless of the order in which the UDRs arrive.

However, if the UDRs are expected to arrive in a particular sequence, Create Session on Failure
must only be selected for the UDR type/field that is considered to be the master UDR, i.e. the UDR
that marks the beginning of the sequence. In this case, all the slave UDR types/fields are targeted
for error handling if they arrive before their master UDR.

Note! At least one of all defined rules must have this check box selected.
Otherwise, no session will ever be created.
Add Rule: Click on this button to add a new rule for the selected UDR Type. The rule will appear
as a new folder to the right of the UDR Types in the Aggregation profile configuration.

Usually only one rule is required. However, in a situation where a session is based on an IP number,
stored in either a target or a source IP field, two rules are required: the source IP field can be listed
in the ID Fields of the first rule, and the target IP field in the ID Fields of the second rule.

Remove Rule: Click on this button to remove the currently displayed rule.
Directory: Enter the directory on the Storage Host where you want the aggregation data to be
stored.

Partial File Count: In this field you can enter the maximum number of partial files that you want
to be stored. Consider the following:

• Startup: All the files are read at startup. This takes longer if there are many partial files,
which is significant especially in a High Availability solution.

• Transaction commitment: When the transactions are committed, many small files (a large
Partial File Count) increase performance.

Max Cached Sessions: Enter the maximum number of sessions to keep in the memory cache.

This is a performance tuning parameter that determines the memory usage of the Aggregation
agent. Set this value low enough that the cache still fits in memory, but not too low, as this will
cause performance to deteriorate. For further information, see Section 11.1.3.12, “Performance
Tuning with File Storage”.
• Directory
Note! The fields listed above are only applicable when using file storage for aggregation.
You enable External Referencing of profile fields from the profile view's main menu. For detailed in-
structions, see Section 9.5.3, “Enabling External References in an Agent Profile Field”.
11.1.3.3.5.3. Couchbase
Profile: Select a Couchbase profile. This profile is used to access the primary storage for aggregation
sessions.

Mirror Profile: Selecting this Couchbase profile is optional. It is used to access a secondary storage,
providing read-only access for aggregation sessions. Typically, the mirror profile is configured
identically to a (primary) profile that is used by workflows on a different Execution Context or
another MediationZone® system. This is useful to minimize data loss in various failover scenarios.
The read-only sessions can be retrieved with APL commands. For more information and examples,
see the description of the Aggregation functions in the APL Reference Guide.
The Advanced tab is only available when you have selected Couchbase Storage. It contains properties
that can be used for performance tuning. For information about performance tuning, see
Section 11.1.3.13, “Performance Tuning with Couchbase Storage”.
• The Aggregation tab - In a batch workflow this tab contains the three subsidiary tabs, General,
APL Code and Storage.
• The Thread Buffer tab - For further information about the Thread Buffer tab, see Section 4.1.6.2.1,
“Thread Buffer Tab”.
Note! The Thread Buffer tab is only available for batch workflows.
The General tab enables you to assign an Aggregation profile to the agent and to define error handling.
With the Error Handling settings you can decide what you want to do if no timeout has been set in
the code or if there are unmatched UDRs.
Profile: Click Browse and select an Aggregation profile. In batch workflows, the profile must use
file storage.
All the workflows in the same workflow configuration can use different Aggregation
profiles. For this to work, the profile has to be set to Default in the Field settings tab
in the Workflow Properties dialog. After that, each workflow in the Workflow Table
can be assigned with the correct profile.
Force Read Only: Select this check box to only use the Aggregation Storage for reading aggregation
session data. Selecting this check box also means that the agent cannot create new sessions when
an incoming UDR cannot be matched to an existing session. A UDR for which no matching session
is found is handled according to the setting If No UDR Match is Found.

If you enable the read-only mode, timeout and defragmentation handling is also disabled.

When using file storage and sharing an Aggregation profile across several workflow configurations,
the read and write lock mechanisms that are applied to the stored sessions must be considered:

• There can only be one write lock at a time in a profile. This means that all but one
Aggregation agent must have the Force Read Only setting enabled.

• If all of the Aggregation agents are configured with Force Read Only, any number of read
locks can be granted in the profile.
If Timeout is Missing: Select the action to take if timeout for sessions is not set in the APL code
using sessionTimeout. The setting is evaluated after each consume or timeout function block
has been called (assuming the session has not been removed).

• Ignore - Do nothing. This may leave sessions forever in the system if the closing UDR does
not arrive.

• Abort - Abort the agent execution. This option is used if a timeout must be set at all times;
a missing timeout is then considered a configuration error.

• Use Default Timeout - Allow the session timeout to be set here instead of within the code.
If enabled, a field becomes available in which you enter the timeout, in seconds.
If No UDR Match is Found: Select the action that the agent should take when an arriving UDR
does not match any session and Create Session on Failure is disabled:

• Ignore - Discard the UDR.

• Abort - Abort the agent execution. Select this option if all UDRs are expected to be associated
with a session; this error case then indicates a configuration error.

• Route - Send the UDR on the route selected from the on list. This is a list of output routes
on which the UDR can be sent. The list is activated only if Route is selected.
The APL Code tab enables you to manage the detailed behavior of the Aggregation agent. You use
the Analysis Programming Language (APL) with some limitations but also with additional functionality.
For further information see the APL Reference Guide.
The main function block of the code is consume. This block is invoked whenever a UDR has been
associated with a session.
The timeout block enables you to handle sessions that have not been successfully closed, e.g. if
the final UDR has not arrived.
Code Area: This is where you write your APL code. For further information about the code area
and its right-click menu, see Section 2.2.7, “Text Editor”.

Compilation Test...: Use this button to compile the entered code and check for validity. The status
of the compilation is displayed in a dialog. Upon failure, the erroneous line is highlighted and a
message, including the line number, is displayed.
The Storage tab contains settings that are specific for the selected storage in the Aggregation profile.
Different settings are available in batch and real-time workflows.
Figure 319. The Aggregation Agent Configuration View - Storage Tab for File Storage
Defragment Session Storage Files: For batch workflows, the Aggregation Session Storage can
optionally be defragmented to minimize disk usage. When checked, configure the defragmentation
parameters:

Defragment After Every [] Batch(es): Run defragmentation after the specified number of batches.
Enter the number of batches to process before each defragmentation.

Defragment if Batch(es) Finishes Within [] Second(s): Set a value to limit how long the
defragmentation is allowed to run. This time limitation depends on the execution time of the last
batch processed. If the last batch finished within the specified number of seconds, the remaining
time is used for the defragmentation. The limit accuracy is +/- 5 seconds.

Defragment Session Files Older Than [] Minute(s): Run defragmentation on session storage files
that are older than this value, to avoid moving recently created sessions unnecessarily often.
Note! Defragmentation is only available for batch workflows using file storage.
The General tab enables you to assign an Aggregation profile to the agent and to define error handling.
With the Error Handling settings you can decide what you want to do if no timeout has been set in
the code or if there are unmatched UDRs.
All the workflows in the same workflow configuration can use different Aggregation
profiles. For this to work, the profile has to be set to Default in the Field settings
tab in the Workflow Properties dialog. After this, each workflow in the Workflow
Table can be assigned with the correct profile.
Force Read Only: Select this check box to only use the aggregation storage for reading aggregation
session data.

If you enable the read-only mode, timeout handling is also disabled.

When using file storage and sharing an Aggregation profile across several workflow configurations,
the read and write lock mechanisms that are applied to the stored sessions must be considered:

• There can only be one write lock at a time in a profile. This means that all but one
Aggregation agent must have the Force Read Only setting enabled.

• If all of the Aggregation agents are configured with Force Read Only, any number of read
locks can be granted in the profile.
If Timeout is Missing: Select the action to take if timeout for sessions is not set in the APL code
using sessionTimeout. The setting is evaluated after each consume or timeout function block
has been called (assuming the session has not been removed).

• Ignore - Do nothing. This may leave sessions forever in the system if the closing UDR does
not arrive.

• Abort - Abort the agent execution. This option is used if a timeout must be set at all times;
a missing timeout is then considered a configuration error.

• Use Default Timeout - Allow the session timeout to be set here instead of within the code.
If enabled, a field becomes available in which you enter the timeout, in seconds.
If No UDR Match is Found: Select the action that the agent should take when an arriving UDR
does not match any session and Create Session on Failure is disabled:

• Ignore - Discard the UDR.

• Log Event - Discard the UDR and generate a message in the System Log.

• Route - Send the UDR on the route selected from the on list. This is a list of output routes
through which the UDR can be sent. The list is activated only if Route is selected.
11.1.3.5.3. Storage
The Storage tab contains settings that are specific for the selected storage in the Aggregation profile.
Different settings are available in batch and real-time workflows.
When using file storage for sessions in a batch workflow, the Storage tab contains a setting to control
how often the timeout block should be executed. In this tab, you also specify when changes to the
aggregation data are written to file.
Figure 321. The Aggregation Agent Configuration View - Storage Tab for File Storage
Session Timeout Interval (seconds): Determines how often, in seconds, the timeout block is
activated for all outdated sessions.

Storage Commit Interval (seconds): Determines how often, in seconds, the in-memory data is
saved to files on disk.

Storage Commit Interval (#Processing Calls): Determines the number of Processing Calls before
the in-memory data is saved to files on disk. A 'Processing Call' is an execution of any of the blocks
consume, command or timeout.
Note!
• If Storage Commit Interval (seconds) and Storage Commit Interval (#Processing Calls)
are not configured, none of the sessions that are in RAM are saved to the local disk. This
also means that the session count displayed in the Aggregation Inspector will not include
these sessions.
• When Max Cached Sessions in the Aggregation profile is exceeded, and Storage Commit
Interval (seconds) and Storage Commit Interval (#Processing Calls) are not configured,
the agent deletes the oldest session. This is done in order to allocate space for the new session
while still staying within the limit.
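The eviction behavior described in the note can be sketched as a bounded, oldest-first cache (a Python illustration only; the agent's actual data structures are not documented here):

```python
from collections import OrderedDict

class SessionCache:
    """Keeps at most max_cached sessions; inserting beyond the limit
    deletes the oldest session to make room for the new one."""
    def __init__(self, max_cached):
        self.max_cached = max_cached
        self._sessions = OrderedDict()

    def put(self, key, session):
        if key not in self._sessions and len(self._sessions) >= self.max_cached:
            self._sessions.popitem(last=False)  # drop the oldest session
        self._sessions[key] = session

    def __len__(self):
        return len(self._sessions)
```
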
11.1.3.5.3.2. Couchbase
Figure 322. The Aggregation Agent Configuration View - Storage Tab for Couchbase
If Error Occurs in Storage: Select the action that the agent should take when an error occurs in
the storage:

• Ignore - Discard the UDR.

• Log Event - Discard the UDR and generate a message in the System Log.

• Route - Send the UDR on the route selected from the on list. This is a list of output routes
on which the UDR can be sent. The list is activated only if Route is selected.
11.1.3.6.1. Emits
The agent emits commands that change the state of the file currently processed.
Cancel Batch: The agent itself does not emit Cancel Batch messages. However, if the code contains
a call to the method cancelBatch, this causes the agent to emit a Cancel Batch.

Hint End Batch: If the code contains a call to the method hintEndBatch, this causes the agent
to emit a Hint End Batch.
11.1.3.6.2. Retrieves
The agent retrieves commands from other agents and, based on those commands, changes the state
change of the file currently processed.
Command Description
Begin Batch When a Begin Batch message is received, the agent calls the beginBatch function
block, if present in the code.
End Batch When an End Batch message is received, the agent calls the endBatch function
block, if present in the code.
Prior to End Batch, possible timeouts are called. Thus, when a time limit is reached,
the timeout function block will not be called until the next End Batch arrives. If the
workflow is in the middle of a data batch or is not currently receiving any data at all,
this could potentially be some time after the configured timeout.
Cancel Batch When a Cancel Batch message is received, the agent calls the cancelBatch function
block, if present in the code.
11.1.3.8. Introspection
Introspection describes the type of data an agent expects and delivers.
The agent produces UDRs or bytearray types, depending on the code, since UDRs may be dynamically created. It consumes any of the types selected in the UDR Types list.
11.1.3.9.1. Publishes
Note! The MIM parameters listed in this section are applicable when File Storage is selected
in the Aggregation profile.
This counter is reset each time the EC is started, but it might also be reset using the resetCounters alternative through a JMX client. See Section 8.5, “Aggregation Monitoring” for further information.
Online Session Count This MIM parameter contains the number of sessions cached in memory.
Online Session Count is of the int type and is defined as a global MIM context type.
Session Cache Hit Count When an already existing session is read from the cache instead of disk, a cache hit is counted.
This counter is reset each time the EC is started, but it might also be reset using the resetCounters alternative through a JMX client. See Section 8.5, “Aggregation Monitoring” for further information.
Session Cache Miss Count When an already existing session is requested and the Aggregation profile cannot read the session information from the cache and instead reads it from disk, a cache miss is counted. If a non-existing session is requested, this is not counted as a cache miss.
This MIM parameter contains the number of cache misses counted by the Aggregation profile.
This counter is reset each time the EC is started, but it might also be reset using the resetCounters alternative through a JMX client. See Section 8.5, “Aggregation Monitoring” for further information.
Session Count This MIM parameter contains the number of sessions in storage on the file system.
Session Count is of the int type and is defined as a global MIM context type.
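The relationship between the hit and miss counters can be sketched as follows (an illustrative Python model; the class and its fields are hypothetical, only the counting rules come from the descriptions above):

```python
class SessionStore:
    """Counts cache hits and misses the way the descriptions above
    state: a hit when an existing session is found in the cache, a
    miss when it must instead be read from disk. Requests for
    sessions that do not exist at all are counted as neither."""

    def __init__(self):
        self.cache, self.disk = {}, {}
        self.hits = self.misses = 0

    def get(self, session_id):
        if session_id in self.cache:
            self.hits += 1                       # found in the cache
            return self.cache[session_id]
        if session_id in self.disk:
            self.misses += 1                     # existing session, read from disk
            self.cache[session_id] = self.disk[session_id]
            return self.cache[session_id]
        return None                              # non-existing: no hit, no miss

store = SessionStore()
store.disk["a"] = {"bytes": 1}
store.get("a")   # miss: read from disk into the cache
store.get("a")   # hit: now served from the cache
store.get("b")   # non-existing session: not counted
print(store.hits, store.misses)  # 1 1
```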
11.1.3.9.1.2. Couchbase
Note! The MIM parameters listed in this section are applicable when Couchbase is selected
in the Aggregation profile.
Mirror Error Count is of the long type and is defined as a global MIM context type.
Mirror Found Count is of the long type and is defined as a global MIM context type.
Mirror Not Found Count is of the long type and is defined as a global MIM context type.
The parameter contains 20 counters for a series of 100 ms intervals. The first
interval is from 0 to 99 ms and the last interval is from 1900 ms and up.
Example 73.
• There are 100 mirror session retrieval attempts with a latency of 100
ms to 199 ms.
The parameter contains 15 counters for a series of one-minute intervals. The first
interval is from 0 to 1 minute and the last interval is from 14 minutes and up.
Example 74.
• There are 1000 sessions with a timeout latency that is less than one
minute.
• There are 100 sessions with a timeout latency of one to two minutes.
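Both latency parameters use the same fixed-width bucketing scheme, which can be sketched like this for the 100 ms case (illustrative Python; the function name is invented):

```python
def latency_bucket(latency_ms, n_buckets=20, width_ms=100):
    """Map a latency to one of n_buckets counters: bucket 0 covers
    0-99 ms, bucket 1 covers 100-199 ms, ..., and the last bucket
    covers everything from 1900 ms and up."""
    return min(latency_ms // width_ms, n_buckets - 1)

counters = [0] * 20
for latency in (50, 150, 150, 2500):  # sample latencies in ms
    counters[latency_bucket(latency)] += 1

print(counters[0], counters[1], counters[19])  # 1 2 1
```

The 15-counter, one-minute variant for timeout latency works the same way, with `width_ms=60000` and `n_buckets=15`.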
11.1.3.9.2. Accesses
The agent does not itself access any MIM resources. However, APL offers the possibility of both
publishing and accessing MIM resources and values.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
Note! The information in this section is only applicable when using file storage.
The Aggregation agent can store sessions on the file system using a storage server, but also in a cache.
The maximum size of the cache will be determined by the Max Cached Sessions parameter in the
Aggregation profile (see Section 11.1.3.3, “Aggregation Profile”) and the average size in memory of
a session. It is difficult to estimate the exact memory consumption through testing but the following
should be considered when implementing an Aggregation workflow:
1. Try to keep the session data small. Specifically, do not use large maps or lists in the sessions. These
will take up a lot of memory.
2. If memory issues are encountered, try decreasing Max Cached Sessions. To find out if the cache size is overdimensioned, you can study the memory of the Execution Context that is hosting the workflow in System Statistics. For information about System Statistics, see Section 7.12, “System Statistics”.
To avoid a large aggregation cache causing out of memory errors, the aggregation cache detects when the memory limit is reached. Once this is detected, sessions are moved from the memory cache to the file system.
Note! This has a performance impact, since the agent will have to read these sessions from the file system if they are accessed again. The Aggregation agent will log information in the Execution Context's log file if the memory limit has been reached and the size of the cache needs to be adjusted.
It is also possible to specify when updated aggregation sessions shall be moved from the cache to the file system by setting the mz.aggregation.storage.maxneedssync property in the executioncontext.xml file. This property shall be set to a value lower than Max Cached Sessions. For performance reasons, this property should be given a reasonably high value, but consider the risk of a server restart. If this happens, the cached data might be lost.
Hint! To speed up the start of workflows that run locally (on the Execution Context), set the
mz.aggregation.storage.profile_session_cache property in the
executioncontext.xml file to true (default value is false).
By doing so, the aggregation cache will be kept in memory for up to 10 minutes after a workflow
has stopped.
This in turn enables another workflow, that runs within a 10 minute interval after the first
workflow has stopped, and that is configured with the same profile, to use the very same allocated
cache.
Note that since the cache remains in memory for up to 10 minutes after a workflow stopped
executing, other workflows using other profiles might create caches of their own during this
time.
The memory space of the respective aggregation caches will add up in the heap. If the Execution
Context at a certain point runs out of memory, performance deteriorates as cache is cleared and,
as a result, sessions have to be read from and written to disk.
The profile session cache functionality will only be enabled in batch workflows where the Aggregation profile is not set to read-only, and the storage is placed locally to the Execution Context.
Warning! In real-time, when using memory caching without any file storage, i.e. when Storage Commit Interval is set to zero, make sure that you carefully scale the cache size to avoid losing sessions due to cache over-runs. An over-run cache is recorded by a system event in the System Log. For further information, see Section 11.1.4, “Aggregation Session Inspection”.
While the aggregation cache will never cause the Execution Context to run out of memory, it is still
recommended that you set the Max Cached Sessions low enough so that there is enough space for
the full cache size in memory. This will increase system performance.
11.1.3.12.3. Multithreading
If you have many sessions ending up in timeout, you can improve performance by enabling multithreading, i.e. using a thread pool, for the timeout function block in the Aggregation agent. When multithreading is enabled, the workflow can hand over sessions to the pool via the queue without having to wait for the read operations to complete, since the threads in the thread pool take care of that. With many threads, the throughput of read operations completed per second can be maximized.
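The hand-over pattern can be illustrated with a generic thread pool sketch (plain Python rather than APL; in the actual agent the pool is enabled in the configuration, not in code):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_timeout(session_id):
    # Stand-in for the timeout function block: read the session from
    # storage and process it. The read may block on I/O, which is why
    # several threads increase the number of reads completed per second.
    return f"timed out {session_id}"

# The workflow hands sessions over to the pool via a queue and
# continues immediately, without waiting for each read to complete.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_timeout, ["s1", "s2", "s3"]))

print(results)  # ['timed out s1', 'timed out s2', 'timed out s3']
```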
Warning! Setting or changing the aggregation properties that are described in the following
sections can have a negative impact on performance and may also cause loss of data. These
properties should only be used after consulting a MediationZone® expert.
When Couchbase is selected as the storage type in an Aggregation profile, a bucket is automatically
created during execution of a workflow. The bucket is named according to the configuration of the
assigned Couchbase profile. The bucket is populated with JSON documents that contain the aggregation
session data. This makes it possible to index the timeout information of aggregation sessions in
Couchbase.
The aggregation session data is fetched using a Couchbase view. The default name of the view is
timeout. Changing the default name is not recommended, even though it is possible to do so by
setting the property mz.cb.agg.viewname in the Advanced tab in the Aggregation profile. For
further information, see Section 11.1.3.3.6, “Advanced Tab”.
The data returned by the view is split into chunks of a configurable size. The size of each partial set of data can be configured by setting the property view.iteratorpageSize in the Advanced tab of the assigned Couchbase profile. Setting a higher value than the default 1000 may increase throughput performance, but this depends on the available RAM of the Execution Context host.
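The chunked fetching can be pictured as a paging generator (illustrative Python; only the property name view.iteratorpageSize and its default of 1000 come from the text above):

```python
def iterate_view(rows, page_size=1000):
    """Yield the view result in partial sets of page_size rows,
    mimicking how view.iteratorpageSize splits the returned data."""
    for start in range(0, len(rows), page_size):
        yield rows[start:start + page_size]

# 2500 rows fetched in three partial sets of at most 1000 rows each.
pages = list(iterate_view(list(range(2500)), page_size=1000))
print([len(p) for p in pages])  # [1000, 1000, 500]
```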
You can choose to update the result set from a view before or after a query, or you can choose to retrieve the existing result set from a view, in which case the results are possibly out of date, or stale. To control this behavior, set the property view.index.stale in the Advanced tab of the assigned Couchbase profile. The following settings are available:
• FALSE - The index is updated before the query is executed. This ensures that any documents updated
(and persisted to disk) are included in the view. The client waits until the index has been updated
before the query is executed, and therefore the response is delayed until the updated index is available.
• OK - The index is not updated. If an index exists for the given view, the information in the current
index is used as the basis for the query and the results are returned accordingly. This value is seldom
used and only if automatic index updates are enabled in Couchbase.
• UPDATE_AFTER - This is the recommended setting when using a Couchbase profile with Aggregation. The existing index is used as the basis of the query, but the index is marked for updating once the results have been returned to the client.
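The three settings differ only in when the index refresh happens relative to the query, which can be modeled as follows (illustrative Python, not the Couchbase client API):

```python
def query_view(stale):
    """Return the order of operations for each view.index.stale value:
    FALSE updates the index before the query (delaying the response),
    OK never updates it, and UPDATE_AFTER queries the existing index
    and then marks it for updating."""
    if stale == "FALSE":
        return ["update index", "run query", "return results"]
    if stale == "OK":
        return ["run query", "return results"]
    if stale == "UPDATE_AFTER":
        return ["run query", "return results", "mark index for update"]
    raise ValueError(f"unknown stale setting: {stale}")

print(query_view("UPDATE_AFTER"))
# ['run query', 'return results', 'mark index for update']
```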
11.1.3.13.2. Timeout
By default, there are two timeout threads that periodically check the Couchbase aggregation storage for timed out sessions. You can control how often this check is performed by setting mz.cb.agg.timeoutwait.sec in the Advanced tab in the Aggregation profile. The default value is 10 seconds. For further information, see Section 11.1.3.3.6, “Advanced Tab”.
You can also increase the number of threads that perform this check by setting the property mz.cb.agg.timeout_no_of_thread. Setting a higher value than the default may speed up detection of timeouts. However, the number of CPUs and the time that it takes for Couchbase to index accessed documents (session data) are limiting factors.
Hint! You can use the MIM parameter Session Timeout Latency as an indicator of the
timeout handling performance.
The sessions that are fetched from the Couchbase view are shuffled randomly in temporary buffers,
one for each workflow. This is done to minimize the probability that multiple workflows attempt to
time out the same sessions simultaneously. You can control the size of these buffers by setting
mz.cb.agg.randombuffer in the Advanced tab in the Aggregation profile. The default value
is 1000 sessions.
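The effect of the random buffer can be sketched like this (illustrative Python; each workflow shuffles its own buffer, so concurrent workflows are unlikely to attempt to time out the same sessions first):

```python
import random

def fill_buffer(view_sessions, buffer_size=1000, seed=None):
    """Take up to buffer_size sessions from the view result and
    shuffle them, as controlled by mz.cb.agg.randombuffer."""
    buffer = view_sessions[:buffer_size]
    random.Random(seed).shuffle(buffer)
    return buffer

sessions = [f"s{i}" for i in range(5)]
# Two workflows fill buffers from the same view result but process
# them in independently shuffled orders.
wf1 = fill_buffer(sessions, buffer_size=5, seed=1)
wf2 = fill_buffer(sessions, buffer_size=5, seed=2)
print(sorted(wf1) == sorted(wf2))  # True: same sessions, usually in different orders
```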
An Aggregation agent may receive a duplicate UDR that is handled within the same session but by a different workflow. When set to true, the property mz.cb.agg.collision.check prevents the last of the duplicate UDRs from being added to the session. The additional checks of the session data that this requires may have a negative impact on throughput performance. The default value of this property is false.
When the Aggregation agent creates a session or adds a UDR to an existing session, it waits for a response from the Couchbase storage before proceeding to the next UDR in the queue. This synchronous mode is enabled by default.
Note! When enabling asynchronous mode, it is strongly recommended to also set the Queue Worker Strategy to RoundRobin. This setting is available for real-time workflows in the Execution tab of the Workflow Properties. The default strategy may cause the workflow to hang. For information about the Queue Worker Strategy, see Section 4.1.8.4, “Execution Tab”.
In asynchronous mode, the maximum number of pending requests to Couchbase can be limited with
the property mz.cb.agg.async.nroutstandingrequests. The Aggregation agent blocks
incoming UDRs when the limit is reached. The default limit is 1000.
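This limiting behavior resembles a counting semaphore (illustrative Python; mz.cb.agg.async.nroutstandingrequests is the property from the text, everything else is a sketch):

```python
import threading

class AsyncRequestLimiter:
    """Blocks incoming UDRs once the number of pending storage
    requests reaches the configured limit, and lets them through
    again as responses arrive."""

    def __init__(self, max_outstanding=1000):
        self._slots = threading.Semaphore(max_outstanding)

    def send(self, udr):
        self._slots.acquire()   # blocks when the limit is reached
        # ... the asynchronous storage request would be issued here ...

    def on_response(self):
        self._slots.release()   # each response frees one slot

limiter = AsyncRequestLimiter(max_outstanding=2)
limiter.send("udr-1")
limiter.send("udr-2")
# A third send() would now block until a response arrives.
limiter.on_response()
limiter.send("udr-3")   # proceeds immediately after the response
print("done")
```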
In order to obtain the best possible performance in the Aggregation agent, you should disable automatic
index updates in Couchbase.
From a terminal window, update the index settings using the curl tool.
You may specify the IP address or hostname of any available node in the Couchbase cluster. If the
updates are successful, the changes will be applied to all nodes.
Note! The Aggregation Session Inspector only inspects sessions stored on disk. Hence, a real-time workflow Aggregation agent that is not configured with any Storage Commit Interval, or that uses Couchbase for storage, will not show any sessions.
A real-time workflow Aggregation agent that is configured with a Storage Commit Interval will not show all sessions.
To open the Aggregation Session Inspector, click the Tools button in the upper left part of the MediationZone® Desktop window, and then select Aggregation Session Inspector from the menu.
Initially the window is empty and must be populated with data using the corresponding Search Sessions
dialog, see Section 11.1.4.2, “The Search Sessions Dialog” for details. The following section describes
the options in the Edit menu. The File and View menus contain standard options for saving, closing
and refreshing.
Search... Displays the Search Sessions dialog where search criteria may be defined to identify
the group of sessions to be displayed, see Section 11.1.4.2, “The Search Sessions
Dialog” for further information.
Explore Session Displays a new window where the session variables may be viewed and, if Read Only was disabled in the Search Sessions dialog, the session variables may be edited as well.
An example of a UDR Viewer window:
Note! The window also appears when you double-click the Index column of the session.
Validate Storage When you select the Validate Storage menu item after performing a search, all the aggregation storage session files are validated. This is done by attempting to read the session data to establish what can and cannot be read. If the storage contains references to corrupt sessions, an option to remove them is given.
Views Allows a more detailed view of the UDRs in the session list. For further information
about UDR Views, see the MediationZone® Ultra Reference Guide.
Profile Select the Aggregation profile that corresponds to the data of interest.
Timeout Period If you select this check box you can select a timeout period from which you want
to display data. You can either select the User Defined option in the drop-down
list and then enter date and time in the From and To fields, or you can select one
of the predefined time intervals in the drop-down list; Today, Yesterday, This
Week, Previous Week, Last 7 Days, This Month or Previous Month.
Search Handling Disable Read Only if the content of the sessions needs to be altered. Exclusive access to the repository is required to alter the sessions, meaning that if a currently running workflow is using the selected profile, the workflow needs to be stopped in order to get exclusive access.
Disable Limit results if you want to fetch all sessions in the session index. This can be time and memory consuming. Change Limit results to get fewer or more results.
The total number of results is briefly shown in the status bar of the search result
window.
The Netflow agent collects router data and logs the interacting network elements' addresses and the number of bytes handled, while the Radius agent keeps track of who initiated the connection and for how long the connection was up. Thus, each user login session consists of two Radius UDRs (start and stop) and one or several Netflow UDRs. The Aggregation agent is used to associate the data from each login session. These additional rules apply:
• A Radius UDR belonging to a specific login session must always arrive before its corresponding
Netflow UDRs. If a Netflow UDR arrives without a preceding Radius UDR, it must be deleted.
• Within a Netflow UDR, the user initiating the session may act as a source or destination, depending
on the direction of data transfer. Thus, it is important to match the IP address from the Radius UDRs
with source or destination IP from the Netflow UDRs.
Note! The Radius specific response handling will not be discussed in this example. For further
information, see Section 13.1, “Radius Agents”.
Note! The input UDRs are not stored. Information from the UDRs is extracted and saved in the
session variables.
session ExampleSession {
string user;
string IPAddress;
long sessionID;
long downloadedBytes;
long uploadedBytes;
};
user The user initiating the network connection. This value is fetched from the
start Radius UDR.
IPAddress The IP address of the user initiating the network connection. This value is
fetched from the start Radius UDR.
sessionID A unique ID grouping a specific network connection session for the specific
user. This value is fetched from the start Radius UDR.
downloadedBytes The amount of downloaded bytes according to information extracted from
Netflow UDRs.
uploadedBytes The amount of uploaded bytes according to information extracted from
Netflow UDRs.
Pay attention to the use of the Additional Expression. The fields associating the start and stop Radius
UDRs are framedIPAddress and acctSessId. However, since there is no field matching the
latter within the Netflow UDRs, this field cannot be entered in the ID Fields area.
This is how arriving Radius UDRs are evaluated when configured according to Figure 326, “The Aggregation Profile - Association Tab - Radius UDRs”:
1. Initially, the UDR is evaluated against the Primary Expression. If it evaluates to false, all further validation is interrupted and the UDR is deleted without logging (since no more rules exist). Usually, invalid UDRs are set to be deleted. In this case, only UDRs of type start (acctStatusType=1) or stop (acctStatusType=2) are of interest.
2. If the Primary Expression evaluation was successful, the fields entered in the ID Fields area, together with the Additional Expression, are used as a secondary verification. If it evaluates to true, the UDR is added to the session; if not, refer to the subsequent step.
3. Create Session on Failure is the final setting. It indicates if a new session will be created if no
matching session has been found in step 2.
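The three steps can be condensed into a small evaluation function (an illustrative Python sketch; the field names are taken from the example, everything else is invented):

```python
def evaluate_udr(udr, sessions, primary_expr, id_fields,
                 additional_expr=None, create_on_failure=False):
    """Model of how an arriving UDR is matched:
    1) the Primary Expression gates the UDR,
    2) the ID Fields plus the Additional Expression look up a session,
    3) Create Session on Failure decides whether an unmatched UDR
       starts a new session."""
    if not primary_expr(udr):
        return "discarded"                      # step 1: failed the gate
    key = tuple(udr[f] for f in id_fields)
    session = sessions.get(key)
    if session is not None and (additional_expr is None
                                or additional_expr(udr, session)):
        return "added to session"               # step 2: match found
    if create_on_failure:
        sessions[key] = {"created_from": udr}   # step 3: new session
        return "new session created"
    return "discarded"

sessions = {}
start = {"acctStatusType": 1, "framedIPAddress": "10.0.0.1"}
result = evaluate_udr(
    start, sessions,
    primary_expr=lambda u: u["acctStatusType"] in (1, 2),
    id_fields=["framedIPAddress"],
    create_on_failure=True,
)
print(result)  # new session created
```

Note that Create Session on Failure also applies when a session is found but the Additional Expression evaluates to false, which is why the model falls through to step 3 in that case.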
Figure 327. The Aggregation Profile Editor - Association Tab - Netflow UDRs
This is how arriving Netflow UDRs are evaluated when configured according to Figure 327, “The
Aggregation Profile Editor - Association Tab - Netflow UDRs”:
1. If the DestinationIP, situated in the ID Fields area in the first Rules tab, does not match any existing
session, no new session is created. If a match is found, the UDR is associated with this session.
2. Regardless of the outcome of the first rule, all rules are always evaluated. Hence the second rule is
evaluated. If the SourceIP situated in the ID Fields area in the second Rules tab does not match
any existing session, no new session is created. If a match is found, the UDR is associated with this
session.
Note! Since Create Session on Failure is not enabled for any of the rules, the UDRs which do
not find a matching session will be deleted and cannot be retrieved.
Note! The timeout of a session is set to five days from the current date. Outdated sessions are removed and their data is transferred to a UDR of type OutputUDR, which is sent to ECS.
import ultra.Example.Out;
sessionInit {
Accounting_Request_Int radUDR =
(Accounting_Request_Int) input;
session.user = radUDR.User_Name;
session.IPAddress = radUDR.framedIPAddress;
session.sessionID = radUDR.acctSessionId;
}
consume {
/* Radius UDRs.
If a matching session is found, then there are two Radius UDRs. */
if (instanceOf(input, Accounting_Request_Int)) {
Accounting_Request_Int radUDR = (Accounting_Request_Int)input;
if (radUDR.acctStatusType == 2 ) {
OutputUDR finalUDR = udrCreate( OutputUDR );
finalUDR.user = session.user;
finalUDR.IPAddress = (string)session.IPAddress;
finalUDR.downloadedBytes = session.downloadedBytes;
finalUDR.uploadedBytes = session.uploadedBytes;
udrRoute( finalUDR );
sessionRemove(session);
return;
}
}
/* Netflow UDRs.
Depending on if the user downloaded or uploaded bytes, the
corresponding field data is used to update session variables. */
if (instanceOf(input, V5UDR)) {
V5UDR nfUDR = (V5UDR)input;
if ( session.IPAddress == nfUDR.SourceIP ) {
session.downloadedBytes = session.downloadedBytes +
nfUDR.BytesInFlow;
} else {
session.uploadedBytes = session.uploadedBytes +
nfUDR.BytesInFlow;
}
}
}
timeout {
// Outdated sessions are removed, and a resulting UDR is sent on.
OutputUDR finalUDR = udrCreate( OutputUDR );
finalUDR.user = session.user;
finalUDR.IPAddress = (string)session.IPAddress;
finalUDR.downloadedBytes = session.downloadedBytes;
finalUDR.uploadedBytes = session.uploadedBytes;
udrRoute( finalUDR );
sessionRemove(session);
}
The Analysis agent can be part of both batch and realtime workflows. Differences in the configuration
are described in Section 11.2.2.2.2, “Realtime Workflows”.
The Analysis Programming Language, APL, used by the Analysis agent is described in the APL Ref-
erence Guide.
11.2.1.1. Prerequisites
The reader of this information should be familiar with:
• Basic programming
For information about Terms and Abbreviations used in this document, see the Terminology document.
See the APL Reference Guide for descriptions of the Analysis Programming Language, APL, and the
available functions.
To open the APL Code Editor, click the New Configuration button in the upper left part of the MediationZone® Desktop window, and then select APL Code from the menu.
Hint! When double-clicking the dotted triangle in the lower right corner of the Configuration
window, the code area will be maximized. This is a useful feature when coding.
The generic code is imported by adding the following code in the agent code area, using the import
keyword:
If generic code is modified in the APL Code Editor, the change will automatically be reflected in all
agents that contain this code the next time each workflow is executed.
Since function overloading is not supported, make sure not to import functions with identical names, since this will cause the APL code to become invalid, even if the functions are located in different APL modules. This also applies if the functions have different input parameters, for example, a(int x) and a(string x).
Note! Not all functions will work in a generic environment, for example, functions related to
specific workflows or MIM related functions. This type of functionality must be included in the
agent code area instead.
Example 75.
import apl.Default.MyGenericCode;
The main menu changes depending on which Configuration type is open in the currently active tab. There is a set of standard menu items that are visible for all Configurations; these are described in Section 3.1.1, “Configuration Menus”.
The menu items that are specific to the APL Code Editor are described in the following sections:
Item Description
Import... Select this option to import code from an external file. Note that the file has to reside on
the host where the client is running.
Export... Select this option to export your code to an *.apl file that can be edited in other code editors, or be used by other MediationZone® systems.
Item Description
Validate Compiles the current APL code. The status of the compilation is displayed in a dialog.
Upon failure, the erroneous line is highlighted and a message, including the line number,
is displayed.
Undo Select this option to undo your last action.
Redo Select this option to redo the last action you "undid" with the Undo option.
Find... Displays a dialog where chosen text may be searched for and, optionally, replaced.
Find Again Repeats the search for the last string entered in the Find dialog.
The toolbar changes depending on which Configuration type is currently open in the active tab. There is a set of standard buttons that are visible for all Configurations; these buttons are described in Section 3.1.2, “Configuration Buttons”.
The additional buttons that are specific to APL Code Editor tabs are described in the following sections:
Button Description
Validate Compiles the current APL code. The status of the compilation is displayed in a dialog.
Upon failure, the erroneous line is highlighted and a message, including the line number,
is displayed.
Undo Select this option to undo your last action.
Redo Select this option to redo the last action you "undid" with the Undo option.
Find... Displays a dialog where chosen text may be searched for and, optionally, replaced.
Find Again Repeats the search for the last string entered in the Find dialog.
Zoom Out Zoom out the APL Code Area by modifying the zoom percentage number that you find
on the toolbar. The default value is 100(%). Clicking the button between the Zoom
Out and Zoom In buttons will reset the zoom level to the default value. Changing the
view scale does not affect the configuration.
Zoom In Zoom in the APL Code Area by modifying the zoom percentage number that you find
on the toolbar. The default value is 100(%). Clicking the button between the Zoom
Out and Zoom In buttons will reset the zoom level to the default value. Changing the
view scale does not affect the configuration.
11.2.2.2. Configuration
The Analysis agent configuration window is displayed when you right-click the agent in a workflow and select Configuration..., or when you double-click the agent.
Hint! When double-clicking the dotted triangle in the lower right corner of the Configuration
window, the code area will be maximized. This is a useful feature when coding.
The Configuration... dialog differs slightly depending on whether the Workflow Configuration is of batch or realtime type. The differences are pointed out in Section 11.2.2.2.1, “Batch Workflows” and Section 11.2.2.2.2, “Realtime Workflows”.
When the Analysis agent configuration window is confirmed, a compilation is performed in order to
extract the configuration data from the code.
The configuration dialog consists of two tabs; Analysis and Thread Buffer.
If the routing of the UDRs (the udrRoute command) is left out, the outgoing connection point disappears from the window, disabling connection to a subsequent agent.
Code Area This is the text area where the APL code, used for UDR processing, is entered. Code can be entered manually or imported. A third possibility is to use an import statement to access the generic code created in the APL Code Editor.
Entered code is color coded depending on the code type, and for input assistance, a pop-up menu is available. See Section 11.2.2.3, “Syntax Highlighting and Right-click Menu” for further information.
Below the text area there are line, column and position indicators, for help when
locating syntax errors.
Compilation Test... Compiles the entered code to evaluate its validity. The status of the compilation is displayed in a dialog. Upon failure, the erroneous line is highlighted and a message, including the line number, is displayed.
UDR Types Enables selection of UDR Types. One or several UDR Types that the agent expects
to receive may be selected. Refer to Section 11.2.2.5, “Input and Output Types”
for a detailed description.
Set To Input Automatically selects the UDR Type distributed by the previous agent.
For further information about the pop-up menu in the Code Area and the UDR Internal Format
browser, see Section 2.2.7, “Text Editor”.
The use and settings of private threads for an agent, enabling multithreading within a workflow, are configured in the Thread Buffer tab. For further information, see Section 4.1.6.2.1, “Thread Buffer Tab”.
An Analysis agent may be part of batch as well as realtime workflows. The dialogs are identical, except
for the Thread Buffer tab which is not present in realtime workflows. Other than that, the agent is
configured in the same way.
There are, however, some restrictions and differences to consider when designing a realtime workflow
configuration:
• APL plug-ins with transaction logic are not allowed. The agent does not perform any validation during workflow configuration; the workflow will abort upon activation of illegal code. The user must therefore keep track of what type of plug-ins are invoked.
• In order to make functions thread-safe they must be preceded by the synchronized keyword,
which will make it possible to alter global variables. It is possible to read global variables from any
function block, however, to avoid race conditions with functions updating the global variables, they
must only be accessed from within synchronized functions. See the section about Synchronized
Keyword in the APL Reference Guide for further information.
See the APL Reference Guide for further details on the specific commands.
Brown - Strings
Blue - Functions
Green - Types
Purple - Keywords
Orange - Comments
You can also press the CTRL+H keys to perform this action.
You can also press the CTRL+F keys to perform this action.
Find Again Repeats the search for last entered text in the Find/Replace dialog.
You can also press the CTRL+G keys to perform this action.
Go to Line... Opens the Go to Line dialog where you can enter which line in the code you
want to go to. Click OK and you will be redirected to the entered line.
You can also press the CTRL+L keys to perform this action.
Show Definition If you right click on a function in the code that has been defined somewhere
else and select this option, you will be redirected to where the function has
been defined.
If the function has been defined within the same configuration, you will simply
jump to the line where the function is defined. If the function has been defined
in another configuration, the configuration will be opened and you will jump
directly to the line where the function has been defined.
You can also click on a function and press the CTRL+F3 keys to perform this
action.
Note! If you have references to an external function with the same name
as a function within the current code, some problems may occur. The
Show Definition option will point to the function within the current
code, while the external function is the one that will be called during
workflow execution.
Show Usages If you right click on a function where it is defined in the code and select this option, a dialog called Usage Viewer will open and display a list of the Configurations that are using the function.
You can also select a function and press the CTRL+F4 keys to perform this
action.
UDR Assistance... Opens the UDR Internal Format Browser from which the UDR Fields may be
inserted into the code area.
You can also press the CTRL+U keys to perform this action.
MIM Assistance... Opens the MIM Browser from which the available MIM Resources may be
inserted into the code area.
You can also press the CTRL+M keys to perform this action.
Import... Imports the contents from an external text file into the editor. Note that the file
has to reside on the host where the client is running.
Export... Exports the current contents into a new file to, for instance, allow editing in
another text editor or usage in another MediationZone® system.
Use External Editor Opens the editor specified by the property mz.gui.editor.command in
the $MZ_HOME/etc/desktop.xml file.
Example 76.
Example:
mz.gui.editor.command = notepad.exe
You can also press the CTRL+SPACE keys to perform this action.
Indent Adjusts the indentation of the code to make it more readable.
You can also press the CTRL+I keys to perform this action.
Jump to Pair Moves the cursor to the matching parenthesis or bracket.
You can also press the CTRL+SHIFT+P keys to perform this action.
Toggle Comments Adds or removes comment characters at the beginning of the current line or
selection.
You can also press the CTRL+7 keys to perform this action.
Surround With Adds a code template that surrounds the current line or selection:
• if Condition (CTRL+ALT+I)
In order to make APL coding easier, the APL Code Completion feature will help you find and add
APL functions and UDR formats.
To access APL Code Completion, place the cursor where you want to add an APL function, press
CTRL+SPACE and select the correct function or UDR format. In order to reduce the number of hits,
type the initial characters of the APL function. The characters to the left of the cursor will be used as
a filter.
Note! Cloning is a costly operation in terms of performance, therefore it must be used with care.
Example 77.
In this example it is desired to alter the UDR in the Analysis agent and send it to Encoder_1,
while still sending its original value to Encoder_2. To achieve this, the UDR must be cloned.
The following code will create, alter, and route a cloned UDR on r_2 and will leave the original
UDR unchanged.
input=udrClone(input);
input.MyNumber=54;
udrRoute(input);
Note that input is a built-in variable in APL, and must be used for all UDRs entering the
agent.
Example 78.
An alternative solution to the one presented in the previous example is to clone the UDRs in an
Analysis agent and then route the UDRs to another Analysis agent in which amendment is per-
formed.
udrRoute(input,"r_3",clone);
input.MyNumber=54;
udrRoute(input,"r_2");
The incoming UDR is cloned and the clone is routed on to r_3. After that the original UDR can
be altered and routed to r_2.
Example 79.
Suppose there is a workflow with one Analysis agent, one input route streaming two different
input types (typeA and typeB), and two output routes. The two output routes take two different
UDR types - the first equals one of the input types (typeA), and the second is a new UDR
type (typeC), created from information fetched from the other input type (typeB).
Figure 335. Several UDR types can be routed to and from an Analysis agent.
if (instanceOf(input, typeA)) {
udrRoute((typeA)input,"r_2");
}
else {
typeC newUDR = udrCreate(typeC);
newUDR.field = ((typeB)input).field;
// Additional field assignments...
udrRoute(newUDR, "r_3");
}
The first udrRoute statement explicitly typecasts to the typeA type, while there is no
typecasting at all for the second udrRoute statement. This is because the input variable
does not have a known type (it can be either typeA or typeB), while newUDR is known by the
compiler to be of typeC.
Without any typecasting, the output type on r_2 would have been reported as an undefined UDR,
drudr, and the workflow would not have been valid.
11.2.2.6.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
End Batch The agent itself does not emit End Batch; however, it can trigger the collector to do
so by calling the hintEndBatch method. See the sections about beginBatch and
endBatch in the APL Reference Guide for information about the hintEndBatch
method.
Cancel Batch The agent itself does not emit Cancel Batch; however, it can trigger the collector to
do so by calling the cancelBatch method. See the section about cancelBatch in
the APL Reference Guide for further information.
Hint End Batch If the code contains a call to the hintEndBatch method, the agent will
emit a Hint End Batch.
Note! Not all collectors can act upon a call on a hintEndBatch request. Please
refer to the user's guide for the respective Collection agent for information.
11.2.2.6.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.
Command Description
Begin Batch When a Begin Batch message is received, the agent calls the beginBatch function
block, if present in the code.
Cancel Batch When a Cancel Batch message is received, the agent calls the cancelBatch function
block, if present in the code.
End Batch When an End Batch message is received, the agent calls the drain and endBatch
function blocks, if present in the code.
11.2.2.7. Introspection
The introspection is the type of data an agent expects and delivers.
Produced types are dependent on input type and the APL code. The agent consumes byte arrays and
any UDR type selected from the UDR Types list.
The agent does not publish or access any additional MIM parameters. However, MIM parameters can
be produced and accessed through APL. For further information about available functions, see the
section MIM Related Functions in the APL Reference Guide.
11.3.1.1. Prerequisites
The reader of this information should be familiar with:
• The Ultra Format Definition Language, which is described in the MediationZone® Ultra Reference Guide.
11.3.2.1. Overview
11.3.2.1.1. Categorizing
Incoming data is divided into categories by the agent. All data assigned the same categoryId is
accumulated in one category. Categorization is performed according to conditions set in APL in the
Analysis agent that usually precedes the Categorized Grouping agent.
11.3.2.1.2. Grouping
The incoming data accumulated in a category can be grouped into one or several files. Each categorized
set of data sent to the agent will have an associated filename, set either by default or in the APL
configuration. If no filename is configured in the preceding APL agent, a DEFAULT_FILENAME will
automatically be set by the agent for each category. The default file will always be situated in the top
directory of its category.
A category will be closed as soon as one of the configured closing conditions is met. There are four
different closing conditions available to configure in the agent. It is also possible to use APL to add a
closing condition. To close a category from APL, it is enough that one UDR sets the closing condition
to true.
When a closing condition for a category is met, an external script is executed, generating a file that
contains all the data of one category. If the Grouping feature is not enabled, the filename associated
with incoming data is ignored, only one file is created for each category, and the external script is not
used; the resulting file is emitted when a closing condition is met. This is useful when splitting is
desired and grouping is not needed.
The UDR types created by default in the Categorized Grouping agent can be viewed in the UDR
Internal Format Browser in the CatGroup folder. To open the browser, open an APL Editor, right-click
in the editing area, and select UDR Assistance....
The Categorized Grouping profile is loaded when you start a workflow that depends on it. Changes
to the profile become effective when you restart the workflow.
To open the editor, click the New Configuration button in the upper left part of the MediationZone®
Desktop window, and then select Categorized Grouping Profile from the menu.
Working Directory Absolute path to the working directory to be used by the agent.
It will be used to keep data over multiple input transaction boundaries. The same
root directory can be used for several agents over several workflows. To enable
concatenation and grouping over several activations of the workflow, all
Execution Contexts where the workflow can be activated must be able to access the
same global root directory. Each agent will create and work in its own unique
subdirectory.
An agent may leave persistent data if closing conditions are not met, if the workflow
aborts, or if Cancel Batch occurs in the last transaction.
Abort on Inconsistent Working Directory This setting controls the agent's behavior if its storage is not
in its expected state, that is, if the agent discovers that the persistent directory
does not have the expected contents.
• Not selected - Warn about the condition and continue from a clean state. Any
old data will be moved to a subdirectory.
Activate Use of Grouping Activating this option will lead to the spawning of an external process of
the script identified by the Script Path setting described next. If not used, the
category data will be concatenated.
Script Path Specifies the external script to be used for grouping operations.
Script Arguments Used to state the order and contents of arguments and flags in the user-defined
script. The reserved words %1 and %2 state the positions in the call where the
arguments are expected.
%1 During execution, this reserved word is replaced with the target file that
the script should create (including the absolute path).
%2 An absolute path to a directory that contains the files (and potential
subdirectories) that should be grouped. The agent guarantees that this directory
does not contain any other files or directories except for those that are subject
to grouping.
#!/bin/sh
cd "$2"          # Go to the directory stated in %2
tar cf "$1" *    # Tar the file contents of %2
gzip -9 "$1"     # Gzip the tarred file
mv "$1.gz" "$1"  # Rename to the previous filename
exit 0
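The same grouping pattern can be tried out standalone as follows. This is a sketch only: the temporary directory and file stand in for the %2 source directory and %1 target file that the agent would pass to the script; none of the names are part of the agent's contract.

```shell
#!/bin/sh
# Standalone sketch: pack the contents of a directory (the %2 role)
# into a single gzip-compressed tar file (the %1 role).
set -e
src=$(mktemp -d)        # stands in for the directory passed as %2
target=$(mktemp -u)     # stands in for the target file passed as %1
echo "record 1" > "$src/a.txt"
echo "record 2" > "$src/b.txt"
cd "$src"
tar cf "$target" .          # tar the category's files
gzip -9 "$target"           # compress the tar file
mv "$target.gz" "$target"   # rename back to the expected target name
```

After the rename, the target file is a gzip archive even though its name carries no .gz suffix, which is exactly what the example script above produces for the agent.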
Byte Count Specifies the byte count closing condition for the agent. This field must be set
to a value larger than zero.
File Count Specifies the file count closing condition for the agent. This value is optional.
Closing Interval Specifies the closing interval in seconds for the agent. This value is optional.
After each timeout, all categories will be closed and the timer will be moved
forward according to the timeout interval.
Close on Deactivation Setting this option will cause the agent to emit its data when the last file
has been finished by the workflow. If a workflow aborts before all categories have
been committed and this checkbox is enabled, the agent will try to log a warning in
the System Log that data remains in the persistent storage.
If more than one Closing Condition is reached, the condition with the highest number will be
reported.
0 - Timeout
1 - This is the last transaction during this activation
2 - APL code requested closing
3 - The input file count limit is reached
4 - The input file size limit for this category is reached or exceeded
11.3.2.3. Configuration
The Categorized Grouping agent configuration window is displayed when you right-click the agent
in a workflow and select Configuration..., or when you double-click the agent.
Browse... Select the Browse... button to open the Configuration Selection dialog. Browse
for and select the preferred Profile to be added to the agent.
Force Single UDR If this is disabled, the output files will automatically be divided into multiple
UDRs per file, in suitable block sizes.
11.3.2.4.2. Retrieves
Cancel Batch If a cancelBatch is emitted by any agent in the workflow, all data in the current
transaction will be disregarded. No closing conditions will be applied.
11.3.2.5. Introspection
The agent receives UDRs of CGAgentInUDR type and emits UDRs of CGAgentOutUDR type.
11.3.2.6.1. Publishes
Using Clean Storage This MIM parameter is used if an inconsistency is detected in the persistent
storage and the agent is configured to continue with a clean state. The MIM can be true
during the first transaction of an activation but will always be false on the
subsequent transactions within the same activation.
11.3.2.6.2. Accesses
Source File Count Received from the collector. This is accessed if Close on Deactivation is enabled.
APL offers the possibility of both publishing and accessing MIM resources and values. For a
listing of general MediationZone® MIM parameters, see Section 2.2.10, “Meta Information
Model”.
The agent does not itself produce any debug events. However, APL offers the possibility of producing
events.
11.3.3.1. Analysis_1
In the first Analysis agent a UDR of the type CGAgentInUDR is created and populated.
Example 81.
consume {
    //Create the CGAgentInUDR.
    debug(input.categoryID);
    CatGroup.CGAgentInUDR udr = udrCreate(CatGroup.CGAgentInUDR);
    udr.categoryID = (string)input.categoryID;
    udr.data = input.OriginalData;
    //When "Activate Use of Grouping" is enabled in the Cat_Group
    //profile, this file name will be used for the grouped data
    //in the tar file.
    udr.fileName = "IncomingFile";
    udr.closeGroup = false;
    //Route the UDR.
    udrRoute(udr);
}
When the CGAgentInUDR is created the field structure can be viewed from the UDR Internal Format
Browser. For further information, see Section 11.3.2.1.4, “Categorized Grouping Related UDR Types”.
11.3.3.2. Cat_Group_1
The Cat_Group agent is configured via the GUI. For configuration instructions see Section 11.3.2.2,
“Categorized Grouping Profile”.
When configured correctly, incoming UDRs with different categoryIDs are collected and handled
by the agent, which returns UDRs grouped by identical categoryIDs.
Example 82.
internal CGAgentOutUDR {
    string categoryID;
    int closingCondition;  //Indicates the closing condition
                           //that emitted the file.
    bytearray data;
    boolean isLastPartial; //True if last UDR of the input file.
    int partialNumber;     //Sequence number of the UDR in the
                           //file: 1 for the first, 2 for the
                           //second, and so on.
};
11.3.3.3. Analysis_2
In the Analysis_2 agent, the CGAgentOutUDR will be processed in the way the agent is configured.
In this example, one directory, one directory delimiter, and one file name are created. The
CGAgentOutUDR information is thereafter put in a multiForwardingUDR and finally routed to the
Disk_2 agent.
Example 83.
consume {
    //Create a fntUDR.
    fntAddString(fntudr, "CG_Directory");
    fntAddDirDelimiter(fntudr);
    //Create a filename.
    //Create a multiForwardingUDR.
    multiUDR.fntSpecification = fntudr;
    multiUDR.content = input.data;
    //Print closingCondition:
    //  0 = timeout,
    //  1 = close on deactivation,
    //  2 = APL requested closure,
    //  3 = input file count limit is reached,
    //  4 = the input file size limit is reached.
    udrRoute(multiUDR);
    counter = counter + 1;
}
The functions and components available in your installation depend upon the features in your chosen
license package. As such, certain features and functions described in this document may not be available
to you. Please consult your system administrator for further information.
11.4.1.1. Prerequisites
The reader of this User's Guide should be familiar with:
• APL
11.4.2. Overview
The Decompressor agent receives compressed data batches in Gzip format, extracts them, and routes
the decompressed data forward in the workflow. An empty or corrupt batch is handled by the agent
according to your configuration.
The Compressor agent receives data batches, compresses the data to Gzip format and routes the com-
pressed data forward in the workflow.
• Gzip: The agent will decompress the files by using gzip (Default).
Error Handling Select how you want to handle errors for files that cannot be decompressed:
• Cancel Batch: The agent will cancel the batch when a file cannot be decompressed
(Default). The default setting for Cancel Batch is to abort the workflow immediately,
but you can also configure the workflow to abort after a certain number of consec-
utive Cancel Batches, or to never abort the workflow on Cancel Batch. See Sec-
tion 4.1.8, “Workflow Properties” for further information about Workflow Proper-
ties.
• Ignore: The agent will ignore an input batch when a file cannot be decompressed,
and a log message will be generated in the System Log, see Section 7.11, “System
Log”.
Note! If you select the Ignore option, data will continue to be sent until
an error occurs in a batch, which means that erroneous data might be routed
from the Decompressor agent.
11.4.3.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Cancel Batch Emitted if a file cannot be decompressed. If you have configured the workflow to
abort after a certain number of consecutive Cancel Batches, or never to abort on Cancel
Batch, in the Workflow Properties, the collection agent will send the file to ECS along
with a message describing the error. See Section 4.1.8.2, “Error Tab” and Section 16.1,
“Error Correction System” for further information.
11.4.3.2.2. Retrieves
11.4.3.3. Introspection
Introspection is the type of data that an agent both recognizes and delivers.
11.4.3.4.1. Publishes
11.4.3.4.2. Accesses
• Ignored batch
OR
Reported when a batch cannot be decompressed and Error Handling is configured with ignore,
see Section 11.4.3.1, “Configuration”.
Note! The event message includes the file name if the appropriate MIM parameter is provided
by the collecting agent. If data is collected from a database, for example, no MIM parameter
is provided.
• Gzip: The agent will compress the files by using gzip (Default).
Compression Level Select the Compression level. The speed of compression is regulated using
a level, where "1" indicates the fastest compression method (less compres-
sion) and "9" indicates the slowest compression method (best compression).
The default compression level is "6".
Produce Empty Archives Check this to make sure that an archive is produced and routed forward
even if it has no content.
11.4.4.2.1. Emits
11.4.4.2.2. Retrieves
11.4.4.3. Introspection
Introspection is the type of data that an agent both recognizes and delivers.
11.4.4.4.1. Publishes
11.4.4.4.2. Accesses
11.5.1. Configuration
The Decoder configuration window is displayed when you right-click on a Decoder agent and select
the Configuration... option or when you double-click on the agent.
Decoder List of available decoders introduced via the Ultra Format Editor, as well as the default
built-in decoder for the MediationZone® internal format (MZ format tagged UDRs).
If the compressed format is used, the decoder will automatically detect this.
If the decoder for the MZ format tagged UDRs format is chosen, the Tagged UDR
type list is enabled.
Tagged UDR type List of the available internal UDR formats, stored in the Ultra and Code servers.
• Cancel Batch - The entire batch is cancelled. This is the standard behavior.
• Route Raw Data - Route the remaining, undecodable, data as raw data. This option
is useful if you want to implement special error handling for batches that are partially
processed.
Full Decode If enabled, the UDR will be fully decoded before output from the Decoder agent. This
may have a negative impact on performance: if not all fields are accessed in the
workflow, decoding every field in the UDR is unnecessary work. If it is important
that all decoding errors are detected, this option must be enabled.
If this option is disabled (default), the amount of work needed for decoding is minimized
using "lazy" decoding of field content. This means that the actual decoding work might
not be done until later in the workflow, when the field values are accessed for the first
time. Corrupt data (that is, data for which decoding fails) might not be detected during
the decoding stage, but can however cause a workflow to abort at a later processing
stage.
Note! The use and settings of private threads for an agent, enabling multi-threading within a
workflow, are configured in the Thread Buffer tab. For further information, see Section 4.1.6.2.1,
“Thread Buffer Tab”.
11.5.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Cancel Batch Emitted on failure to decode the received data.
11.5.2.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.
Command Description
End Batch Unless this has been generated by a Hint End Batch message, the decoder validates
that all the data in the batch was decoded. When using a constructed or blocked decoder,
the decoder does additional validation of the structural integrity of the batch.
11.5.3. Introspection
The introspection is the type of data an agent expects and delivers.
The agent produces one or more UDR types depending on configuration and consumes bytearray
type.
The agent does not publish nor access any MIM parameters.
For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.
• Splitting batch
11.6.1.1. Prerequisites
The reader of this information should be familiar with:
Meta data for data batches is kept for a configurable number of days. If the meta data of a batch has
been removed, a duplicate of this batch can no longer be detected.
If any duplicates are detected, a message is logged to the System Log, and the duplicate batch is can-
celled, which may cause the workflow to abort. For further information, see Section 7.11, “System
Log”.
To monitor the duplicate batches, the Duplicate Batch Inspector may be used. For further information,
see Section 11.6.3, “Duplicate Batch Inspector”.
It is only appropriate to use the agent after agents that may create duplicate batches, normally a
file-based collection agent. Several workflows may utilize the same Duplicate Batch profile. In this
case, their batches will be mutually compared.
11.6.2.1. Profile
The Duplicate Batch Detection profile is loaded when you start a workflow that depends on it. Changes
to the profile become effective when you restart the workflow.
To configure a Duplicate Batch Detection profile, click the New Configuration button in the upper
left part of the MediationZone® Desktop window, and then select Duplicate Batch Profile from the
menu.
If the Detection Method is modified after a Duplicate Batch Detection agent has been executed,
the already stored information will not match any records processed with the new profile version.
Max Cache Age Enter the number of days you want to keep the batch information in the
(Days) database.
Use CRC Check to create a checksum from the batch file data. The checksum is compared
with other batch files' checksums when searching for duplicate batch files.
Use Byte Count Check to use the number of bytes in the batch file for duplicate detection.
Use MIM Value Check to use a MIM value for duplicate detection.
A MIM name defined in the Named MIMs table is compared with a MIM
Resource that can be connected both with batches and workflows.
Named MIMs Use the Add button to create a list of user defined MIM names.
When the Duplicate Batch Detection agent is configured, each MIM name is
assigned to one MIM Resource that detection will be applied for.
Within the same workflow configuration the profiles configured to Use MIM
Value must map to the same MIM names.
11.6.2.2. Configuration
The Duplicate Batch Detection agent configuration window is displayed when you double-click a
Duplicate Batch Detection agent, or right-click it and select Configuration....
All workflows in the same workflow configuration can use separate Duplicate
Batch profiles; however, it is not possible to map MIM Values with different names
via different profiles. The mapping of MIM values for the Duplicate Batch agent
is done in the agent for the entire workflow configuration.
In order to appoint different workflow profiles, the Field Settings found in the
Workflow Properties dialog must be set to Default. When this is done, each
workflow in the Workflow Table can be appointed the correct profile.
Named MIMs A list of user defined MIMs as defined in the profile.
MIM Resource A list of existing MIM values to be mapped against the user defined Named MIMs.
Logged MIMs Selected MIM values to be used in duplicate detection message.
11.6.2.3.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Cancel Batch Emitted if a duplicate is found.
11.6.2.3.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.
Command Description
Begin Batch Removes all timed out detection data from the database cache.
End Batch Compares the incoming batch against the ones existing in the database. If a duplicate
is found, a cancel mark is emitted and an error message is written in the System Log.
If no duplicate is found, the data batch information for the current batch is stored in
the database.
11.6.2.4. Introspection
The introspection is the type of data an agent expects and delivers.
11.6.2.5.1. Publishes
11.6.2.5.2. Accesses
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Initially, the window is empty. To populate it, search criteria need to be specified in the Search
Duplicate Batches dialog. Select Search... from the Edit menu to access the dialog.
Edit menu Delete... Removes selected entry from the list. If no entry is selected, all entries
are deleted.
Edit menu Search... Displays the Search Duplicate Batches dialog, where search criteria
may be modified.
Edit menu Show MIM Val- Shows all MIM values for the selected duplicate batch.
ues
Show Batches Matching entries are bundled into groups of 500. This list shows which
group, out of how many, is currently displayed. An operation targeting
all matching entries will affect all groups.
Note that there is a limit of 100 000 entries for a match. If the match exceeds
this limit, any bulk operation (deletion, etc.) must be repeated for each
multiple of 100 000.
ID The index of the batch in the search results.
Txn ID The transaction ID of the batch.
Creation Time The time when the transaction was created.
MIM Values The MIM data stored for the batch.
11.7.1.1. Prerequisites
The reader of this information should be familiar with:
If a duplicate is found, a message is automatically logged in the System Log, and the UDR is marked
as erroneous and routed on a user defined route, for instance to ECS. If the UDR is routed to ECS, an
automatically generated ECS Error Code, DUPLICATE_UDR, is assigned to the UDR, which enables
searching for duplicate UDRs in ECS.
Duplication comparison is not based on the content of a complete UDR but on the content of the fields
selected by the user.
Note! If the same file happens to be reprocessed, all UDRs will be considered as being duplicates,
unless the cache is full, in which case a part of the cache will be cleared and the corresponding
amount of UDRs will be considered as non-duplicates. If the file contains a considerable number
of UDRs, the process of inserting all of them in ECS may be time-consuming.
Having a Duplicate Batch agent prior to the Duplicate UDR Detection agent will only make the
problem worse. The Duplicate Batch agent will not detect that the batch is a duplicate until the
end of the batch. At that point all UDRs have already passed the Duplicate UDR Detection agent
and are inserted, as duplicates, into ECS. Since the Duplicate Batch agent will flag for a duplicate
batch, the batch is removed from the stream forcing the Duplicate UDR Detection agent to also
remove all UDRs from ECS.
11.7.2.1. Profile
A Duplicate UDR Detection agent is configured in two steps. First a profile has to be defined, then
the regular configurations of the agent are made.
The Duplicate UDR profile is loaded when you start a workflow that depends on it. Changes to the
profile become effective when you restart the workflow.
To create a new Duplicate UDR profile configuration, click the New Configuration button in the upper
left part of the MediationZone® Desktop window, and then select Duplicate UDR Profile from the
menu.
Storage Host In the drop-down menu, select the preferred storage host where duplicate UDRs are
to be stored. Duplicate repositories are stored either on a specific Execution
Context or Automatically. If Automatic is selected, the same Execution Context
used by the running workflow will be selected, or, when the Duplicate UDR
Inspector is used, the Execution Context will be selected automatically.
Note! The workflow must be running on the same Execution Context where its
storage resides; otherwise, the Duplicate UDR Detection Agent will refuse to
run. If the storage is configured to be Automatic, its corresponding directory
must be on a file system shared between all the Execution Contexts.
Directory An absolute path to the directory on the selected storage host, in which to store the
duplicate cache.
Max Cache The maximum number of days to keep duplicated UDRs in the cache. The age of a
Age (days) UDR stored in cache is either calculated from the Indexing Field (timestamp) of a
UDR in the latest processed batch file, or from the system time, depending on
whether the Based on system arrival time is selected or not.
If the Date Field option, below, is not selected as indexing field, this field will be
deactivated and ignored, and cache size may only be configured using the Max
Cache Size settings. Default is 30 days.
Note! If the UDRs are out of range, this will be logged in the System Log.
Based on system arrival time If selected (it is unselected by default), the calculation of a UDR's age
will be based on the time when the UDR arrived in the system. In case of a longer
system idle time, this might have consequences, as described in Section 11.7.2.1.3,
“Using Indexing Field Instead of System Time”.
If not selected, the UDR age calculation will instead be made against the latest
Indexing Field (timestamp) of a UDR included in the previously processed
batch files.
See Figure 348, “UDR Removed from Cache based on Indexing Field or System
Time” for an overview of the difference between calculating UDR age using the
timestamp and the system time.
This option is used in combination with Date Field and Indexing Field.
Max Cache Size (thousands) The maximum number of UDRs to store in the duplicate cache. The value
must be in the range 100-9999999 (thousands); default is 5000 (thousands). The
cache is made up of containers covering 50 seconds each, and for every incoming
UDR, it is determined in which cache container the UDR will be stored.
During the initialization phase, the agent checks whether the cache is full or not. If
the check indicates that less than 10% of the cache is available, cache
containers will be cleared until 10% free cache is reached, starting with the
oldest container. Depending on how many UDRs are stored in each container, this
means that different amounts of UDRs may be cleared depending on the setup. If
the index field happens to have the same value in all the UDRs, all of the UDRs in
the cache will be cleared.
Note! If you have a very large cache size, it may be a good idea to split the
workflows in order to preserve performance.
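The 50-second container scheme means that a UDR's place in the cache can be pictured as an integer bucket of its indexing timestamp. This is purely illustrative arithmetic; the agent's actual container layout is internal, and the timestamps below are invented:

```shell
# Two timestamps less than 50 seconds apart can land in the same
# container; timestamps further apart land in different containers.
ts1=1700000123
ts2=1700000130
echo $((ts1 / 50)) $((ts2 / 50))
```

Because containers are cleared whole, starting with the oldest, all UDRs sharing a bucket are evicted together, which is why an index field with poor locality can cause large parts of the cache to be cleared at once.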
Indexing Field For performance reasons, this field should preferably be either an increasing
sequence number or a timestamp with good locality. This field will always be
implicitly evaluated.
Date Field If selected (default), the indexing field will be treated as a timestamp instead of a
sequence number, and this has to be selected to be able to set the maximum age of
UDRs to keep in the cache in the Max Cache Age (days) field above.
Checked Fields The fields to use for the duplication evaluation, when deciding whether or not a UDR
is a duplicate.
If the Checked Fields or Indexing Field are modified after an agent has executed,
the already stored information will be considered useless the next time
the workflow is activated. Hence, duplicates will never be found amongst the
old information, since a different type of metadata has replaced it.
The main menu changes depending on which configuration type is open in the currently
active tab. There is a set of standard menu items that are visible for all configurations;
these are described in Section 3.1.1, “Configuration Menus”.
There is one menu item that is specific to Duplicate UDR profile configurations, and it is described
in the following section:
Item Description
External References Enables External References in an agent profile field. Refer to Section 9.5.3,
“Enabling External References in an Agent Profile Field” for further information.
The toolbar changes depending on which configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all configurations; these buttons are described
in Section 3.1.2, “Configuration Buttons”.
The "cache time window" (see Figure 348, “UDR Removed from Cache based on Indexing Field or
System Time”) decides whether a UDR shall be removed from the cache or not. The maximum number
of days to store a UDR in cache is retrieved from the Max Cache Age configuration, and each time a
new batch file is processed (and the age of duplicate UDRs is calculated) the "cache time window"
will be moved forward and old UDRs will be removed.
• Using the latest indexing field (timestamp) of a UDR that is included in the previously processed
batch files.
Figure 348. UDR Removed from Cache based on Indexing Field or System Time
If the system has been idle for an extended period of time, there will be a "delay" in time. So when a
new batch file is processed, and system time is used for the UDR age calculation, the "cache time window"
will be moved forward with the delay included, and this might result in all UDRs being removed from
the cache, as shown in Figure 348, “UDR Removed from Cache based on Indexing Field or System
Time”. The consequence of this is that the improperly removed UDRs will be considered non-duplicates
and, hence, might be processed even though they actually are duplicates.
If the indexing field is used instead, a more accurate calculation is made, since the "system delay
time" is excluded. In this case only UDR 1 and UDR 2 will be removed.
• Directory
External Referencing of profile fields is enabled from the profile view's main menu. For detailed
instructions, see Section 9.5.3, “Enabling External References in an Agent Profile Field”.
11.7.2.2. Configuration
The Duplicate UDR Detection configuration window is displayed when double-clicking on a Duplicate
UDR Detection agent or right-clicking the agent and selecting Configuration....
Figure 349. Duplicate UDR Detection configuration window, Dup UDR tab.
Profile Select the Duplicate UDR profile you want the agent to use.
Workflows in the same workflow configuration can use separate Duplicate
UDR profiles, if preferred. To enable this, the profile must be set to Default
in the Workflow Table tab found in the Workflow Properties dialog. After
that, each workflow in the Workflow Table can be assigned the correct profile.
Duplicate Route Indicates on which route to send detected duplicates.
The list is not populated with output routes until the routes have been created and
the dialog is reopened.
Batch Source Information A list of MIM values, used when creating the error information for the ECS (if
routed to an ECS Forwarding agent). To display it from ECS, double-click the Error
Code for the UDR (that is, DUPLICATE_UDR for all duplicates, regardless of
which workflow or profile they originate from). Also, note that MIM values are selected
from the original UDR, not the duplicate.
The use and settings of private threads for an agent, enabling multi-threading
within a workflow, are configured in the Thread Buffer tab. For further information,
see Section 4.1.6.2.1, “Thread Buffer Tab”.
11.7.2.3.2. Retrieves
11.7.2.4. Introspection
The agent produces and consumes UDR types selected from the UDR Type list.
11.7.2.5.1. Publishes
Detected Duplicates This MIM parameter contains the number of detected duplicates in the current
batch.
11.7.2.5.2. Accesses
User selected The agent accesses user-selected values to log in ECS.
Reported when the agent has successfully opened its duplicate detection repository (cache).
Reported after each processed batch. The last number denotes UDRs that were too old to be compared
against, that is, they were older than the configured maximum age.
Note! Ensure that the Read Only check box is selected unless you need to delete batches from
the cache. If not selected, the profile will be locked and workflows using the profile will not be
able to write to the cache.
Processed Period Select to search for batches processed during a certain time period.
Content Period Select to search with respect to time span of the indexing field in batches. This
option is only available if the selected profile has a timestamp indexing field.
MIM Criteria Select to use a regular expression to search for a selected MIM resource value.
Sort Order Select to specify the sort order when displaying the list of batches.
Lock Handling Disable Read Only if batches need to be deleted from the cache. Exclusive access
to the cache is required for deleting batches, meaning that if a currently running
workflow is using the selected profile, the workflow needs to be stopped to be
able to get exclusive access.
File
Edit
View
Result List
Show Batches If a search results in a large number of batches, this enables switching between
different batches in the result list.
ID The index of the batch in the search results.
Txn ID The transaction ID of the batch.
Processed Date The date when the batch was processed.
MIM Values The MIM data stored for the batch. Double-click this field to view all MIM
values.
Content Start / End The Duplicate UDR Detection agent stores batches in date segments. These
columns show the date range of the actual data that was duplication-checked
during the transaction. If the transaction contains dates older than the Max Cache
Age, configured in the Duplicate UDR profile, Outside range is displayed.
If both Start and End show Outside range, all dates in the transaction were
older than Max Cache Age. UDRs that are outside range are always routed
as non-duplicates, since there is no duplicate data to compare them to.
These columns are only visible if Date Field is enabled in the Duplicate UDR
profile.
Records The number of records (UDRs) processed for a given batch.
Duplicates The number of duplicates found in the batch.
Suppress Encoding If enabled, the agent will not encode the incoming data. It expects a raw byte array
as the input type and will pass it through untouched. This mode is used when only
a header and/or a trailer is added to a data batch.
Encoder List of available encoders introduced via the Ultra Format Editor, as well as the
default built-in encoders for the MediationZone® internal formats: MZ format
tagged UDRs and MZ format tagged UDRs (compressed). Using the compressed
format will reduce the size of the UDRs significantly. However, since compression
requires more CPU, you should consider the trade-off between I/O and CPU
when choosing an encoder.
Note! The Header and Trailer tabs are described in Section 11.8.8, “Agent Services - Batch
Workflow”. The use and settings of private threads for an agent, enabling multi-threading within
a workflow, are configured in the Thread Buffer tab. For further information, see Section 4.1.6.2.1,
“Thread Buffer Tab”.
11.8.3.1. Emits
This agent does not emit anything.
11.8.3.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
currently processed file.
Command Description
Begin Batch Any headers defined in the Header tab are created and dispatched on all outgoing
routes before the first UDR is encoded.
End Batch Any trailers defined in the Trailer tab are created and dispatched on all outgoing
routes after the last UDR has been encoded.
11.8.4. Introspection
The introspection is the type of data an agent expects and delivers.
The agent produces the bytearray type and consumes the UDR types corresponding to the selected Encoder.
If Suppress Encoding is enabled, the bytearray type is consumed instead.
This agent does not publish nor access any MIM parameters.
They both offer the possibility of using MIM values, constants, and user-defined values in the header
or trailer. When selecting MIM resources, note that MIM values used in the data batch header are
gathered when a new batch begins, while MIM values used in the data batch trailer are gathered when
a batch ends. Thus, the number of outbound bytes, or UDRs, for any agent will always be zero if they
are referred to in data batch headers.
The windows for both header and trailer configuration are identical.
Suppress On No Data Indicates whether the header/trailer is suppressed (not added) when the batch
does not contain any data (UDRs or byte arrays).
Value Click on the Add button to populate the columns with items for the header
or trailer of the file. The items are added in the order they are specified.
MIM Defined If enabled, a MIM value will be part of the header. Size and Padding must be entered
as well.
For data batch headers, the MIM values are gathered at beginBatch.
User Defined If enabled, a user defined constant must be entered. If Size is empty or less than the
number of characters in the constant, Size is set to the number of characters in the
constant. If Size is greater than the length of the constant, Padding must be entered
as well.
Pad Only If enabled, a string is added according to the value entered for Size, filled with
Padding characters.
Size Size must always be entered to give the item a fixed length. It can only be omitted
if User Defined is selected, in which case it will be calculated automatically.
Padding Character to pad any remaining space with. Either a user defined character can be
entered, or one of the four predefined/special characters can be selected (Carriage
return, Line feed, Space, Tabulator).
Alignment Left or right alignment within the allocated field size.
Date Format Enabled when a MIM of type date is selected. A Date Format Chooser dialog is
opened, where a date format may be entered.
11.9.1.1. Prerequisites
The reader of this information has to be familiar with:
11.9.2. Overview
The SQL Loader agent is a batch processing agent designed to populate the database with data from
existing files, either residing in a local directory or on the server filesystem of the database.
Data can be collected using the Disk, FTP, or SFTP agents, and the supported databases are MySQL,
Sybase IQ, Netezza, and SAP HANA.
The Disk, FTP, and SFTP agents have an additional check box, Route FileReferenceUDR,
in their configuration dialogs:
This check box should be selected when using the SQL Loader agent.
The SQL Loader agent then forwards an SQLLoaderResultUDR containing information about the loaded
file, the number of inserted rows, the execution time, and any error messages, for logging purposes.
11.9.2.2.1. FileReferenceUDR
The FileReferenceUDR is the UDR format used to send data from the collection agent to the SQL
Loader agent.
Field Description
directory (string) This field states the name of the directory the data file is located in.
filename (string) This field states the name of the file data should be collected from.
fullpath (string) This field states the full path to the data file.
OriginalData (long) This field contains the original data in byte array format.
11.9.2.2.2. SQLLoaderResultUDR
The SQLLoaderResultUDR is the UDR that the SQL Loader agent sends out after having loaded the
data into the database. This UDR contains information for logging purposes that should be handled
by another agent in the workflow.
Field Description
errorMessage (string) This field contains any error message that might have been returned
during the loading of data.
executionTime (long) This field indicates the time it took to load the data into the database.
filename (string) This field contains the name of the file from which data was uploaded.
rowsAffected (long) This field indicates the number of affected rows.
OriginalData (long) This field contains the original data in byte array format.
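As an illustration, an Analysis agent placed after the SQL Loader agent could inspect the SQLLoaderResultUDR and log the outcome. The following is a minimal APL sketch; the use of debug for logging is an assumption for illustration, not a prescribed logging strategy:

```
consume {
    SQLLoaderResultUDR result = (SQLLoaderResultUDR) input;
    if (result.errorMessage != null) {
        // The load failed - record the error message and the file name
        debug("Load of " + result.filename + " failed: " + result.errorMessage);
    } else {
        // The load succeeded - record row count and execution time
        debug("Loaded " + result.filename + ": " + result.rowsAffected +
              " rows in " + result.executionTime + " ms");
    }
}
```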
11.9.3. Configuration
The configuration dialog for the SQL Loader agent is opened either by double-clicking on the agent,
or by right-clicking and selecting the Configuration... option.
The SQL Loader tab contains configurations related to the SQL query used for populating the database
with data from external files, as well as error handling.
Database Profile name of the database that the agent will connect to and forward data to.
MySQL, SybaseIQ, Netezza and SAP HANA profiles are supported.
SQL Statement In this field you enter the SQL statement that states where the files
containing the data are located, into which database table the data should
be inserted, and how the data is formatted.
See Section 11.9.8, “SQL Statements” for information about how to write the
statements.
Abort if exception Select this check box if you want the workflow to abort in case of an exception.
11.9.4.2. Retrieves
Cancel Batch If a cancelBatch is emitted by any agent in the workflow, all data in the current trans-
action will be disregarded. No closing conditions will be applied.
11.9.5. Introspection
The agent receives UDRs of FileReferenceUDR type and emits UDRs of SQLLoaderResultUDR
type.
The agent does not itself produce any debug events. However, APL offers the possibility of producing
events.
11.9.8.1. MySQL
For remote loading (the file resides in a local directory)
For server side loading (file resides in the server filesystem of the database)
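As a sketch of what such statements can look like in MySQL (the table name loader_table, the column delimiters, and the file paths are placeholder assumptions, not part of the product documentation):

```sql
-- Remote loading: the client reads the local file and sends it to the server
LOAD DATA LOCAL INFILE 'abc.txt'
INTO TABLE loader_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';

-- Server-side loading: the file must reside on the database server's filesystem
LOAD DATA INFILE '/var/load/abc.txt'
INTO TABLE loader_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```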
11.9.8.2. Sybase IQ
For server side loading (file resides in the server filesystem of the database)
The Sybase JConnect driver does not support remote file loading.
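A server-side LOAD TABLE statement for Sybase IQ might look as follows. This is a sketch; the table name, column list, path, and option set are assumptions, so consult the Sybase IQ reference for the full option list:

```sql
LOAD TABLE loader_table (col1, col2, col3)
FROM '/var/load/abc.txt'
DELIMITED BY ','
ROW DELIMITED BY '\n'
ESCAPES OFF
QUOTES OFF;
```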
11.9.8.3. Netezza
For remote loading (the file resides in a local directory)
For server side loading (file resides in the server filesystem of the database)
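For Netezza, loading is typically expressed through an external table. The sketch below assumes a placeholder table name and paths; the REMOTESOURCE option selects remote loading over the JDBC connection, and omitting it reads the file from the server side:

```sql
-- Remote loading via the JDBC connection
INSERT INTO loader_table
SELECT * FROM EXTERNAL '/local/path/abc.txt'
USING (DELIMITER ',' REMOTESOURCE 'JDBC');

-- Server-side loading (file on the Netezza host)
INSERT INTO loader_table
SELECT * FROM EXTERNAL '/var/load/abc.txt'
USING (DELIMITER ',');
```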
11.9.8.4. SAP HANA
1. Create a control file containing the code below (in this example, the control file is named abc.ctl and
abc.txt is the CSV file):
import data
into table SYSTEM."TEST_LOADER"
from 'abc.txt'
record delimited by '\n'
fields delimited by ','
optionally enclosed by '"'
error log 'abc.err'
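The control file can then be referenced from the SQL Statement field with an IMPORT statement along these lines (a sketch; the path to the control file is an assumption):

```sql
IMPORT FROM CONTROL FILE '/var/load/abc.ctl';
```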
11.10.1.1. Prerequisites
The reader of this information should be familiar with:
• Comverse Real-Time Billing Solutions proprietary protocol for Payment Server Interface
11.10.2.1. Overview
The PSI agent is a processing agent designed to support the Payment Server Interface provided by the
Comverse Realtime Billing System. The functionality exposed by these services is mapped to the
MediationZone® type system. The agent thereby emits and accepts a set of UDRs that represent the
requests that can be made and their corresponding responses and acknowledgements.
The PSI agent contains a number of different UDRs. The UDRs in turn contain a set of fields corresponding
to the fields required by the PSI application. The UDRs also contain a few internal fields that
can be used by the workflow logic.
In the UDR Internal Format Browser a detailed view of the available fields is displayed. To open
the browser, double-click or right-click on the Analysis Agent and select Configuration.... You then
right-click in the editing area and select the option UDR Assistance....
Figure 359. UDR Internal Format Browser showing a UDR with fields
The following requests and responses are provided by the agent and shown in the psi folder.
Note! Not all of the PSI messages are supported, just those included in the UDRs below.
11.10.2.1.2.1. ApplyTariffRequest
The ApplyTariffRequest UDR sends the parameters to the PSI agent to charge a subscriber using the
RTB tariff.
Field Description
bearerCapability (string) Refer to 'Bearer Capability' in the Comverse proprietary protocol for Payment Server Interface for the Apply Tariff Request specifications.
discount (string) Refer to 'Discount' in the Comverse proprietary protocol for Payment Server Interface for the Apply Tariff Request specifications.
originatingCallerId (string) Refer to 'Originating Caller ID' in the Comverse proprietary protocol for Payment Server Interface for the Apply Tariff Request specifications.
originatingSubscriberMSCAddress (string) Refer to 'Originating Subscriber MSC Address' in the Comverse proprietary protocol for Payment Server Interface for the Apply Tariff Request specifications.
subscriberType (int) Refer to 'Subscriber Type' in the Comverse proprietary protocol for Payment Server Interface for the Apply Tariff Request specifications.
terminatingCallerId (string) Refer to 'Terminating Caller ID' in the Comverse proprietary protocol for Payment Server Interface for the Apply Tariff Request specifications.
terminatingSubscriberMSCAddress (string) Refer to 'Terminating Subscriber MSC Address' in the Comverse proprietary protocol for Payment Server Interface for the Apply Tariff Request specifications.
11.10.2.1.2.2. ApplyTariffResponse
This UDR sends the response message from the PSI agent.
Field Description
statusID (int) Refer to 'Status ID' in the Comverse proprietary protocol for Payment
Server Interface for the Apply Tariff Response specifications.
transactionID (bytearray) Refer to 'Transaction ID' in the Comverse proprietary protocol for
Payment Server Interface for the Apply Tariff Response specifications.
11.10.2.1.2.3. PSICycleUDR
This UDR contains all the relevant UDRs for the entire message cycle between a SLU and the PSI
Agent, including errors and contexts. This is the only UDR that the PSI agent accepts and emits.
Field Description
ackUDR (PSISessionAckUDR) Refer to Section 11.10.2.1.2.4, “TransactionIdAcknowledge”.
associatedNumber (long) A value generated by the PSI Agent to uniquely identify a request. This should not be set by the calling agent.
context (any) This is an internal working field that can be used in the workflow configuration to keep track of and use internal workflow information related to the request, when processing the answer.
errors (list&lt;string&gt;) This list contains the errors sent from the PSI Agent.
hasErrors (boolean) If there are errors, this is set to true. The errors are listed in errors (list&lt;string&gt;).
reqUDR (PSISessionReqUDR) Refer to Section 11.10.2.1.2.1, “ApplyTariffRequest”.
respUDR (PSISessionRespUDR) Refer to Section 11.10.2.1.2.2, “ApplyTariffResponse”.
SLUIndex (int) This field indicates the specific SLU to which the session is bound. You should not modify this value.
11.10.2.1.2.4. TransactionIdAcknowledge
This UDR acknowledges the receipt of the response message sent from the PSI agent to a SLU.
Field Description
statusID (int) Refer to the 'Status ID' in the Comverse proprietary protocol for Payment
Server Interface, in the Apply Tariff section, for the Transaction ID Acknowledgement
specifications.
The agent will, to the extent possible, manage errors without aborting the workflow. Errors related to
the communication between the agent and the PSI will be sent to the System Log and from the PSI
agent via the cycleUDR (see errors and hasErrors in Section 11.10.2.1.2.3, “PSICycleUDR”).
11.10.2.2. Configuration
The PSI agent configuration window is opened by right-clicking on the node in a realtime workflow
and selecting the Configuration... option, or by double-clicking on the node.
Figure 360. The PSI Agent Configuration View - SLU list tab
In the SLU list tab, use the Add button to add the SLU (Service Logic Unit) Host and corresponding
Server Ports. The PSI requests are equally distributed by round robin to all the PSI SLU servers added.
In the Connection tab, you configure the heartbeat messages which are sent between MediationZone®
and the Payment Server to keep the session active when there is no other message activity from
MediationZone®.
Heartbeat interval (s) Specifies the interval period in seconds for sending heartbeat messages. The
default value is 60, the maximum value is 1800.
Request timeout (ms) Specifies the timeout period for responses from a SLU.
Request retries Specifies the number of SLUs to which sending a request is attempted on
a timeout. Entering a value of 1 means that an attempt is made to send a request
to 2 SLUs.
Figure 362. The PSI Agent Configuration View - Advanced properties tab
One example of properties that can be configured under the advanced tab is reconnectInterval,
which specifies the time interval after which the agent will try to reconnect.
See the text in the Properties field for further information about the properties.
11.10.2.3. Introspection
The agent receives and emits UDR types as defined in Section 11.10.2.1.1, “Request/Response Mapping”.
The agent does not publish or access any additional MIM parameters. However, MIM parameters can
be produced and accessed through APL. For further information about available functions, see the
section MIM Related Functions in the APL Reference Guide.
For information about the agent message event type, see Section 5.5.12, “Agent Event”.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
• PSI.TransactionIdAcknowledge
This message is displayed when the PSI agent attempts to write a TransactionID acknowledgement
message.
• PSI.ApplyTariffRequest
This message is displayed when the SLU does not understand the message sent.
• No connected SLU
This message is displayed when none of the SLUs send a heartbeat response.
This message is displayed when an unexpected heartbeat response is received from a SLU.
This message is displayed when a message received from a SLU is not recognized.
This message is displayed when a response is received from a SLU for which a request has not been
sent.
This message is logged when the SLU does not understand the message sent.
• No connected SLU
This message is logged when none of the SLUs send a heartbeat response.
This message is logged when an unexpected heartbeat response is received from a SLU.
This message is logged when a message received from a SLU is not recognized.
This message is logged when a response is received from a SLU for which a request has not been
sent.
11.10.4. Example
This example shows a workflow configured to receive requests via Diameter_Stack and, through the
Analysis agent, send messages to and receive messages from the PSI agent.
consume {
    if (instanceOf(input, Diameter.RequestCycleUDR)) {
        Diameter.RequestCycleUDR diameterUDR = (Diameter.RequestCycleUDR) input;
        Credit_Control_Request ccr = (Credit_Control_Request) diameterUDR.Request;

        // Create and populate PSI.ApplyTariffRequest
        PSI.ApplyTariffRequest applyTariffReq = udrCreate(PSI.ApplyTariffRequest);
        applyTariffReq.originatingCallerId = ccr.Subscription_Id.Subscription_Id_Data;
        applyTariffReq.terminatingSubscriberMSCAddress = "10.0.17.42";
        applyTariffReq.subscriberType = 1;
        applyTariffReq.bearerCapability = "2";
        applyTariffReq.discount = "0";
        // etc ...

        // Create and populate PSICycleUDR
        PSI.PSICycleUDR cycle = udrCreate(PSI.PSICycleUDR);
        cycle.context = ccr;
        cycle.reqUDR = applyTariffReq;
        udrRoute(cycle, "to_psi");
    } else if (instanceOf(input, PSI.PSICycleUDR)) {
        // Create and route Credit_Control_Answer
        udrRoute(createCCA((PSI.PSICycleUDR) input), "to_diameter");

        // Create transactionId ack
        PSI.PSICycleUDR cycle = (PSI.PSICycleUDR) input;
        if (cycle.hasErrors) {
            for (int i = 0; i < listSize(cycle.errors); i++) {
                string error = listGet(cycle.errors, i);
                // handle errors
                debug(error);
            }
        } else {
            if (instanceOf(cycle.respUDR, PSI.ApplyTariffResponse)) {
                PSI.TransactionIdAcknowledge trIdAck = udrCreate(PSI.TransactionIdAcknowledge);
                trIdAck.statusID = 0;
                cycle.ackUDR = trIdAck;
                udrRoute(cycle, "to_psi");
            }
        }
    }
}
11.11.1.1. Prerequisites
The reader of this information should be familiar with:
Note! The agent is compatible with the services defined in the Comverse Interface Control
Document for Open Services Access release 4.6.
The agent communicates with the billing system using SOAP over HTTP. In order to retrieve responses
from the billing system, a separate service needs to be configured for the Execution Context. For further
information, see Section 11.11.2.1, “Preparations”. This service is shared by all RTBS agents running
on the same Execution Context.
11.11.2.1. Preparations
When installing the RTBS agent for the first time, properties must be added to configure the Execution
Contexts.
1. For each Execution Context that the RTBS workflows will execute on, two properties for callback
have to be added in the executioncontext.xml file found in $MZ_HOME/etc:
• Callbackhost, defines the name or IP address of the interface on the machine hosting the
Execution Context that should receive responses.
11.11.2.2. Overview
11.11.2.2.1. Asynchronous Request
The RTBS agent (Parlay) uses both synchronous and asynchronous requests. It is important to understand
the differences between them, since they affect the requirements on the business logic in the workflow.
Usually, when routing a UDR to a subsequent agent, the agent performing the route can trust that the
subsequent agents have completed their tasks before it continues with any post activity. For asynchronous
requests, this is not the case.
The RTBS agent cannot assume that the request has been successful until the corresponding response
comes back from the agent. This means that the agent following the RTBS agent must manage any
operations that are supposed to take place when the response comes back.
Note! When a response connected to an asynchronous request returns, the workflow is driven
by the response and not by the collection agent as usual. Therefore, the call path for the workflow
is different and the workflow logic needs to manage this. This is typically done by using the
Aggregation agent to store variables and states that are needed to pick up and handle a response
from an asynchronous agent.
For a list of requests and the category that each of them belongs to, see Section 11.11.2.2.2, “Request/Response
Mapping”. Basically, UDRs ending with Req represent asynchronous requests and UDRs
starting with Get represent synchronous requests.
The RTBS agent contains a number of different UDRs. The UDRs in turn contain a set of fields corresponding
to the fields required by the billing system. The UDRs also contain a few internal fields
that can be used by the workflow logic.
There are also additional types that the agent provides, which are used in order to populate the request
and response. These data types are prefixed Tp.
In the UDR Internal Format Browser a detailed view of the available fields is displayed. The browser
is opened by clicking on the Configuration menu and selecting the option APL Code..., and then right-clicking
in the editing area and selecting the option UDR Assistance....
Figure 364. UDR Internal Format Browser showing a UDR with fields
The following requests and responses are provided by the agent and shown in the rtbs folder.
The agent will, to the extent possible, manage errors without aborting the workflow. Errors related to
the communication between the agent and the billing system will be sent to the system log and routed
into the workflow via the RequestException UDR where the two fields errorMessage and
errorDetails contain the details of the error. The RequestException type will also be used
for invalid requests detected by the agent. Other errors will only be written to the System Log.
11.11.2.3. Configuration
The RTBS agent configuration window is opened by right-clicking on the node in a realtime workflow
and selecting the Configuration... option, or by double-clicking on the node.
Host Specifies the name or IP address of the machine where the Charging Manager
service is located.
Port Specifies the port that the Charging Manager listens to.
Path Specifies the path to the Charging Manager service at the supplied host and
port above.
Max Blocking Threads Enter the maximum number of simultaneous threads that can call upon a service.
Any workflow thread that attempts an I/O request on this specific service after the
max quota has been reached will be blocked and will cause the RTBS agent to
generate an exception.
Note! The service is identified by its host and port numbers. For example:
http://127.0.0.1:8080/axis/services/IPChargingManager/12345 is identified
as 127.0.0.1:8080. This value is also used when determining the number of
simultaneous threads that are sent to the IpChargingManager.
Blacklist Timeout Enter the number of seconds during which a service should remain blacklisted.
A service blacklist is triggered when a thread receives a SocketTimeoutException.
A blacklisted service cannot be called upon. Any attempt to call it will result in
an exception that is routed into the workflow.
HTTP Timeout Enter the maximum number of milliseconds during which a service can be blocked.
If a call does not receive a reply within this period, an exception is generated. This
in turn, triggers the service blacklist.
Debug Specifies if the agent should provide latency and throughput measures as debug
events. The latency and throughput of every asynchronous request will be measured.
It will send this information as a Debug Event every ten seconds, if this option is
enabled. For further information about debug events, see Section 11.11.2.7, “Debug
Events”.
11.11.2.4. Introspection
The agent receives and emits UDR types as defined in Section 11.11.2.2.2, “Request/Response Mapping”.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
This event is reported during workflow initialization. It shows the reference to the Charging Manager.
This event is reported during workflow initialization, in order to tell the user that the centralized web
server in the Execution Context is about to be initialized.
This event is reported if Debug has been activated in the agent's configuration
dialog. It reports statistics on the measured number of simultaneous RTBS sessions, the average
throughput, latency (average and max), and the number of requests.
11.11.4. WFCommands
The RTBS-relevant wfcommands are:
• listBlackList: Lists all the OSA service nodes that are currently blacklisted. The list also includes
the time when the OSA service nodes were blacklisted.
Example 84.
Example 85.
11.11.5. APL
Unlike other MediationZone® agents, the RTBS agent emits RequestException with a message string
that begins with "NODE BUSY ERROR". In order to handle such errors correctly, you need to adjust
your APL code as follows: When an error occurs, the request is dropped by MediationZone®.
Therefore, for Diameter compliance, the workflow should send the message "DIAMETER_UNABLE_TO_DELIVER"
to the GGSN node.
11.11.6. Example
This example shows a workflow configured to receive requests via TCP/IP and, through the Request
(Analysis) node, make a reservation in the RTBS node. The session state between the network and RTBS
is stored in the State (Aggregation) node while the network sends new requests for an ongoing session.
The Request node checks if there is an ongoing session in the State node before a new request is sent
to the RTBS node.
Before passing a response back to the network element, the response returning from the RTBS agent
may need to be enriched with data required for the response to the network element using the state of
the ongoing session, kept in the State agent.
import ultra.rtbs;
consume {
// Initial reservation request
ReserveUnitReq req = udrCreate(ReserveUnitReq);
req.sessionDescription = "";
req.merchantAccount = udrCreate(TpMerchantAccountID);
req.merchantAccount.AccountID = 1732380001;
req.merchantAccount.MerchantID = "OSA Merchant-001 (ET-NJ)";
req.correlationID = udrCreate(TpCorrelationID);
req.correlationID.CorrelationID = 10;
req.correlationID.CorrelationType = 1;
req.user = udrCreate(TpAddress);
req.user.AddrString = "8082340211";
req.user.Name = "Name";
req.user.Plan = "P_ADDRESS_PLAN_E164";
req.user.Presentation = "P_ADDRESS_PRESENTATION_UNDEFINED";
req.user.Screening = "P_ADDRESS_SCREENING_UNDEFINED";
req.user.SubAddressString = "Subaddresssss";
// Application Description
req.applicationDescription = udrCreate(TpApplicationDescription);
req.applicationDescription.Text = "description";
req.applicationDescription.AppInformation = listCreate
(TpAppInformation);
TpAppInformation appInfo = udrCreate(TpAppInformation);
appInfo.Timestamp = "2008-06-01 00:00";
listAdd(req.applicationDescription.AppInformation, appInfo);
// Charging Parameters
req.chargingParameters = listCreate(TpChargingParameter);
TpChargingParameter param = udrCreate(TpChargingParameter);
param.ParameterID = 2;
param.ParameterValue = udrCreate(TpChargingParameterValue);
param.ParameterValue.StringValue = "OSA";
listAdd(req.chargingParameters, param);
// Volume
req.volumes = listCreate(TpVolume);
TpVolume volume = udrCreate(TpVolume);
volume.Unit = 3;
volume.Amount = udrCreate(TpAmount);
volume.Amount.Exponent = 1;
volume.Amount.Number = 6;
listAdd(req.volumes, volume);
// Pass to RTBS
udrRoute(req);
}
12.1. Archiving
12.1.1. Introduction
This section describes the Archiving agents. These are standard agents on the DigitalRoute® MediationZone® platform.
12.1.1.1. Prerequisites
The reader of this information should be familiar with:
12.1.2. Overview
With the Archiving agents, MediationZone® offers the possibility to archive data batches for a configurable period of time. There are two agents:
• The Archiving agent (also referred to as the Global Archiving agent), which stores the data on the
platform machine.
• The Local Archiving agent, which stores the data locally on the Execution Context machine. Note
that local data cannot be exported.
The Archiving agents can be configured to archive all received data batches. Each data batch is saved
as a file in a user specified repository. The Global Archiving agent also saves a corresponding reference
in the database, enabling the Archive Inspector to browse and purge the data batch files.
Depending on the selected profile, the Archive services are responsible for naming and storing each
file, and for purging outdated files on a regular basis. Utilizing the Directory Templates and base
directories specified in the Archive profile, directory structures are dynamically built when files are stored.
The system administrator defines what structure is suitable for each profile. For instance, the directory
structure can be set to change with respect to the collecting agent name on a daily basis. The Archive
services will automatically create all directories needed in the base directory or directories.
12.1.3. Configuration
You configure an archiving agent in three steps:
The full path of each filename to store in the archive is completely dynamic via the Archive File
Naming Algorithm. The name is determined by three parameters:
AAA/BBB/CCC
Where:
AAA Represents one of the base directories specified in the Base Directory list in the Archive profile
configuration. If several base directories exist, this value will change according to the frequency
selected from the Switch Policy list.
For instance, if the template contains Month, Directory delimiter, Day this will yield new
directories every day, named 03/01, 03/02 ... 03/31, 04/01, 04/02 ... 04/30 and so
on. In this example, files are stored in a directory structure containing all months, which in
turn contains directories for all days (which in turn will contain all files from that day).
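The template expansion described above can be sketched in Python. The `expand_template` function and its token strings are hypothetical; only the semantics of the Month, Day, and Directory delimiter tokens come from the text.

```python
import os
from datetime import date

def expand_template(tokens, when):
    """Build a subdirectory path from directory-template tokens (illustrative sketch)."""
    parts, current = [], ""
    for token in tokens:
        if token == "Directory delimiter":
            parts.append(current)   # start a new subdirectory level
            current = ""
        elif token == "Month":
            current += "%02d" % when.month
        elif token == "Day":
            current += "%02d" % when.day
        else:
            current += token        # user defined text token
    parts.append(current)
    return os.path.join(*parts)
```

For the template Month, Directory delimiter, Day, a file archived on March 1st lands in 03/01, matching the example above.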
The Archive profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow.
To open the configuration, click the New Configuration button in the upper left part of the MediationZone® Desktop window, and then select Archive Profile from the menu.
Switch Policy If several base directories are configured, the switch policy determines for how
long the Archive services will populate each base directory before starting to
populate the next one (daily, weekly, or monthly). After the last base directory
has been populated, the archiving wraps to the first directory again.
Base Directory One or several base directories that can be used for archiving of files. For considerable
amounts of data to be archived, several base directories located on different
disk partitions might be needed.
Directory Template List of tokens that, at run-time, build subdirectory names appended to one of the
base directories. The tokens can be either special tokens or user defined values.
Subdirectories on any level can be constructed by using the special token Directory delimiter.
Remove Entries (days) If enabled, files older than the entered value will be deleted from the archive.
Depending on the agent using the profile, the removal will occur differently.
• For the Local Archiving agent, the cleanup of outdated files is managed by the
workflow. It removes the file from its archive directory.
• For the Archiving agent, the cleanup of outdated files is managed by the Archive
Cleaner task. It removes the reference in MediationZone®, as well as the file
itself from its archive directory. Consequently, the data storage is also dependent
on the setup of the task scheduling criteria.
If Keep Files and Remove Entries (days) are combined, only references in the
database are removed while the files remain on disk (not valid for the Local
Archiving agent).
You enable External Referencing of profile fields from Archive Profile in the Edit menu. For detailed
instructions, see Section 9.5.3, “Enabling External References in an Agent Profile Field”.
When you apply External Referencing to profile fields, the following profile parameters are affected:
Base Directory The directory paths that you add to this list are included
in the properties file that contains the External References.
Example 86.
For example:
myBaseDirectoryKey = /mypath/no1, /mypath/no2
Remove Entries (days) The value with which you set this entry is included in
the properties file and interpreted as follows:
myRemoveEntriesKey = 1
#! Remove after 1 day
myRemoveEntriesKey = 365
#! Remove after 365 days
myRemoveEntriesKey = -1
#! Do not remove.
#! This value is equal to clearing the
#! check-box.
• Month - Inserts two digits representing the month the file was archived.
• Day - Inserts two digits representing the day of the month the file was archived.
• Hour - Inserts two digits representing the hour (24) of the day the file was archived.
• Agent directory name - Inserts the MIM value(s) defined in the Agent Directory
Name list in the Archiving agent configuration window.
• Day index - Inserts a day index between zero and the value entered in the Remove Entries
(days) field. This number is increased by one every day until (the Remove Entries (days)
value - 1) is reached. It then wraps back to zero. Day index may not be used in the
template if Remove Entries (days) is disabled.
• Directory delimiter - Inserts the standard directory delimiter for the operating system
it distributes files to. This way, a sub-directory is created.
Text If enabled, the token is entered from the text field. When disabled, the token is instead
selected from the Special token list.
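The Day index wrap-around described above amounts to a modulo operation. A minimal sketch, with invented function and parameter names:

```python
def day_index(days_since_start, remove_entries_days):
    # Increases by one per day, and wraps back to zero after
    # reaching remove_entries_days - 1.
    return days_since_start % remove_entries_days
```

With Remove Entries (days) set to 30, days 0 through 29 yield indices 0 through 29, and day 30 wraps back to 0.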
The main menu changes depending on which Configuration type is open in the currently
active tab. There is a set of standard menu items that are visible for all Configurations, and these are
described in Section 3.1.1, “Configuration Menus”.
There is one menu item that is specific for Archive profile configurations, and it is described in the
coming section:
Item Description
External References Enables External References in an agent profile field. Please refer to Section 12.1.3.1.1, “Enabling External Referencing” for further information.
The toolbar changes depending on which Configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all Configurations, and these buttons are described
in Section 3.1.2, “Configuration Buttons”.
Profile Name of the Archive profile to be used when determining the attributes of the
target files.
Input Type The agent can act on two input types. The behavior varies depending on the
input type that you configure the agent with.
The default input type is bytearray. For information about the agent behavior
with the MultiForwardingUDR input type, see Section 12.1.3.4, “MultiForwardingUDR Input”.
Compression Compression type of the target files. Determines if the agent will compress the
files before storage or not.
Agent Directory Name Possibility to select one or more MIM resources to be used when naming a
subdirectory in which the archived files will be stored. If more than one MIM
resource is selected, the values making up the directory name will automatically
be separated with a dot.
Logged MIM Data MIM values to be logged as meta data along with the file. This is used for
identification of the files. The meta data is viewed in the Archive Inspector.
The names of the created files are determined by the settings in the Filename Template tab.
For further information about the Filename Template service, see Section 4.1.6.2.4, “Filename
Template Tab”.
Profile Name of the Archive profile to be used when determining the attributes of the
target files.
The default input type is bytearray. For information about the agent behavior
with the MultiForwardingUDR input type, see Section 12.1.3.4, “MultiForwardingUDR Input”.
Compression Compression type of the target files. Determines if the agent will compress the
files before storage or not.
Agent Directory Name Possibility to select one or more MIM resources to be used when naming a
subdirectory in which the archived files will be stored.
If more than one MIM resource is selected, the values making up the directory
name will automatically be separated with a dot.
The names of the created files are determined by the settings in the Filename Template tab.
For further information about the Filename Template service, see Section 4.1.6.2.4, “Filename
Template Tab”.
internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};
Every received MultiForwardingUDR is written to the file that its target filename specifies. The
output filename and path are specified by the fntSpecification field. When the files are received,
they are written to temporary files in the DR_TMP_DIR directory, situated in the root output folder.
The files are moved to their final destination when an end batch message is received. A runtime error
will occur if any of the fields has a null value or if the path is invalid on the target file system.
A UDR of the type MultiForwardingUDR whose target filename is not identical to its predecessor's is
saved in a new output file.
After a target filename that is not identical to its predecessor is saved, you cannot use the first
filename again. For example: saving filename B after saving filename A prevents you from using
A again. Instead, you should first save all the A filenames, then all the B filenames, and so forth.
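The filename ordering constraint above can be illustrated with a small Python sketch (the function and names are invented for the example; this is not MediationZone® code): a new output file is started whenever the target filename differs from the previous UDR's, and reusing a filename after switching away from it is an error.

```python
def write_batches(udrs, out):
    """udrs is a list of (target_filename, content) pairs; out maps
    filename -> accumulated bytes (illustrative sketch)."""
    previous = None
    created = set()
    for filename, content in udrs:
        if filename != previous:
            if filename in created:
                # Filename was already closed by an earlier switch
                raise RuntimeError("filename reused after a switch: " + filename)
            created.add(filename)
            out[filename] = b""
            previous = filename
        out[filename] += content
    return out
```

Routing A, A, B succeeds, while routing A, B, A fails on the third UDR.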
Example 87.
This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of type MultiForwardingUDR. In this example, the data is buffered in
the consume block. This makes it possible to route a complete batch to multiple files from the
drain block. Note that the Execution Context needs available memory to buffer the whole file.
import ultra.FNT;
bytearray data;
MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent) {
// Build the target path (directory and filename) for the UDR
FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir);
fntAddDirDelimiter(fntudr);
fntAddString(fntudr, file);
MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;
return multiForwardingUDR;
}
beginBatch {
data = baCreate(0);
}
consume {
data = baAppend(data, input);
}
drain {
//Send MultiForwardingUDRs to the forwarding agent
udrRoute(createMultiForwardingUDR("dir1", "file1", data));
udrRoute(createMultiForwardingUDR("dir2", "file2", data));
}
Example 88.
This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of type MultiForwardingUDRs.
import ultra.FNT;
MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent){
// Build the target path (directory and filename) for the UDR
FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir);
fntAddDirDelimiter(fntudr);
fntAddString(fntudr, file);
MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;
return multiForwardingUDR;
}
consume {
bytearray file1Content;
strToBA (file1Content, "file nr 1 content");
bytearray file2Content;
strToBA (file2Content, "file nr 2 content");
// Route two MultiForwardingUDRs with different target directories
udrRoute(createMultiForwardingUDR("dir1", "file1", file1Content));
udrRoute(createMultiForwardingUDR("dir2", "file2", file2Content));
}
The Analysis agent above sends two MultiForwardingUDRs to the forwarding agent.
Two files with different contents will be placed in two separate subfolders in the root directory.
Note that this section is only valid for the Global Archiving agent.
To locate files in the archive, the Archive Inspector is used. The access group user is permitted to
launch and purge these files. Once a file is located, it can be viewed or copied as a regular UNIX file,
using regular UNIX commands.
It is not recommended to alter or remove a file from the archive using UNIX commands. If altering
is required, make a copy of the file. If removal is required, use the Archive Inspector.
To open the Archive Inspector, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select Archive Inspector from the menu.
Initially, the window is empty and must be populated with data using the corresponding Search Archive
dialog. For further information, see Section 12.1.4.1, “Searching the Archive”. Each row represents
information about a data batch (file).
Edit menu Search... Displays the Search Archive dialog where search criteria may be defined
to limit the entries in the list. For further information about setting the filter
for this dialog, see Section 12.1.4.1, “Searching the Archive”.
Edit menu Delete... If Keep Files is disabled in the Archive profile, all selected files are removed
from the archive, including their corresponding references in the database.
If Keep Files is enabled, only the references are removed while the files
shown in Archive Inspector are still kept on disk.
View menu View data... Shows the raw data content for the selected file.
Show Archives If the query resulted in a match larger than the Archive page size, this list
toggles between the result sets.
ID Holds the index of the rows in the archive.
Workflow Name of the archiving workflow.
Agent Name of the archiving agent.
Filename Full pathname to the file as stored on disk.
Timestamp Time when the entry was inserted in the archive.
MIM Values When double-clicking a cell in the MIM Values column, a dialog is displayed
showing the values of the adherent MIM resources. Adherent MIM resources are
defined as Logged MIM Data in the Archiving agent configuration window.
For further information, see Figure 372, “MIM Resources Dialog”.
The Search Archive dialog is displayed when Search... is selected from the Edit menu.
Profile Select the profile that corresponds to the data of interest. If no profile is selected, archive
entries for all profiles will be shown.
Workflow Option to narrow the search with respect to which workflow archived the file.
Agent Option to narrow the search with respect to which agent archived the file.
Period Option to search for data archived during a certain period.
For further information, see also Section 4.1.1.4, “System Task Workflows”.
12.1.6.1. Emits
The agents do not emit any commands.
12.1.6.2. Retrieves
The agents retrieve commands from other agents and, based on them, generate a state change of the
file currently being processed.
Command Description
Begin Batch When a Begin Batch message is received, if the temporary directory DR_TMP_DIR
is not already in the base directory, the agent creates it. Then, the agent creates a target
file in the temporary directory.
End Batch When an End Batch message is received, the target file in DR_TMP_DIR is closed.
Finally, the file is moved from the temporary directory to the target directory.
Cancel Batch If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.
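The Begin/End/Cancel Batch handling above can be sketched as follows. This is an illustrative Python model with invented class and method names; only the DR_TMP_DIR temporary directory and the move-on-End-Batch behavior come from the text.

```python
import os
import shutil
import tempfile

class BatchFileWriter:
    """Sketch of the batch lifecycle: target files are written under
    DR_TMP_DIR and moved to the base directory when the batch completes."""

    def __init__(self, base_dir):
        self.base_dir = base_dir
        self.tmp_dir = os.path.join(base_dir, "DR_TMP_DIR")
        self.current = None

    def begin_batch(self, filename):
        os.makedirs(self.tmp_dir, exist_ok=True)   # create DR_TMP_DIR if missing
        self.current = os.path.join(self.tmp_dir, filename)
        self.fh = open(self.current, "wb")

    def end_batch(self):
        self.fh.close()
        target = os.path.join(self.base_dir, os.path.basename(self.current))
        shutil.move(self.current, target)          # move to the target directory
        self.current = None
        return target

    def cancel_batch(self):
        self.fh.close()
        os.remove(self.current)                    # discard the partial file
        self.current = None
```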
12.1.7. Introspection
Introspection describes the type of data an agent expects and delivers.
12.1.8.1. Publishes
The agents publish the following MIMs:
This MIM parameter is of the string type and is defined as a batch MIM
context type.
12.1.8.2. Accesses
Various MIM resources are accessed depending on the MIM value selection in the Agent Directory
Name and Logged MIM Data lists. The MIM values are read at End Batch.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
A message event is reported along with target filename each time a file is archived.
13.1.1.1. Prerequisites
The reader of this information should be familiar with:
13.1.2.1. Overview
The Radius accounting data contains information about the last client that logged in, the log-in
time, the duration of the session, and so on. In addition to collecting such data, the Radius agent may act as an
extension to the NAS, creating accounting data itself. For instance, when receiving a packet containing
a login request, it may reply with an accept or reject packet. The reply logic is handled through APL
code (an Analysis or Aggregation node).
Note the absence of a Decoder. For Realtime workflows, field decoding is handled via APL commands.
The Radius format is included when a Radius bundle is committed into the system. The format
contains record identification information on the first level (code, identifier, length, authenticator and
attributes) to be used by the Radius agent. Hence, the agent is responsible for recognizing the type of
data, while the Analysis node does the actual decoding of the contents (the attributes). A UFDL format
needs to be defined for this purpose.
When activated, the agent will bind to the configured port and wait for incoming UDP packets from
NASes. Each received UDP packet will be converted to a UDR and forwarded into the workflow. If fields
are missing in a UDP packet, the agent will still create a UDR, filling in all found fields. If the data in the
UDP packet is corrupt, or if data arrives from a host not present in the configuration window of the node, a
message will be sent to the System Log and the data will be discarded.
Since NASes do not offer the possibility of requesting historic data, the agent will lose all data that is
delivered from the NAS while the agent is not executing.
13.1.2.2. Configuration
The Radius agent configuration window is displayed when you double-click the agent in a workflow,
or right-click it and select Configuration...
Figure 375. The Radius Server Agent Configuration View - NAS Tab
In the NAS tab, all NASes the agent will collect information from are specified.
IP Address The IP address that the NAS, sending packets, is located on.
Secret Key Key used for authentication of a received packet. This key must be identical to the one
defined in the NAS.
Figure 376. The Radius Server Agent Configuration View - Miscellaneous Tab
Port The port number where the Radius agent will listen for packets from the NAS(es).
Two Radius agents may not be configured to listen on the same port, on
the same host.
PDU Lifetime (millisec) If set to a value larger than 0 (zero), duplicate check is activated. The buffer
saved for comparison is the packets collected during the set time frame.
Skip MD5 Calculation If enabled, the check for MD5 signatures is excluded. This is necessary if the
Radius client does not send MD5 signatures along with the packets, in which
case they would be discarded by the Radius agent.
Duplicate Check Checking for duplicate packets can be made based on:
• Radius Standard - the identifier within the packet (byte number 2).
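A duplicate check governed by a PDU lifetime, as described above, can be sketched as follows. This is a hypothetical Python model (invented names); the real agent compares against packets buffered during the configured time frame.

```python
import time

class DuplicateCheck:
    """Sketch of duplicate detection with a PDU lifetime: packets seen
    within the lifetime window are flagged as duplicates by identifier."""

    def __init__(self, lifetime_ms):
        self.lifetime = lifetime_ms / 1000.0
        self.seen = {}        # identifier -> arrival time

    def is_duplicate(self, identifier):
        now = time.time()
        # Drop entries older than the configured lifetime
        self.seen = {i: t for i, t in self.seen.items()
                     if now - t < self.lifetime}
        if identifier in self.seen:
            return True
        self.seen[identifier] = now
        return False
```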
13.1.2.3. Introspection
The agent emits and retrieves UDRs of the Radius UDR type. For further information, see Section 13.1.5,
“The Radius Format”.
13.1.2.4.1. Publishes
13.1.2.4.2. Accesses
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
Indicates that the calculated sum, based on the secret specified for the agent, is not equal to the secret
in the incoming packet.
For each strategy you can select whether you want to reject 25, 50, or 100% of the requests.
13.1.3.1. Overview
You include the Radius_Client agent in a workflow in order to transmit requests from the workflow.
During runtime a Radius UDR that includes a request field that is assigned with a value is routed into
the Radius_Client agent. When the answer field is assigned, the Radius UDR is routed back.
When combined with the Radius_Server agent, MediationZone® operates as a Radius Proxy.
13.1.3.2. Configuration
The Radius agent configuration window is displayed when you double-click the agent in a workflow,
or right-click it and select Configuration...
Figure 377. The Radius Client Agent Configuration View - Radius Servers Tab
The Radius Servers tab enables you to configure an IP address and a secret key for every RADIUS
server that the agent communicates with.
1. In the configuration for the Radius Client agent, click on the Add button.
2. Enter the IP address and secret key for the server in the IP Address and Secret Key fields.
3. If you want to enable throttling for the host, select the Enable Throttling check box, and then enter
the maximum number of UDRs (requests) per second you want the agent to forward in the
Throughput Threshold (UDR/s) field.
Note! Ensure that you handle the throttled UDRs in your APL code in the workflow in order
not to lose any UDRs.
4. Click the Add button to add the server to the table of Radius Servers, and
then click the Close button to close the dialog when you are finished adding hosts.
Figure 379. The Radius Client Agent Configuration View - Miscellaneous Tab
Host Enter either the IP address or the hostname through which the agent will bind
with the Radius servers.
Two Radius agents should not be configured to listen through the same
port, on the same host.
Source Port Enter the local port through which the agent will bind with the Radius servers.
Additional Ports In case you want to use a range of ports, enter the number of consecutive ports
in this field.
For example, if you enter 2000 in the Source Port field and 10 in the Additional
Ports field, the ports 2000-2010 will be used.
Retry Count The maximum number of attempts to send. An attempt to send occurs if a
response is not received within the Retry Interval time.
Retry Interval Enter the time interval, in seconds, between repeated attempts to send.
Skip MD5 Calculation Check to exclude the use of the MD5 hashing algorithm.
Identifier Calculation Select this check box if you want an identifier to be calculated and appended
to the requests automatically. This identifier will be used for correlating requests
with answers. As the maximum number of pending requests to a specific port
is 256, the identifier range will be 0-255.
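The identifier handling described above can be sketched as follows. This is an illustrative Python model with invented names: at most 256 requests can be pending per port, and identifiers 0-255 are reused once their answers arrive.

```python
class IdentifierAllocator:
    """Sketch of per-port identifier allocation for request/answer correlation."""

    def __init__(self):
        self.next_id = 0
        self.pending = set()

    def allocate(self):
        if len(self.pending) >= 256:
            # All 256 identifiers are tied up by pending requests
            raise RuntimeError("no free identifiers on this port")
        while self.next_id in self.pending:
            self.next_id = (self.next_id + 1) % 256
        ident = self.next_id
        self.pending.add(ident)
        self.next_id = (self.next_id + 1) % 256
        return ident

    def release(self, ident):
        # Called when the answer for this identifier has been received
        self.pending.discard(ident)
```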
13.1.3.3. Introspection
The agent emits and retrieves UDRs of the Radius UDR type. For further information see Section 13.1.5,
“The Radius Format”.
For a list of general MediationZone® MIM parameters, see Section 2.2.10, “Meta Information Model”.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
Indicates that the calculated sum, based on the secret specified for the agent, is not equal to the secret
in the incoming packet.
Indicates that the maximum number of retries has been reached and no more attempts will be made
to resend the message.
Indicates that the request destination that is set in the workflow has not been configured for the
agent.
Indicates that the incoming response code is not the expected one.
Indicates that the maximum number of repeated attempts has been reached and that no more attempts
will be made to send the message.
Included with the MediationZone® Radius bundle is a general Radius format, containing all possible
record types valid for Radius. The Radius agent will use this format for recognizing the type of data.
The actual decoding of the contents (requestMessage), and the encoding of the reply
(responseMessage), must be handled through a user defined format.
throttled (boolean) This flag indicates whether the UDR has been throttled or not. Default
is false; if the UDR has been throttled, it will be set to true.
13.1.6. An Example
A Radius agent can act as an extension to a NAS and to illustrate such a scenario an example is intro-
duced. In the example an Analysis agent is used to validate the content of the received UDP packet,
and depending on the outcome a reply is sent back (also in the form of a UDP packet). Valid UDRs
are routed to the subsequent agent, while invalid UDRs are deleted. Schematically, the workflow will
perform the following:
1. Decode the data into a UDR. Discard and continue with the next packet upon failure.
3. If the user was found in the table, send the UDR to the next agent and a reply UDR of type Ac-
cess_Accept_Int back to the Radius agent. If the user was not found, delete the UDR and
send a reply UDR of type Access_Reject_Int to the Radius agent. Both reply UDRs must
have the Identifier field updated first.
To keep the example as simple as possible, valid records are not processed. Usually, no reply
is sent back until the UDRs are fully validated and manipulated. The focus of the example is
the MediationZone® specific issues, such as decoding, validation and reply handling.
The Radius agent will forward all received packets. The actual discarding and validation of the data is
handled in the Analysis agent.
The full Ultra Format Definition for the example is not shown, since it is beyond the scope of this
manual to handle packet content or UFDL syntax.
The format definition is here stored in the Default directory with the name extendedRadius.
internal Vendor_Specific_Int {
int Type;
int Length;
int VendorID;
int SubAttrID;
int VendorLength;
int InfoCode;
string Data;
};
1. Each time the workflow is activated, a subscriber table is read into memory. To keep the example
simple, the table content is assumed to be static. For a real implementation, it is recommended to
re-read the table on a regular basis.
3. Perform a lookup against the subscriber table, and create a reply of type Default.extendedRadius.Access_Accept_Int
or Default.extendedRadius.Access_Reject_Int,
depending on whether the subscriber was found in the table.
table tmp_tab;
initialize {
tmp_tab = tableCreate("select SUBSCRIBER from VALID_SUBSCRIBERS");
}
consume {
list<drudr> reqList = listCreate(drudr);
radius.Radius r = (radius.Radius) input;
string err = udrDecode("Radius.Request_Dec",
reqList, r.requestMessage, true);
// Pick up the decoded request (assumes one request per packet)
drudr elem = listGet(reqList, 0);
if (instanceOf(elem, Default.extendedRadius.Access_Request_Int)) {
Default.extendedRadius.Access_Request_Int req =
(Default.extendedRadius.Access_Request_Int) elem;
table rowFound = tableLookup( tmp_tab,
"SUBSCRIBER", "=", req.User_Name );
if (tableRowCount(rowFound) > 0) {
Default.extendedRadius.Access_Accept_Int resp =
udrCreate(Default.extendedRadius.Access_Accept_Int);
resp.Identifier = req.Identifier;
r.responseMessage =
udrEncode("Default.extendedRadius.Response_Enc", resp);
udrRoute( r );
} else {
Default.extendedRadius.Access_Reject_Int resp =
udrCreate(Default.extendedRadius.Access_Reject_Int);
resp.Identifier = req.Identifier;
r.responseMessage =
udrEncode("Default.extendedRadius.Response_Enc", resp);
udrRoute( r, "Response" );
}
} else {
debug("Invalid request type");
}
}
13.2.1.1. Prerequisites
The user of this information should be familiar with:
13.2.2. Overview
The Diameter agents enable you to configure MediationZone® to act as a Diameter server, a Diameter
client, or as a Diameter Proxy agent, by applying the Diameter Base Protocol.
According to RFC 6733, the Diameter Base Protocol alone does not offer much functionality. The
Diameter Base Protocol should be regarded as a standard transport and management interface for AAA
applications that provide a well-defined functionality subset. To increase functionality, predefined
AAA applications are added. An AAA application usually consists of new command code and AVP
definitions that map the semantics of the application. One example of a predefined application is the
Diameter Credit-Control. For further information, see RFC 4006.
• Diameter_Stack
• Diameter_Request
13.2.2.1.1.1. Diameter_Stack
The MediationZone® Diameter_Stack agent manages transport, decoding, and encoding of Diameter
input messages.
In order for a workflow to act as a Diameter server, you must use the Diameter_Stack agent. The
Diameter_Stack agent communicates with the workflow by using the UDR type called RequestCycleUDR.
When a request message arrives at the stack, the message is decoded, validated, and
turned into a UDR of the pre-generated UDR type, as specified in Section 13.2.3.1.2, “Commands
Tab”. This UDR is inserted into the RequestCycleUDR's Request field and routed through the
workflow. By using the APL function udrCreate, the Answer field is populated with an appropriate
answer message, and then the RequestCycleUDR is routed back to the stack agent for transmission
of the answer.
It is possible to use multiple Diameter_Stack agents in a workflow if that is required in the business
logic. However, for the best possible performance, it is recommended to use one Diameter_Stack agent
per workflow.
Note!
• AVPs (Attribute-Value Pairs) from the Diameter Base Protocol are static, unchangeable, and
always available to MediationZone®.
13.2.2.1.1.2. Diameter_Request
In order for a workflow to act as a Diameter client, you must use both the Diameter_Request agent
and the Diameter_Stack agent. The Diameter_Request agent simply references a Diameter_Stack agent
that is suitable for the outgoing route.
A RequestCycleUDR with a populated Request field is routed into the Diameter_Request agent.
This agent then uses the selected stack to send the message. A RequestCycleUDR containing the
original Request field and a populated Answer field is then routed back into the workflow.
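A minimal APL sketch of the client side follows. The request UDR type (Diameter.MyApp.My_Request), the RequestCycleUDR type path, and the route name are illustrative assumptions; adapt them to your Diameter Application profile and workflow configuration.

```apl
consume {
    // Create the outgoing request message UDR (hypothetical type
    // generated from a Diameter Application profile).
    Diameter.MyApp.My_Request req = udrCreate(Diameter.MyApp.My_Request);
    // ... populate the AVP fields of the request here ...

    // Wrap the request in a RequestCycleUDR and route it to the
    // Diameter_Request agent. The answer comes back on the same UDR.
    Diameter.RequestCycleUDR cycle = udrCreate(Diameter.RequestCycleUDR);
    cycle.Request = req;
    udrRoute(cycle, "ToRequestAgent");
}
```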
You can view the UDR types that are created by default for the Diameter agents (based on RFC 6733)
in the Diameter folder of the UDR Internal Format Browser. To open the browser, right-click in an
APL code area and select UDR Assistance.
Each Command or AVP that is defined in the Diameter Application profile configuration will result
in a UDR type after it has been saved. Note that the base commands and AVPs in RFC 6733 are
predefined and will be included automatically.
Note! The BaseUDR and BaseCommand UDR types are internal and shall not be used in APL
code.
13.2.2.1.2.1. RequestCycleUDR
The Diameter_stack agent and Diameter_Request agent communicate with the workflow by using the
UDR type RequestCycleUDR. For more information, see Section 13.2.2.1.1.1, “Diameter_Stack” and
Section 13.2.2.1.1.2, “Diameter_Request”.
Field Description
Answer (BaseCommand (Diameter)): This field is populated with an "answer message UDR" before
the workflow routes the RequestCycleUDR back to the Diameter_Stack agent. For the
Diameter_Request agent it works the reverse way: after the agent has populated the Answer field,
the RequestCycleUDR is routed to the workflow.
AnswerReceivedTime (long): A timestamp, in nanoseconds, indicating when the client receives the
answer.
AnswerSentTime (long): A timestamp, in nanoseconds, indicating when the server sends the answer.
Context (any): An internal working field that can be used in the workflow configuration to keep
track of and use internal workflow information related to the request, when processing the answer.
An example for a proxy workflow including TCP/IP and Diameter agents: when sending the
RequestCycleUDR to the Diameter_Request agent, the input TCPIPUDR is saved in the Context
field. When the response is received from the Diameter agent, the TCPIPUDR can be read from the
Context field and used to send the response back to the TCP/IP agent.
ExcludePeers (list<string>): You can populate this field with a list of peers, identified by their
hostnames. When Round Robin is selected as the Realm Routing Strategy, these peers will be
excluded from lookups in the Realm Routing Table.
Request (BaseCommand (Diameter)): The Diameter_Stack agent populates the Request field with
the "request message UDR" before routing the RequestCycleUDR to the workflow. For the
Diameter_Request agent it works the reverse way: request messages are transmitted from the
workflow to the Diameter_Stack agent by using this field.
RequestReceivedTime (long): A timestamp, in nanoseconds, indicating when the server receives the
request.
RequestSentTime (long): A timestamp, in nanoseconds, indicating when the client sends the request.
Session_Id (string): A Diameter Session-Id value that is read from the Request field in the Diameter
message. This is a read-only field.
Throttled (boolean): This flag indicates whether the UDR has been throttled or not. The default is
false; if the UDR has been throttled, it is set to true.
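As a usage sketch, the timestamp fields can be combined in APL to measure the round-trip time of a request on the client side. The field names are taken from the table above; the route name is illustrative.

```apl
consume {
    // input: a RequestCycleUDR routed back from the Diameter_Request
    // agent with a populated Answer field. Both timestamps are in
    // nanoseconds, as described in the field table.
    long rttNanos = input.AnswerReceivedTime - input.RequestSentTime;
    debug("Round-trip time in ms:");
    debug(rttNanos / 1000000);
    udrRoute(input, "Processed");
}
```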
13.2.2.1.2.2. WrappedMZObject
The WrappedMZObject UDR does not map to a Diameter message, but can be used to send data between
workflows. WrappedMZObject is added as a field in the RequestCycleUDR (request or answer).
Note! Since this is not a normal Diameter message, the receiver has to be another
MediationZone® workflow.
Field Description
Data (any): The data to send to another workflow.
Destination_Host (string): The server host; the destination of the message.
Destination_Realm (string): The realm to which the message is routed.
Is_Request (boolean): Used to indicate if the message is a request or not.
Is_Unidirectional (boolean): To be used when no reply is expected.
13.2.2.1.3. Special Error Handling
The Diameter_Stack agent produces three error answers with MediationZone® internal result codes.
• No suitable route
When there is no peer or realm in the Routing profile that matches the content of the AVPs that are
used for routing, a message with the error code 4997 is returned. The following AVPs are used for
routing:
• Destination-Host
• Destination-Realm
• Acct-Application-Id
• Auth-Application-Id
• Vendor-Specific-Application-Id
This error may also occur when all the peers of a realm are specified in the ExcludePeers field
of a RequestCycleUDR and Round Robin is the selected Realm Routing Strategy.
When no connection is established with the peer that is to receive the request, a message with the
error code 4998 is returned.
When a request is sent to a peer and no answer is received within a configurable timeout, a message
with the error code 4999 is returned.
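A hedged APL sketch of how a client workflow might branch on these internal result codes follows. The Result_Code field access on the answer UDR is an assumption; the actual field path depends on the UDR types generated from your Diameter Application profile, and the route names are illustrative.

```apl
consume {
    // input: a RequestCycleUDR returned from the stack with a populated
    // Answer field. The Result_Code access below is an assumption; adapt
    // it to the answer UDR type generated from your application profile.
    int rc = (int)input.Answer.Result_Code;
    if (rc == 4997 || rc == 4998 || rc == 4999) {
        // Internal error answer: no suitable route (4997), no
        // connection (4998), or answer timeout (4999).
        udrRoute(input, "Error");
    } else {
        udrRoute(input, "Success");
    }
}
```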
13.2.2.2. SCTP
The Diameter agents support Transmission Control Protocol (TCP) and Stream Control Transmission
Protocol (SCTP) as transport protocols.
Even though there are similarities between these protocols, SCTP provides some capabilities that TCP
lacks, including multistreaming and multihoming.
TCP transmits data in a single stream and guarantees that data will be delivered in sequence. If there
is data loss, or a sequencing error, delivery must be delayed until the lost data is retransmitted or an
out-of-sequence message is received. SCTP's multistreaming allows data to be delivered in multiple,
independent streams, so that if there is data loss in one stream, delivery is not affected for the other
streams.
The multihoming feature adds the redundancy benefits of having multiple network interfaces.
When a network interface of a TCP connection fails, the connection will time out, as TCP cannot
redirect data using an alternate network interface that is available on the host. Instead, failover to
another interface must be handled in the application layer.
The number of transmissions, timeouts and any other parameters that determine when the failover
should occur must be set in the SCTP software specific to your operating system.
On a system with SCTP installed you can bind multiple IP addresses to a hostname by editing the
hosts file. The location of this file is operating system specific but it can be found under /etc on
most Linux and Unix distributions.
Example 89.
127.0.0.1 localhost
192.168.1.111 server1
192.168.1.112 server1
When a Diameter_Stack agent in MediationZone® receives a connection request from a peer over
SCTP, it is not certain that its hostname will be resolved to the IP address of a particular network
interface. To ensure that a specific interface is used to set up the connection, you must specify the IP
address of the interface in the Primary Host text box in the Diameter_Stack agent. This can be useful
if the peer only uses a single static IP address to connect to the agent. Once the connection is established,
failover to an alternate interface is possible.
For further information about the Diameter_Stack agent, see Section 13.2.4, “Diameter_Stack Agent”.
TLS requires a keystore file that is generated by using the Java standard command keytool. For further
information about the keytool command, see the JDK product documentation.
Example 90.
1. To create a keystore:
Keytool prompts for required information such as identity details and password. Note that
the keystore password must be the same as the key password.
4. Enter the keystore path and the keystore password in the Diameter Stack configuration.
5. From the Peer Table, in the Diameter Routing Profile configuration, select the TCP/TLS
protocol for the peer with which you want to establish a secure connection.
You can control the handling of unrecognized certificates by setting the following property in either
the common.xml file or the executioncontext.xml file on the machine that the workflow
executes on:
mz.diameter.tls.accept_all
If the property is set to false (default), the Diameter_Stack agent does not accept any non-trusted
certificates. If it is set to true, the Diameter_Stack agent accepts any certificate.
In either case, any unrecognized certificate is logged in an entry in the System Log (in PEM
format).
Check the certificate. If you trust it, import it into the keystore by using the Java standard keytool
command. For further information, see the standard Java documentation.
The Diameter agents use the following profile configurations:
• Application Profile
• Routing Profile
The Diameter application profile is loaded when you start a workflow that depends on it. Changes to
the profile become effective when you restart the workflow.
To open the configuration, click the New Configuration button in the upper left part of the
MediationZone® Desktop window, and then select Diameter Application Profile from the menu.
From the main menu at the top of the configuration view, select Diameter to display available options
for import, export and AVPs.
The Diameter application profile enables you to import and export AVP and command specifications
in two supported formats:
1. From the Diameter menu, select Import ABNF Specifications. The Select a File to Import
dialog opens.
2. Select an ABNF file and click Open to import the ABNF file to your Diameter application profile
configuration.
Note! For further information about the ABNF file, see Section 13.2.7.1, “ABNF Specification
Syntax”.
If your ABNF file contains specifications that are already included in the Diameter profile, you are
prompted to overwrite, rename, or skip importing the file specification.
Example 91.
1. From the Diameter menu, select Import XML Specifications. The Select a File to Import
dialog opens.
2. Select an XML file and click Open. The XML file is imported to your Diameter application profile
configuration.
Note!
For further information about the XML file, see Section 13.2.7.2, “XML Specification Syntax”.
For further information about handling specifications (XML or ABNF) that are already included
in the application profile, see Section 13.2.3.1.1.1.1, “Handling Duplicate Specification Files”.
1. From the Diameter menu, select Export ABNF Specifications. The Select a Target File for
Export dialog opens.
2. Select an ABNF file and click Save. The ABNF file (both AVPs and commands) is saved as an
export file.
Note! For further information about the ABNF file, see Section 13.2.7.1, “ABNF Specification
Syntax”.
1. From the Diameter menu, select Export XML Specifications. The Select a Target File for
Export dialog opens.
2. Select an XML file and click Save. The XML file (both AVPs and commands) is saved as an export
file.
Note! For further information about the XML file, see Section 13.2.7.2, “XML Specification
Syntax”.
From the Diameter menu, select Clear AVP Specifications and click OK. All the AVP specifications
are deleted.
From the Diameter menu, select Clear Command Specifications and click OK. All the Command
specifications are deleted.
The commands that you use in the Diameter application profile are predefined command sets of
specific solutions.
The Commands tab in the Diameter application profile configuration enables you to create and edit
command sets that are customized according to your needs.
The Add Diameter Command Specification dialog is displayed when clicking the Add icon in the
Commands tab.
Figure 390. The Diameter Commands Tab - Add Diameter Command Specification
Select Error to mark that the message contains a protocol error (the e-bit is set in
the Diameter message header), so that the message will not conform to the ABNF
described for this command. This flag is typically used for test purposes.
If you want to send an error message answer from APL, it is recommended that
you use the UDR Diameter.Base.Error_Answer_Message.
Auto-Populate: Click this button to automatically fill out the AVP Layout table with data,
based on your Flags selection. The Category of AVP data is set to Required.
Note! To manually modify the data in the table cells, double-click a cell.
Selecting the Request and Proxiable check boxes will auto-populate the AVP
Layout table with the following AVPs:
• Origin-Realm
• Origin-Host
• Destination-Realm
Selecting Proxiable only will auto-populate the AVP Layout table with the following AVPs:
• Origin-Realm
• Origin-Host
• Result-Code
Selecting Error will prevent the AVP Layout table from being auto-populated.
This table includes a list of all the AVPs in a specific command. From the table you can add, edit, and
remove AVPs.
To manually modify the data in the table cells, double-click a cell. Either a drop-down list button
will appear, enabling you to select different content, or the cell will become editable.
• A Required AVP must be included, but may appear anywhere in the message.
UDR types are generated for the Diameter Application profile based on the command
configuration. When Max is set to 2 or <unbounded> the data type of the UDR field
for the AVP will be list<data type>.
AVPs carry the data payload in all Diameter messages. While MediationZone® recognizes all the
AVPs that are defined in the Diameter Base Protocol, it also recognizes your customized AVPs. In the
AVPs tab you can define your own customized AVPs.
Auto-Populate Click on this button to enter missing table entries in all the user defined AVPs of
a command in the table.
The Diameter AVP types map to the following UDR field types:
Address - ipaddress
DiameterIdentity - string
Enumerated - int
Grouped - list<type>
Float32 - float
Float64 - double
IPFilterRule - IPFilterRuleUDR (Diameter)
OctetString - bytearray
Signed32 - int
Signed64 - long
Time - date
Unsigned32 - int
Unsigned64 - long
UTF8String - string
Vendor: The numeric Vendor ID of the AVP. The vendor ID of all the IETF standard
Diameter applications is 0 (zero).
Show Base AVPs: To display all predefined AVP types, check Show Base AVPs. These are the
AVPs specified in the Diameter Base Protocol (RFC 6733).
To open the Add Diameter AVP Specification dialog, click on the Add icon at the bottom of the
AVP tab.
Mandatory ('M') Bit: The M-bit allows the sender to indicate to the receiver whether or not
understanding the semantics of an AVP and its content is mandatory. If the M-bit is set by the
sender and the receiver does not understand the AVP or the values carried within
that AVP, then a failure is generated. For further information about the M-bit, see
the Diameter Base Protocol (RFC 6733).
The following applies for incoming and outgoing messages that contain the
configured AVP:
You can change the value of the M-bit from APL if Mandatory ('M') Bit is set
to MAY or SHOULD.
Protection ('P') Bit: The P-bit is reserved for future use of end-to-end security.
Enumeration/Group Properties: This table is accessible for editing only when AVP Type is
configured as Enumerated or as Grouped. The table enables you to add, edit, or remove AVPs or
enumeration values.
For further information about the table's columns and entries, see
Section 13.2.3.1.2.1, “To Add a Diameter Command Specification”.
To open the Edit Diameter AVP Specification dialog, click on the Edit icon at the bottom of the
AVPs tab.
The Edit Diameter AVP view is identical to the Add Diameter AVP Specification view. The same
description applies for editing an AVP specification.
The identifiers in this tab define the advertised applications for the capabilities handshake. They are
used whenever the Diameter_Stack agent initiates or responds to a new transport connection, in order
to negotiate the compatible applications for the link.
For further information about Authentication and Accounting Applications, see Diameter Base Protocol
(RFC 6733).
Auto-Populate: Click this button to add Application IDs, that are used in any of the commands, to
the Application ID table.
In the Vendor Specific Applications table, available Vendor IDs are extracted from
the AVPs tab into the Vendor ID column.
Default Outgoing 'M' Bit Flag Rule - Set to 1 When MAY Is Selected: When this check box is
selected and Mandatory ('M') Bit is set to MAY in the AVPs tab, the M-bit will be set to 1
in outgoing messages.
The Diameter routing profile is loaded when you start a workflow that depends on it. Changes to the
profile become effective when you restart the workflow. It is also possible to make changes effective
while a workflow is running. For more information about this, see Section 13.2.3.2.3, “To Dynamically
Update the Diameter Routing Profile”.
To define a routing profile, click on the New Configuration button in the upper left part of the
MediationZone® Desktop window, and then select Diameter Routing Profile in the menu.
A Diameter_Stack agent that uses the routing profile maintains transport connections with all the hosts
that are defined in the Peer table list. Connections and handshakes of hosts that are not in this list are
rejected with the appropriate protocol errors.
Note! MediationZone® will actively try to establish connections to any hosts that are included
in this list, unless the Do Not Create Outgoing Connections option is checked in the
Diameter_Stack agent.
Note! The content of the Origin-Host AVP in the answer commands from the
specified peer should be identical to this value. If the values do not match, the
MIM values published by the Diameter_Stack agent that contain counters are
not updated correctly. This may occur, for instance, if you have specified a
hostname in this text box but the Origin-Host AVP contains an IP address. It is
recommended that you consistently use either IP addresses or hostnames when
configuring the Diameter profiles and agents.
Port The port to connect to when initiating transport connections with a peer.
• TCP
• TCP/TLS
• SCTP
Note! SCTP must be installed on every EC host that uses the SCTP protocol.
For installation instructions, see your operating system documentation.
Throughput Threshold: If throttling has been enabled for the peer, this text box shows the configured
threshold at which transmissions of request UDRs will be throttled. Throttled UDRs
will be routed back into the workflow.
For example: 1.000 (which means a maximum of 1.000 UDRs/second will be transmitted).
Note! Throttling will determine if and how the workflow will limit the number
of requests and UDRs sent out from the workflow. For information regarding
how to configure the Diameter agent to reject incoming requests or UDRs to
the workflow, see Section 13.2.4.1.2, “Diameter Too Busy Tab”.
1. In the Diameter Routing Profile, click on the Add button beneath the Peer Table.
2. Enter the hostname and port for the host in the Hostname and Port text boxes.
4. If you want to enable throttling for the peer, select the Enable Throttling check box, and then enter
the maximum number of request UDRs per second you want the Diameter_Stack agent to transmit
to the peer in the Throughput Threshold (UDR/s) text box.
Note! Ensure that you handle the throttled UDRs in your APL code in the workflow in order
not to lose any UDRs.
5. Click on the Add button to add the host to the Peer Table, and then click on the Close
button to close the dialog when you are finished adding hosts.
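The note above about handling throttled UDRs can be addressed in APL, for example by branching on the Throttled flag of the RequestCycleUDR. This is a minimal sketch; the route names are illustrative.

```apl
consume {
    // input: a RequestCycleUDR routed back into the workflow.
    if (input.Throttled) {
        // The request was throttled and not transmitted; route it for
        // retry or alternative handling so that no UDRs are lost.
        udrRoute(input, "Retry");
    } else {
        udrRoute(input, "Sent");
    }
}
```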
Realm-based routing is performed when the Destination-Host AVP is not set in a Diameter
message. All realm-based routing is performed based on lookups in the Realm Routing Table.
When the lookup matches more than one set of keys, the first result from the lookup will be used for
routing. For this reason, the order of the rows in the Realm Routing Table must be considered. You
can control the order of the rows by using the arrow buttons. Clicking on the table columns to change
the displayed sort order does not have any effect on the actual order of the rows in the Realm Routing
Table.
Realm Routing Strategy: Diameter requests are routed to peers in the realms in accordance with the
selected Realm Routing Strategy. The following settings are available:
• Failover: For each realm, Diameter requests are routed to the first specified peer
(primary) in the Hostnames cell, or the first host resolved by a DNS query. If the
connection to the first peer fails, requests to this realm are routed to the next peer
(secondary) in the cell, or the next host resolved by a DNS query.
• RoundRobin: Diameter requests are evenly distributed to all the specified peers in
the Hostnames cell, or peers resolved by DNS queries. If the connection to a peer
fails, the requests are distributed to the remaining hosts. This also applies when
UDRs are throttled due to the settings in the Peer Table.
The connections to the peers are monitored through the standard Diameter watchdog
as described in RFC 6733 and RFC 3539. Possible states of the connection are: OKAY,
SUSPECT, DOWN, REOPEN, INITIAL.
The table below contains examples of how Diameter requests are routed to the peers
of a realm, with the RoundRobin strategy, depending on the peer connection state:
Diameter requests are routed to peers with status REOPEN and SUSPECT as a last
resort, i.e. when there are no peers with status INITIAL or OKAY.
Diameter requests are not routed to peers that are specified in the ExcludePeers
field of a RequestCycleUDR. For more information about the RequestCycleUDR,
see Section 13.2.2.1.2.1, “RequestCycleUDR”.
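From APL, peers can be excluded from the Round Robin lookup by populating the ExcludePeers field before routing the RequestCycleUDR. A minimal sketch; the hostnames and the route name are illustrative.

```apl
consume {
    // Exclude two peers, identified by hostname, from lookups in the
    // Realm Routing Table (applies to the Round Robin strategy).
    input.ExcludePeers = listCreate(string,
        "peer1.example.com", "peer2.example.com");
    udrRoute(input, "ToRequestAgent");
}
```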
Enable Dynamic Peer Discovery: Select this check box when you want to use DNS queries (Dynamic
Peer Discovery) to find peer hosts in realms. The queried peer host information is buffered by the
Diameter_Stack agent according to the TTL (time to live) parameter in the DNS records. When
the TTL has expired, the agent will attempt to refresh the information. If the refresh
fails, the buffered information will be deleted.
When Enable Dynamic Peer Discovery is selected, DNS queries are performed at:
• Workflow start
Note!
• To make changes to this setting effective, you must restart the workflow(s).
• If the DNS service is unavailable (server available but service down) when
starting the workflow(s), the system log entry will indicate errors in realm
lookups. In order to resume lookups in DNS, you need to dynamically update
the routing table in the Diameter_Stack agent when the DNS is available again.
For information about how to dynamically update the routing table, see Sec-
tion 13.2.3.2.3, “To Dynamically Update the Diameter Routing Profile”.
For information about how to select DNS servers, see Section 13.2.3.2.2, “DNS Tab”.
Realm: The realm name (case sensitive). Realm is used as the primary key in the routing table
lookup. The Diameter_Stack agent compares this value with the Destination_Realm
AVP. If left empty, all the destination realms are valid for this route.
Applications: The applications that this route serves. This entry is used as a secondary key field in
the routing table lookup. If left empty, all the applications are valid for this route.
When Node Discovery is set to Dynamic, you should leave this field empty.
Node Discovery: The method of finding the peer hosts in the realm:
• Static - The peer hosts are specified in the Hostnames field of the Realm Routing
Table.
• Dynamic - The Diameter_Stack agent uses DNS queries (Dynamic Peer Discovery)
to find the peer hosts. These queries may resolve to multiple IP addresses or hostnames.
Note! Entries in the Realm Routing Table that have the Dynamic setting are
ignored (not matched), unless Enable Dynamic Peer Discovery is selected.
When a DNS server resolves a realm to peer hosts, it may return fully qualified DNS
domain names with a dot at the end. These trailing dots are removed by the
Diameter_Stack agent.
1. In the Diameter Routing Profile, click on the Add button beneath the Realm Routing Table.
3. If the realm serves specific applications, click on the Add button beneath the Applications list box
and specify the Application Id. Repeat this step for each application.
4. You should only perform this step if Node Discovery is set to Static and the peer hosts are to be
specified in the Realm Routing Table. Click on the Add button beneath the Hostname list box
and select a host from the drop-down list. Repeat this step for each host in the realm.
5. If you specified the peer hosts of the realm in the previous step, select Static from Node Discovery.
If you want to use Dynamic Peer Discovery instead, select Dynamic from this drop-down list.
6. Click on the Add button to add the realm to the Realm Routing Table, and then click
on the Close button to close the dialog when you are finished adding realms.
You can use the DNS tab to configure the DNS settings used for looking up peer hosts of realms.
For information about how to configure your DNS for Dynamic Peer Discovery, see the Diameter Base
Protocol (RFC 6733).
Note!
• To make changes to this tab effective, you must restart the workflow(s).
• Avoid configuring the same peer host in both DNS and the Peer Table; this may cause
duplicate instances of Diameter peers.
• The host- and realm names in the Diameter_Stack agent are case sensitive.
Retry Interval Time (ms): Enter the time (in milliseconds) that the Diameter_Stack agent must wait
before retrying a failed DNS connection.
Max Number Of Retries: Enter the maximum number of times that the Diameter_Stack agent should
retry to connect to the servers in the DNS Servers list before it gives up. When the
agent has attempted to connect to all servers (after an initial failed attempt),
it counts as one retry.
DNS Servers: Enter the hostnames or IP addresses of the DNS servers that can be queried.
The topmost available server will be used.
If the DNS Servers list is empty, the Diameter_Stack agent will use the file
/etc/resolv.conf on the Execution Context host to select the DNS
server. For information about how to configure resolv.conf, see your
operating system documentation.
You can refresh the routing table of a Diameter_Stack agent while a workflow is running. When the
agent refreshes the routing table, it reads the updated Peer Table, Realm Routing Table and Realm
Routing Strategy from the selected Diameter routing profile.
The setting Enable Dynamic Peer Discovery in the Routing tab and the settings in the DNS tab are
not read from the Diameter Routing table at refresh. To make changes to these settings effective, you
must restart the workflow(s).
The routing table can be refreshed from the Workflow Monitor or from the Command Line Tool.
1. In the Workflow Monitor, double-click the Diameter_Stack agent to open the Workflow Status
Agent configuration.
2. In the Command tab, select the Update Routing Table button to refresh the routing table.
When Round Robin is the selected Realm Routing Strategy, you can reset the selection cycle by
running the following command:
You can use the Diameter_Stack MIM value Realm Routing Table to read the realm routing
table of a Diameter_Stack agent from APL. The MIM value is of the map<string, map<string,
list<string>>> type and is defined as a global MIM context type.
The string values in the outer map contain the realm names (primary key). The string values of the
inner map contain the applications (secondary key). The lists in the map contain the hostnames of the
peers in the realm.
Asterisks (*) are used in the strings to denote unspecified realm name or unspecified applications.
The values in the inner and outer maps are sorted exactly as the Realm Routing Table of the selected
Diameter routing profile.
Assume that the following realm routing table is defined for a Diameter_Stack agent:
initialize {
//Note the space between the angle brackets!
map<string, map<string, list<string> > > realmTable =
(map<string, map<string, list<string> > >)
mimGet("Stack1", "Realm Routing Table");
//Check the size of the table
if (mapSize(realmTable) != 2)
abort("Realm table incorrect size");
//Check that realms are included
if (mapKeys(realmTable) != listCreate(string, "dr", "*"))
abort("Wrong realms");
//Get the inner map for realm name "dr"
map<string, list<string> > drMap = mapGet(realmTable, "dr");
//Get the inner map for realm name "*" (unspecified realm)
map<string, list<string> > starMap = mapGet(realmTable, "*");
//Any Application Id
debug(mapGet(drMap, "*"));
//Application Id 100
debug(mapGet(starMap, "100"));
//Any Application Id
debug(mapGet(starMap, "*"));
}
The spaces between the angle brackets in the example above are required. If missing,
the APL will fail to compile.
For more information about MIM values published by the Diameter_Stack agent, see Section 13.2.4.3,
“Meta Information Model”.
For further information about the agent's operation, see Section 13.2.2.1.1.1, “Diameter_Stack”.
13.2.4.1. Configuration
You open the Diameter_Stack agent configuration view from the workflow configuration by either
double-clicking the agent icon, or by right-clicking it and then selecting Configuration.
The General tab contains general Diameter settings that are needed for configuration of the agent.
Application Profile: Click Browse to select a predefined Application Profile. The profile contains
details about advertised applications, as well as supported AVPs and command
codes.
Note! SCTP must be installed on every EC host that uses the SCTP
protocol. For installation instructions, see your operating system docu-
mentation.
Diameter Identity: Select Hostname to manually enter the hostname (case sensitive) of this
Diameter agent. In case the Origin-Host AVP has been left unconfigured, the
Hostname value will be applied whenever a Diameter message is transmitted
from this agent.
If SCTP is configured as server protocol, all IP addresses that are resolved from
the Diameter Identity will be used as SCTP endpoints through multihoming.
Use DNS Hostname: If enabled, the Diameter Identity of the local agent is automatically set by
looking up the DNS hostname that is associated with the local IP address. If
there is more than one network interface, the agent aborts on startup.
Realm: Enter the Diameter realm (case sensitive) for this specific host. In case the
Origin-Realm AVP has been left unconfigured, the Realm value will be applied
in messages transmitted from this agent.
Listening Port: Enter the port through which the Diameter agent should listen for incoming
transport connections.
Primary Host: When using SCTP, optionally enter the IP address of the network interface that
will be used to establish a transport connection. If left unconfigured, any IP
address that can be resolved from the Hostname will be selected.
The Diameter_Stack receives, decodes, and forwards UDRs asynchronously. An internal queue in the
workflow engine acts as a backlog for the workflow. When the load of messages gets too heavy to
process, you can either use the configurations in the Diameter Too Busy tab, in order to respond to
callers, or configure the Supervision Service with actions to take.
Note! The configurations described in this section will determine if and how the Diameter agent
will reject incoming requests or UDRs to the workflow. For information regarding how to limit
the number of requests and UDRs sent out from the workflow, see Section 13.2.3.2, “Diameter
Routing Profile”.
The Diameter Too Busy tab enables you to configure the agent with instructions to respond to callers.
Figure 401. The Diameter Stack Agent - Diameter Too Busy Tab
Enable Diameter Too Busy: Select this check box to enable the agent to automatically respond with DIAMETER_TOO_BUSY when the workflow is overloaded.
Maximum Workflow Queue Size (%): Enter the upper limit of the internal queue size. When this limit is reached, the agent sends "Too Busy" responses.
Note! You can change this value during processing from the
Workflow Monitor.
Throughput Threshold (UDRs/s): The Throughput Threshold is another congestion control setting. With it, you can make the agent reject some of the incoming UDRs. When the load of requests per second exceeds the value of this property, some of the requests are rejected and the process sending the request receives a Diameter Too Busy response.
Note! You can change this value during processing from the
Workflow Monitor.
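Conceptually, the threshold is a per-second rate check: requests beyond the configured count in the current one-second window are answered with DIAMETER_TOO_BUSY (Result-Code 3004 in RFC 6733). The sketch below illustrates this idea only; the class name, method names, and the fixed-window scheme are assumptions, not the agent's actual implementation:

```python
import time

DIAMETER_TOO_BUSY = 3004  # Result-Code defined in RFC 6733

class ThroughputGate:
    """Illustrative fixed-window rate check, not the agent's real algorithm."""

    def __init__(self, max_udrs_per_second):
        self.max_udrs_per_second = max_udrs_per_second
        self.window_start = None
        self.count = 0

    def admit(self, now):
        """Return True to process the UDR, False to answer DIAMETER_TOO_BUSY."""
        if self.window_start is None or now - self.window_start >= 1.0:
            self.window_start = now   # a new one-second window begins
            self.count = 0
        self.count += 1
        return self.count <= self.max_udrs_per_second

# In a real loop one would pass a clock reading, e.g. gate.admit(time.monotonic()).
gate = ThroughputGate(max_udrs_per_second=3)
decisions = [gate.admit(now=0.1) for _ in range(5)]
# decisions == [True, True, True, False, False]: the surplus within the window is rejected
```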
Time Between Log Entries (s): This property tells the agent how often it should write messages to the system log while it is in congestion prevention mode.
If you want to reject certain messages when the load gets too heavy, you can use the Supervision Service.
With this service you can select one of the following overload protection strategies:
For each strategy, you can select whether to reject 25, 50, or 100% of the requests.
Diameter Answer Timeout (ms): Enter the period of time (in milliseconds) before an unanswered request is treated as an error, i.e. an Error Answer Message is returned. See Section 13.2.2.1.3, “Special Error Handling” for further information.
Timeout Resolution (ms): Enter the interval at which the Answer Timeout should be checked.
Enable Debug Events: Select this check box to enable debug mode, which is useful for testing purposes.
Enable Runtime Validation: Select this check box to enable runtime validation of the Diameter messages against the command and AVP definitions in the Diameter application profile. When runtime validation is selected, incoming messages that fail the validation are rejected by the Diameter_Stack agent and the appropriate result code is applied in an error answer message.
Connect Timeout (ms): Enter the timeout value for peer connection attempts. This setting is only applicable to TCP connections.
Connect Interval (ms): Enter the minimum time interval between connection attempts when routing messages from a workflow and the peer connection is not established. An interval timer is started at the first connection attempt; subsequent connection attempts to the same peer are then suppressed until the timer has expired. When realm-based routing is used, the connect interval is applied only if all configured peers in the realm are down.
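The suppression behaviour can be pictured as a per-peer timer: a new connection attempt is allowed only if the configured interval has passed since the previous attempt to that peer. This is an illustrative sketch; the class and method names are assumptions, not the agent's implementation:

```python
class ConnectAttemptSuppressor:
    """Illustrative sketch of per-peer connect-interval suppression."""

    def __init__(self, connect_interval_ms):
        self.interval = connect_interval_ms / 1000.0
        self.last_attempt = {}  # peer -> time of the last connection attempt

    def may_attempt(self, peer, now):
        """Return True if a connection attempt to `peer` is allowed at time `now`."""
        last = self.last_attempt.get(peer)
        if last is not None and now - last < self.interval:
            return False  # the interval timer is still running: suppress
        self.last_attempt[peer] = now  # the timer restarts with this attempt
        return True
```

Each peer has an independent timer, matching the per-peer wording above.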
13.2.4.2. Introspection
Introspection describes the type of data the agent expects and delivers.
13.2.4.3.1. Publishes
Note! In order for the MIM counters in this section to publish correct values, the hostnames
specified in the Peer Table of the selected Diameter routing profile must be consistent with the
Origin-Host AVP in answer commands. For more information about the Peer Table, see Section 13.2.3.2, “Diameter Routing Profile”.
Communication Failure Network Layer: This MIM parameter contains the number of connection problems detected at the network level.
Diameter Too Busy Total Count: This MIM parameter is of the long type and is defined as a global MIM context type.
DPA Count: This MIM parameter contains the number of sent and received DPA (Disconnect-Peer-Answer) commands for each peer in the selected Diameter routing profile.
For an example of how to use this MIM value, see Section 13.2.3.2.3.3, “To
Read the Realm Routing Table in APL”.
Records in decoder queue: This MIM parameter contains the current number of records in the queue for decoding.
Workflow Round Trip Latency Max: This MIM parameter contains the maximum workflow processing latency since workflow start.
13.2.4.3.2. Accesses
You can configure an event notification that is triggered whenever a state change occurs. For more
information about this event, see Section 5.5.19, “Diameter Peer State Changed Event”.
For more information about this event, see Section 5.5.5, “Diameter Dynamic Event”.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
The message is generated if the workflow is started with the agent in the passive mode.
For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.
You apply a Diameter_Request agent to your workflow in order to transmit requests from the workflow.
13.2.5.1. Configuration
You open the Diameter_Request agent configuration view from the workflow configuration by either
double-clicking the agent icon, or by right-clicking it and then selecting Configuration.
Associated Diameter_Stack
From the drop-down list, which includes all the Diameter_Stack agents in the workflow, select the Diameter_Stack that you want requests to be sent from.
13.2.5.2. Introspection
Introspection describes the type of data that the agent expects and delivers.
The example below demonstrates a call scenario where a client issues a Service-Authorization-Request in order to authorize a user for a certain service. The Diameter_Stack agent sends back a Service-Authorization-Answer that is assigned the value yes (1) if authorization is successful, or no (0) if authorization has failed.
For information about how to configure a Diameter application profile, see Section 13.2.3.1, “Diameter
Application Profile”.
The commands used in this example are specified in ABNF format below:
The AVPs used by the commands are specified in ABNF format below:
You only need to create one Diameter application profile in order to run this example on one system, since several Diameter_Stack agents can share one profile. If you wish to run the Diameter_Stack workflow and the Diameter_Request workflow on different systems, you may create two profiles.
If you only create one (shared) Diameter routing profile, you must configure all peers and realms in this profile. Since duplicate IP addresses or hostnames are not allowed in the peer table, you may also need to update the hosts file of your operating system in order to run two peers on the same machine. The location of this file is operating system specific, but it can be found under /etc on most Linux and Unix distributions.
127.0.0.1 localhost
172.16.207.1 dia1 dia2 dia3 dia4
For further information about the Diameter routing profile, see Section 13.2.3.2, “Diameter Routing Profile”.
This Analysis agent contains the code that is needed to read the Service_Authorization_Request. Normally it would then perform the authorization by populating a request to an external Charging/Rating System, but in this code example we only create a positive Service Authorization Answer:
import ultra.diameter_example.dia_app_prof;
initialize {
    debug("wf started");
}
consume {
    // The Request is a subclass of RequestCycleUDR
    if (instanceOf(input.Request, Service_Authorization_Request)) {
        Service_Authorization_Request request =
            (Service_Authorization_Request)input.Request;
}
}
This agent receives comma-separated UDRs of the form user,id, where user is an IP address and id is the Service ID of a requested service.
external TCPext {
ascii user : terminated_by( "," );
ascii id : int(base10), terminated_by( 0xD );
};
internal TCPint :
extends_class( "com.digitalroute.wfc.tcpipcoll.TCPIPUDR" ) {
string session_Id;
};
import ultra.diameter_example.dia_app_prof;
long sessionNo;
consume {
if (instanceOf(input,TCP_TI)){
TCP_TI tcp_req = (TCP_TI)input;
tcp_req.session_Id="session"+(string)sessionIncr();
debug("User:" + tcp_req.user);
debug("Id:" + tcp_req.id);
request.Session_Id=tcp_req.session_Id;
udrRoute(diam_req,"Diameter_req");
debug("Routed Request");
} else if (instanceOf(input,RequestCycleUDR)){
//Check if this is an answer UDR.
if (instanceOf(((RequestCycleUDR)input).Answer,
Service_Authorization_Answer)){
Service_Authorization_Answer answ =
(Service_Authorization_Answer)((RequestCycleUDR)input).Answer;
else {
//Check if it is an error answer message.
if(instanceOf(((RequestCycleUDR)input).Answer,
Base.Error_Answer_Message)) {
Diameter.Base.Error_Answer_Message eam =
(Diameter.Base.Error_Answer_Message)
((RequestCycleUDR)input).Answer;
debug("Received an error message answer: " +
eam.Error_Message);
}
}
}
}
The Diameter_Stack must be included in the request workflow since the Diameter_Request agent cannot perform actions such as handshaking according to the Diameter Base protocol.
Configure the Diameter_Stack agent so that Diameter Identity, Port, and Realm are consistent with
the Diameter routing profile used by the stack in the Diameter_Stack workflow.
The Analysis agent connected with the Diameter_Stack does nothing. However, it is required since
all agents must be connected.
Example 96. The Diameter Command ABNF Specification - Copied from RFC 6733:
Every Command Code that is defined must include a corresponding ABNF specification that is
used to define the AVPs. The following format is used in the definition:
command-name = diameter-name
application-id = 1*DIGIT
command-id = 1*DIGIT
The Command Code assigned to the command
min = 1*DIGIT
The minimum number of times the element may be present.
The default value is zero.
max = 1*DIGIT
The maximum number of times the element may be present. The
default value is infinity. A value of zero implies the AVP
MUST NOT be present.
avp-spec = diameter-name
The AVP-spec has to be an AVP Name, defined in the base or
extended Diameter specifications.
13.2.7.2.1. XML
13.2.7.2.1.1. <diameter-protocol>
<diameter-protocol name='unknown'>
The name attribute is optional and specifies the Diameter protocol name.
<avp>
<command>
13.2.7.2.1.2. <avp>
Each AVP tag requires an ID and a name. The ID is the AVP code allocated by IANA for this AVP.
The name identifies this AVP in grouped AVPs or commands. The vendor attribute is optional and
sets the ID of the AVP vendor.
<flag-rules>
<simple-type>
<enumeration>
<layout>
<may-encrypt>
13.2.7.2.1.3. <flag-rules>
The flag-rules tag is required for the avp tag, and defines the AVP flags with a number of flag-rule
definition tags.
<flag-rules>
<flag-rule name='mandatory' rule='must'/>
<flag-rule name='protected' rule='must_not'/>
</flag-rules>
13.2.7.2.1.4. <flag-rule>
The flag-rule tag defines the value for one of the valid AVP flags.
The flag-rule tag has two required attributes. The name attribute is the flag name, either mandatory
or protected. The rule attribute defines the flag value and can be must, may, should_not or must_not.
13.2.7.2.1.5. <may-encrypt>
<may-encrypt/>
13.2.7.2.1.6. <simple-type>
<simple-type> with its name attribute defines the AVP type as any of the following types:
• Unsigned32
• Unsigned64
• Signed32
• Signed64
• Float32
• Float64
• DiameterIdentity
• UTF8String
• Address
• OctetString
• Time
• DiameterURI
• IPFilterRule
<simple-type name='Time'/>
13.2.7.2.1.7. <enumeration>
The enumeration tag defines AVPs of type Enumerated, and can have any number of <enumerator>
sub tags.
<enumeration>
<enumerator value='1' name='EVENT_RECORD'/>
<enumerator value='2' name='START_RECORD'/>
<enumerator value='3' name='INTERIM_RECORD'/>
<enumerator value='4' name='STOP_RECORD'/>
</enumeration>
13.2.7.2.1.8. <enumerator>
The enumerator tag defines an element in an Enumerated AVP type. It has two required attributes
called name and value. For an example of the syntax see Section 13.2.7.2.1.7, “<enumeration>”.
13.2.7.2.1.9. <layout>
The layout tag defines AVPs of type Grouped. A grouped AVP consists of a sequence of AVPs. It is also possible to nest grouped AVPs, that is, to include a grouped AVP within a grouped AVP.
<layout>
<fixed>
<avp-ref name='Session-Id' min='0'/>
</fixed>
<required>
<avp-ref name='Origin-Host'/>
<avp-ref name='Origin-Realm'/>
<avp-ref name='Result-Code'/>
</required>
<optional>
<avp-ref name='Origin-State-Id'/>
<avp-ref name='Error-Reporting-Host'/>
<avp-ref name='Error-Message'/>
<avp-ref name='Proxy-Info' max='*'/>
<any-avp/>
</optional>
</layout>
<fixed>
<required>
<optional>
13.2.7.2.1.10. <fixed>
The fixed tag defines the fixed AVPs included in a grouped AVP. For an example of the syntax see
Example 104, “layout syntax”.
<avp-ref>
13.2.7.2.1.11. <required>
The required tag defines the required AVPs included in a grouped AVP. For an example of the syntax
see Example 104, “layout syntax”.
<avp-ref>
<any-avp>
13.2.7.2.1.12. <optional>
The optional tag defines the optional AVPs included in a grouped AVP. For an example of the syntax
see Example 104, “layout syntax”.
<avp-ref>
<any-avp>
13.2.7.2.1.13. <avp-ref>
The avp-ref tag contains a reference to an AVP that should be included in a grouped AVP. The tag
has a required attribute called name. It holds the name of the referenced AVP. The optional attributes
min and max set the qualifiers for the AVP. For an example of the syntax, see Example 104, “layout
syntax”.
13.2.7.2.1.14. <any-avp>
The any-avp tag specifies that the group list of a grouped AVP can contain any number of AVPs of any kind.
13.2.7.2.1.15. <command>
<command id='257'>
The required attribute id is the command code allocated by IANA for this command. The optional attribute application sets the command application ID.
<answer>
<request>
13.2.7.2.1.16. <answer>
<answer name='Error-Answer-Message'>
<header-bits>
<layout>
13.2.7.2.1.17. <header-bits>
<header-bits>
<header-bit name='request' value='0'/>
<header-bit name='proxiable' value='1'/>
<header-bit name='error' value='1'/>
</header-bits>
<header-bit>
13.2.7.2.1.18. <header-bit>
This tag defines a command header bit. For an example of the syntax, see Example 107, “header-bits
syntax”.
The header-bit tag has two required attributes. name is the header bit name and can be request,
proxiable or error. The value is the bit value (0 or 1). Any other value will cause the XML
aborter to abort with an error message.
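Putting the tags described in this section together, a minimal profile might look as follows. This is an illustrative sketch assembled from the fragments above: the protocol name, the choice of AVP, and the exact attribute spellings for the avp tag are assumptions, not a verified profile.

```xml
<diameter-protocol name='example-protocol'>
  <avp id='263' name='Session-Id'>
    <flag-rules>
      <flag-rule name='mandatory' rule='must'/>
      <flag-rule name='protected' rule='must_not'/>
    </flag-rules>
    <simple-type name='UTF8String'/>
  </avp>
  <command id='257'>
    <answer name='Error-Answer-Message'>
      <header-bits>
        <header-bit name='request' value='0'/>
        <header-bit name='error' value='1'/>
      </header-bits>
      <layout>
        <required>
          <avp-ref name='Session-Id'/>
        </required>
        <optional>
          <any-avp/>
        </optional>
      </layout>
    </answer>
  </command>
</diameter-protocol>
```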
13.2.7.2.2. DTD
13.2.8.1. Limitations
While MediationZone® provides the capabilities of a Diameter Server and a Diameter Client, it does not provide all the capabilities of a Diameter Agent as defined in chapter 1.2, Terminology, of the Diameter Base Protocol (RFC 6733).
• Transport security (TLS) is negotiated via the Inband-Security AVP in CER/CEA exchange and
not prior to the CER/CEA exchange as recommended in RFC 6733.
13.2.8.3. Failed-AVP
The AVP Failed-AVP is populated for the following values in the Result-Code AVP:
DIAMETER_INVALID_AVP_VALUE 5004
DIAMETER_MISSING_AVP 5005
DIAMETER_AVP_OCCURS_TOO_MANY_TIMES 5009
DIAMETER_UNABLE_TO_COMPLY 5012
DIAMETER_INVALID_AVP_LENGTH 5014
The Kafka Forwarding agent is listed among the processing agents in Desktop, while the Kafka Collection agent is listed among the collection agents.
13.3.1.1. Prerequisites
The reader of this document should be familiar with:
• Apache Kafka
13.3.2. Preparations
13.3.2.1. For a Quick Start of Embedded Kafka
To use the Kafka Collection and Forwarding agents, you are required to install a Kafka cluster. To
create a cluster embedded in MediationZone® , take the following steps:
1. Start all three of the predefined Service Contexts mapped to Zookeeper (zk1, zk2, and zk3), and all three of the predefined Service Contexts mapped to Kafka (sc1, sc2, and sc3). Then start the services defined in $MZ_HOME/etc/custom-services.conf, in this case Zookeeper and Kafka, since Kafka requires Zookeeper to keep track of its cluster. To do this, use the following commands:
2. You must create a Kafka topic and one or more partitions to write to. You use the kafka command
to do this as described in the mzsh Command Line Tool document. Refer to the example below to
create a topic named mytopic with three partitions and a replication factor of two.
3. You can now create your Kafka profile in the MediationZone® Desktop. Refer to Section 13.3.4, “Kafka Profile”. Enter the Kafka Topic which you have created. Select the Use Embedded Kafka check box, and enter 'kafka1' as the Kafka Service Key. When you proceed to creating Kafka Forwarding and Collection agents, you can then refer to the profile you have created.
By default, the topic logs are stored in $MZ_HOME/storage/kafka. The customizable property selecting this storage path is in $MZ_HOME/common/config/templates/kafka/<version>/custom/template.conf. In addition, other Kafka broker properties, such as log retention rules, are stored in $MZ_HOME/common/config/templates/kafka/<version>/custom/broker-defaults.properties.
13.3.3. Overview
The Kafka agents enable you to configure workflows in MediationZone® with improved scalability and fault tolerance. As part of the data collection, data is written to Kafka to secure it against loss if a failure occurs, and each topic can be set up to be replicated across several servers.
Note! If the platform is restarted, you must also restart the Service Contexts using the following
command:
13.3.3.3. Scaling
Using Kafka provides the capability to scale as required. When creating your Kafka configuration, consider how many partitions you may eventually require, and add more than you currently need, as this will make it easier to scale up at a later stage. If necessary, you can add partitions later on using the kafka --alter option, but it is a more complicated process. For information on how to use the kafka --alter option, see the Command Line Tool document.
You can also refer to http://kafka.apache.org for guidance on scaling using partitions.
The Kafka profile is loaded when you start a workflow that depends on it. Changes to the profile become
effective when you restart the workflow.
The Connectivity tab is displayed by default when creating or opening a Kafka profile.
Kafka Topic: Enter the Kafka topic that you want to use for your configuration. For information on how to create a Kafka topic, refer to the Command Line Tool document.
Use Embedded Kafka: If you want to use the Kafka Service, which is the Kafka embedded in MediationZone®, select this check box.
Kafka Service Key: If you have selected to use Embedded Kafka, you must complete the Kafka Service Key. To determine which service key to use for Kafka Services, refer to $MZ_HOME/etc/custom-services.conf.
Host: If you are using external Kafka, enter the host name for Zookeeper.
Port: If you are using external Kafka, enter the port for Zookeeper.
Kafka Brokers: Use the Add button to enter the addresses of the Kafka Brokers that you want to connect to.
In the Advanced tab you can configure properties for optimizing the performance of the Kafka Producer
and Consumer. The Advanced tab contains two tabs: Producer and Consumer.
In the Producer tab, you can configure the properties of the Kafka Forwarding agent.
Figure 411. Kafka Profile Configuration - Producer tab in the Advanced tab
The property producer.abortunknown=true sets the agent to abort if the broker replies with
Unknown topic or partition. For further information on the other properties, see the text
in the Advanced producer properties field or refer to https://kafka.apache.org.
In the Consumer tab, you can configure the properties of the Kafka Collection agent.
Figure 412. Kafka Profile Configuration - Consumer tab in the Advanced tab
See the text in the Advanced consumer properties field for further information about the properties.
Profile: The name of the profile as defined in the Kafka Profile Editor (select Kafka Profile after clicking the New Configuration button in the Desktop).
Route On Error: Select this check box if you want a KafkaExceptionUDR, containing the error message, to be routed from the producer agent when an error occurs.
Note! The emission of error UDRs is under flood protection, which means that only one unique error message UDR is issued per second, to prevent flooding of identical errors.
This section includes information about the Kafka Forwarding agent transaction behavior. For information about the general MediationZone® transaction behavior, see Section 4.1.11.8.
13.3.5.1.1.1. Emits
If you select the Route On Error check box in the Kafka Forwarding agent configuration window,
the agent emits data in the KafkaExceptionUDR. For further information, refer to Section 13.3.5.1,
“Workflow Configuration”.
13.3.5.1.1.2. Retrieves
The agent retrieves data from the KafkaUDR.
13.3.5.1.2. Introspection
This section includes information about the data type that the agent expects and delivers.
For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.
13.3.5.1.3.1. Publishes
13.3.5.1.3.2. Accesses
The agent accesses various resources from the workflow and all its agents to configure the mapping
to the Named MIMs (that is, what MIMs to refer to the collection workflow).
For information about the agent message event type, see Section 5.5.14, “Agent Event”.
Profile: The name of the profile as defined in the Kafka Profile Editor (select Kafka Profile after clicking the New Configuration button in the Desktop).
All: If enabled, messages are collected from all of the partitions.
Range: If enabled, messages are collected from the range that you specify.
Specific: If enabled, messages are collected from the specified partition(s), given as a comma-separated list.
Start at beginning: You must determine from which offset you want to start collecting. If enabled, messages are collected from the first offset. If you select this option, there is a risk that messages will be processed multiple times after a restart.
Start at end: If enabled, messages are collected from the last offset at the time the workflow was started. If you select this option, there is a risk that data can be lost after a restart.
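The trade-off between the two start modes can be illustrated with a simulated partition log: restarting from the first offset re-reads already-processed messages, while restarting from the end skips whatever was published while the workflow was down. This is a conceptual sketch, not the agent's implementation:

```python
def collect(log, start_offset):
    """Return the messages a consumer reads from `start_offset` onward."""
    return log[start_offset:]

log = ["m0", "m1", "m2", "m3", "m4"]   # the Kafka log for one partition
already_processed = 3                  # offsets 0..2 were consumed before a restart

# Start at beginning: everything is re-read, so offsets 0..2 become duplicates.
from_beginning = collect(log, 0)
duplicates = from_beginning[:already_processed]   # ["m0", "m1", "m2"]

# Start at end: collection resumes at the current end of the log, so the
# messages published while the workflow was down (m3, m4) are never read.
from_end = collect(log, len(log))                 # []
```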
This section includes information about the Kafka Collection agent transaction behavior. For information about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.
13.3.6.1.1.1. Emits
13.3.6.1.1.2. Retrieves
The agent retrieves a message from the Kafka log and places it in a KafkaUDR.
13.3.6.1.2. Introspection
This section includes information about the data type that the agent expects and delivers.
For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.
13.3.6.1.3.1. Publishes
13.3.6.1.3.2. Accesses
For information about the agent message event type, see Section 5.5.14, “Agent Event”.
The Kafka UDR types can be viewed in the UDR Internal Format Browser. To open the browser,
first open an APL Editor, and, in the editing area, right-click and select UDR Assistance.
13.3.7.1. KafkaUDR
KafkaUDR is the UDR that is populated via APL and routed to the Kafka Forwarding agent, which in turn writes the data to the specified partition and the topic set in the Kafka profile. The Kafka Collection agent consumes the data from the Kafka log, from the specified partition(s) and the topic set in the Kafka profile, and places it in a KafkaUDR.
Field Description
data (bytearray) Producer: This field holds data to be passed to the Kafka log by the Kafka Forwarding agent (producer).
Consumer: This field is populated with the data read from the Kafka log.
offset (long) This is a read-only field, which is only relevant for the Kafka Collection agent. The field is populated by the Kafka Collection agent and contains the offset in the Kafka log from where the message was consumed.
partition (short) Producer: This field holds the partition to which the Kafka Forwarding agent (producer) writes the message. If this field is not populated, the partition is chosen randomly.
Consumer: This field holds the partition from which the message was consumed by the Kafka Collection agent (consumer).
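The producer-side semantics of the partition field can be sketched as follows: an explicitly set partition is honoured, otherwise one is chosen at random. The FakeKafkaUDR class and choose_partition helper below are stand-ins invented for illustration; they are not the real KafkaUDR type or the agent's logic:

```python
import random

class FakeKafkaUDR:
    """Stand-in for the KafkaUDR fields discussed above (illustration only)."""
    def __init__(self, data, partition=None):
        self.data = data            # bytes payload destined for the Kafka log
        self.partition = partition  # target partition, or None if unset

def choose_partition(udr, num_partitions, rng=random):
    """Honour an explicitly set partition; otherwise pick one at random."""
    if udr.partition is not None:
        return udr.partition
    return rng.randrange(num_partitions)
```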
13.3.7.2. KafkaExceptionUDR
The KafkaExceptionUDR is used to return a message if an error occurs.
Field Description
message (string) This field provides a message with information on the error which has occurred.
13.4.1.1. Prerequisites
The user of this information must be familiar with:
13.4.2. Overview
13.4.2.1. SMPP Protocol
The Short Message Peer to Peer (SMPP) protocol is an open, industry standard protocol designed to
provide a flexible data communications interface for transfer of short message data between a Short
Message Service Centre (SMSC), or other type of Message center, and an SMS application system.
13.4.3. Agents
The SMPP agents, the Receiver (located among the collection agents) and the Transmitter (located among the processing agents), can receive and submit SMs (Short Messages) using the store-and-forward message mode.
Note! Outbind is not supported, which means that the agents can only connect to the SMSC,
and the SMSC cannot connect to the agents.
13.4.3.1. Configuration
The SMPP agents' configuration windows are displayed when double clicking on the the agents in a
workflow, or when right clicking on the agents and selecting Configuration....
Both agents' configuration dialogs contain three different tabs; SMSC, ESME, and Connection.
The SMSC tab contains configurations related to the SMSC to/from which the agent will send/receive
data.
Remote Host: Enter the IP address or hostname of the SMSC with which the agent will communicate.
Remote Port: Enter the port number on the SMSC with which the agent will communicate.
System ID: Enter the ID of the ESME system requesting to bind with the SMSC.
Password: Enter the password used by the SMSC to authenticate the ESME.
System Type: Enter the type of ESME system, e.g. VMS (Voice Mail System) or OTA (Over-The-Air Activation System).
Type of Number: Enter the type of number (TON) used in the SME address, e.g. International, National, or Subscriber Number.
Numbering Plan Indicator: Enter the numbering plan indicator (NPI) used in the SME address, e.g. ISDN, Data, or Internet.
Address Range: Enter the range of SME addresses used by the ESME.
Reconnect Attempts: Enter the number of reconnect attempts you want to allow in case a connection goes down.
Note! The default setting, 0, means that the number of reconnect attempts is infinite, i.e. maxint.
Reconnect Interval: Enter the time interval that you want to pass before a reconnect attempt is made.
Transaction Timer: Enter the time interval allowed between an SMPP request and the corresponding SMPP response.
Enquire Link Timer: Enter the time interval allowed between operations, after which an SMPP entity should interrogate whether its peer still has an active session. This setting determines how often the enquire_link operation should be sent. This timer may be active on either communicating SMPP entity (i.e. SMSC or ESME).
13.4.3.4. Operations
For the Transmitter agent, the following operation pairs are supported:
• bind_transmitter - bind_transmitter_resp
• unbind - unbind_resp
• submit_sm - submit_sm_resp
• enquire_link - enquire_link_resp
For the Receiver agent, the following operation pairs are supported:
• bind_receiver - bind_receiver_resp
• unbind - unbind_resp
• deliver_sm - deliver_sm_resp
• enquire_link - enquire_link_resp
Note! Only one request - response operation pair can be handled simultaneously, which means
that a response must be sent for a pending request before the next request can be handled.
Note! The bind and unbind operations only occur when starting/stopping the workflow.
13.4.4. Introspection
Introspection describes the type of data an agent expects and delivers.
The Receiver agent produces DELIVER_SM UDRs and expects DELIVER_SM_RESP UDRs.
The Transmitter agent expects SUBMIT_SM UDRs and produces SUBMIT_SM_RESP UDRs.
13.4.5.1. Publishes
Session State is of the string type and is defined as a global MIM context type.
13.4.5.2. Accesses
The agent does not itself access any MIM resources.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
This message is displayed during the time interval set in the Reconnect Interval field when a reconnect attempt has failed.
• Setup successful.
This message is displayed when the number of attempts specified in the Reconnect attempts field
has been exceeded.
• DELIVER_SM / SUBMIT_SM
These messages are displayed when DELIVER_SMs and SUBMIT_SMs are received.
Field Description
data_coding (int) This field defines the encoding scheme of the short message user data, (3) Latin-1 (ISO-8859-1), or (8) UCS2 (UTF-16BE).
dest_addr_npi (int) This field indicates the NPI (Numbering Plan Indicator) of the
destination address.
dest_addr_ton (int) This field indicates the TON (Type Of Number) of the destination
address.
destination_addr This field contains the destination address.
(string)
esm_class (int) This field is used for indicating special message attributes associated with the short message.
priority_flag (int) This field designates the priority level of the message.
protocol_id (int) This field contains the Protocol Identifier. This is a network specific
field.
registered_delivery (int) This field indicates whether an SMSC delivery receipt or an SME acknowledgement is required or not.
replace_if_present_flag (int) This field indicates whether a submitted message should replace an existing message or not.
schedule_delivery_time (string) This field defines when the short message is to be scheduled by the SMSC for delivery. Set to NULL for immediate message delivery.
sequence_number (int) This field is used for correlating responses with requests.
short_message (bytearray) This field contains the actual SM (Short Message), which can consist of up to 254 octets of user data.
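Since the two data_coding values map onto standard character encodings, and the 254-octet limit applies to the encoded bytes rather than the character count, the constraint can be illustrated with standard library encoders. The encode_short_message helper below is hypothetical, for illustration only:

```python
def encode_short_message(text, data_coding):
    """Encode SM user data per data_coding and enforce the 254-octet cap."""
    if data_coding == 3:
        payload = text.encode("latin-1")    # (3) Latin-1 / ISO-8859-1
    elif data_coding == 8:
        payload = text.encode("utf-16-be")  # (8) UCS2, big-endian
    else:
        raise ValueError("unsupported data_coding: %d" % data_coding)
    if len(payload) > 254:
        raise ValueError("short_message exceeds 254 octets")
    return payload

# UCS2 uses two octets per character, so the same text may fit
# in Latin-1 (254 characters) but not in UCS2 (127 characters).
```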
Field Description
command_status (int) This field of an SMPP message response indicates the success
or failure of an SMPP request.
OriginalData (bytearray) This field contains the original data in bytearray format.
Field Description
data_coding (int) This field defines the encoding scheme of the short message user data, (3) Latin-1 (ISO-8859-1), or (8) UCS2 (UTF-16BE).
dest_addr_npi (int) This field indicates the NPI (Numbering Plan Indicator) of the
destination address.
dest_addr_ton (int) This field indicates the TON (Type Of Number) of the destination
address.
destination_addr This field contains the destination address.
(string)
esm_class (int) This field is used for indicating special message attributes associated with the short message.
priority_flag (int) This field designates the priority level of the message.
protocol_id (int) This field contains the Protocol Identifier. This is a network specific field.
registered_delivery (int) This field indicates whether an SMSC delivery receipt or an SME acknowledgement is required or not.
replace_if_present_flag (int) This field indicates whether a submitted message should replace an existing message or not.
schedule_delivery_time (string) This field defines when the short message is to be scheduled by the SMSC for delivery. Set to NULL for immediate message delivery.
service_type (string) This field can be used to indicate the SMS Application service associated with the message. Set to NULL for default SMSC settings.
short_message (byte- This field contains the actual SM (Short Message) which can
array) consist of up to 254 octets of user data.
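The interplay between data_coding and the 254-octet short_message limit can be illustrated outside MediationZone. The following Python sketch (an assumption-free illustration of the two encoding schemes listed for data_coding, not MediationZone code) shows how the chosen scheme changes the octet count of the same text:

```python
# Illustration only: how the data_coding scheme affects the octet count
# of the short_message payload (max 254 octets per the table above).
def sm_octets(text: str, data_coding: int) -> int:
    if data_coding == 3:   # Latin-1 (ISO-8859-1): 1 octet per character
        return len(text.encode("iso-8859-1"))
    if data_coding == 8:   # UCS2 (UTF-16BE): 2 octets per character
        return len(text.encode("utf-16-be"))
    raise ValueError("unsupported data_coding in this sketch")

msg = "MESSAGE"
print(sm_octets(msg, 3))         # 7 octets
print(sm_octets(msg, 8))         # 14 octets
print(sm_octets(msg, 8) <= 254)  # fits in one short_message
```

With data_coding 8, the usable message length is effectively halved, since each character consumes two octets of the 254-octet budget.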
Field Description
command_status (int): This field of an SMPP message response indicates the success or failure of an SMPP request.
message_id (string): This field contains the unique message identifier reference assigned by the SMSC to each submitted short message. It is an opaque value and is set according to SMSC implementation.
submitSmUDR (SUBMIT_SM (SMPP)): This field contains the SUBMIT_SM UDR for which this SUBMIT_SM_RESP has been received.
OriginalData (bytearray): This field contains the original data in bytearray format.
13.4.9. Examples
This section contains one example each for the SMPP Receiver and Transmitter agents.
The SMPP Receiver agent sends DELIVER_SM UDRs to the Analysis agent, which contains the following code:
consume {
    DELIVER_SM_RESP deliver_sm_resp = udrCreate(DELIVER_SM_RESP);
    if ((input.sequence_number % 2) == 0) {
        deliver_sm_resp.command_status = 2;
    } else {
        deliver_sm_resp.command_status = 0;
    }
    udrRoute(deliver_sm_resp);
}
This code will:
• Check whether the sequence number in the incoming DELIVER_SM UDR is even or odd.
• If the sequence number is even, the command_status field in the deliver_sm_resp UDR will be
set to 2, and if it is odd, the field will be set to 0.
• The deliver_sm_resp UDR will then be routed back to the SMPP receiver agent.
The TCP/IP agent sends TCP_TI UDRs into the workflow using a decoder that defines this UDR type. The Analysis agent contains the following code,
import ultra.SMPP;

consume {
    if (instanceOf(input, TCP_TI)) {
        TCP_TI tcp_udr = (TCP_TI) input;
        strToBA(tcp_udr.response, "message=" + tcp_udr.message + "\r\n");
        SUBMIT_SM submit_sm = udrCreate(SUBMIT_SM);
        bytearray sm;
        strToBA(sm, "MESSAGE", "UTF-16BE");
        submit_sm.short_message = sm;
        submit_sm.data_coding = 8;
        submit_sm.source_addr = "555123456";
        submit_sm.destination_addr = "555987654";
        udrRoute(tcp_udr, "OUT_TCP");
        udrRoute(submit_sm, "OUT_SMPP");
    }
}
which will:
• If the received UDR is of the TCP_TI type, the UDR will be named tcp_udr, and the response
field in the UDR will be populated with the text "message=<contents of the message field>" in
bytearray format.
• Create a bytearray object called sm, and populate this bytearray with the text "MESSAGE" in bytearray
format with UTF-16BE encoding.
• Populate the short_message field in the submit_sm UDR with the new bytearray.
• Set the data coding to 8, which equals the UTF-16BE encoding according to the specification.
• Set the source address to 555123456 and the destination address to 555987654.
• Route the submit_sm UDR to the SMPP transmitter agent, and the tcp_udr UDR to the TCP/IP
agent.
The SMPP transmitter agent will then send SUBMIT_SM_RESP UDRs back to the Analysis agent when receiving the corresponding SUBMIT_SM_RESP UDRs from the SMSC. The SUBMIT_SM_RESP UDRs contain the original SUBMIT_SM for which the SMSC has responded.
13.5.1.1. Prerequisites
The reader of this information should be familiar with:
• APL
• Web Service
• WSDL
13.5.2. Overview
A Web Service is a software system that supports interaction between computers over a network.
The MediationZone® Web Service agents communicate through SOAP in XML syntax, and use WSDL files. The agents support the following standards:
• WSDL 1.1
• XML 1.0
• SOAP 1.1
• HTTP 1.1
• HTTPS
You enable Web Service transactions in MediationZone® by defining a WS profile, or profiles, and
including the Web Service agents and their configurations in a workflow.
13.5.2.1. WS Profile
In the WS profile you specify a WSDL file that mainly includes the following parts of a Web Service definition:
• XML Schema: Defines information about the service, either directly or via an XSD file
The WS profile can include more than one WSDL file reference.
The WS profile is loaded when you start a workflow that depends on it. Changes to the profile become
effective when you restart the workflow.
When you save a WS profile that has a WSDL file assigned, the data types specified in the Schema section of the WSDL are mapped to UDR types for the MediationZone® workflow. For further information, see Section 13.5.4, “UDR Type Structure”.
The collecting agent works in the same way as a Service Provider, or server, in the sense that it receives
requests from a client, or clients, and transfers the requests to a MediationZone® workflow.
In a synchronous operation, when the collection agent receives a reply back from the workflow, it de-
livers the response to the requesting client.
In an asynchronous operation the collection agent does not receive any reply, and therefore does not
respond to the client.
The processing agent works in the same way as a Service Requester, or a client, that sends a request
to a server, where a certain service is available.
In a synchronous operation, when the processing agent receives a reply, it delivers the reply to its
configured output.
In an asynchronous operation, the requester does not receive any reply and does not deliver one, either.
To open the editor, click the New Configuration button in the upper left part of the MediationZone®
Desktop window, and then select WS Profile from the menu.
Note! Any restrictions on the WSDL format will be ignored by the outgoing web service.
There is one menu that is specific for WS profile configurations, and it is described in the coming
section:
Item Description
Export WSDL...: Exports the original WSDL file to a directory on the local workstation. Please refer to Section 13.5.3.3, “Configuration Tab” for further information.
Export Transport Level Security Keystore...: Exports the original Keystore file to a directory on the local workstation. Please refer to Section 13.5.3.5, “Security Tab” for further information.
Export Web Service Security Settings Keystore...: Exports the original Keystore file to a directory on the local workstation. Please refer to Section 13.5.3.5, “Security Tab” for further information.
Note! The Web Service Security settings are not populated automatically in accordance with
the policy you hold. You must go to the Security tab in the WS profile and complete the relevant
settings. The settings completed in the Security tab determine your Web Service Security configuration.
If you do not enter your settings, no Web Service Security is enabled. For further information,
refer to Section 13.5.3.5, “Security Tab”.
Transport Protocol
Select the protocol over which the web service communicates: HTTP or HTTPS.
The WS Profile configuration can handle either a single WSDL file or several WSDL files, which can all be concatenated using the concatenate WSDL file functionality.

Single WSDL File: The Import WSDL button is used to browse for, and import, a selected WSDL file. If the WSDL file is linked to adherent XSD files, all included files must be stored in the same directory as the imported WSDL file. If present, they will be imported at the same time as the WSDL file. If not, a validation error will occur.
Basic validation of the WSDL file is performed before the file is imported. After the
file is imported, the content of the WSDL file and adherent files can be viewed in the
View WSDL Content tab.
Full validation of the WSDL file is performed when the profile is saved.
If you configure the Web Service agents with any value that contradicts the
WSDL file specifications, your configuration will override the WSDL file.
Concatenated WSDL File: Used when parts of several WSDL files are required. The functionality is only useful if operations defined in bindings in several WSDL files shall be published at the same endpoint.

To add files to the Files list, click on the Add button and browse for the correct WSDL files in the Add WSDL File dialog box.

The list of WSDL files will be concatenated when the profile is saved. The concatenation functionality concatenates and arranges all operations defined in the WSDL binding element of several WSDL files, and everything else that is needed for the result to be a valid WSDL file.
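The idea behind the concatenation step can be sketched in a few lines of Python. This is a hypothetical illustration of merging the operation lists from several bindings so they can be published at the same endpoint; the structures and names are assumptions for the sketch, not MediationZone internals:

```python
# Hypothetical sketch: gather the operations from the binding element of
# each WSDL file into one list, refusing duplicates that would clash at
# the shared endpoint.
def concatenate_operations(wsdl_bindings):
    merged = []
    seen = set()
    for binding in wsdl_bindings:
        for op in binding["operations"]:
            if op in seen:
                raise ValueError("operation defined in more than one binding: " + op)
            seen.add(op)
            merged.append(op)
    return merged

a = {"operations": ["charge", "refund"]}
b = {"operations": ["query"]}
print(concatenate_operations([a, b]))  # ['charge', 'refund', 'query']
```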
The original WSDL file can be exported to a directory on the local workstation. Click on the Export
menu and select the Export WSDL option.
XML Binding
Enable JAXB Simple Binding Mode: When selected, XSD Schemas of loaded WSDL files are compiled into Java code using the experimental "Simple and better binding mode". This is necessary for some complex types, for instance when duplicate element names are used within the same complex type.
Disable JAXWS Wrapper Style Mode: When this check box is selected, the cycleUDR will contain request and response parameters that wrap all the arguments in request and response UDRs.
Enable Processing of Implicit SOAP Headers: When selected, any SOAP headers defined in the binding section of WSDL files are compiled into Java code. This is necessary in order to manipulate the SOAP headers in outgoing Web Service requests.
This drop-down list consists of the service port definitions that are included in the WSDL file. By se-
lecting a port you set the binding address.
• If the WSDL file consists of several concatenated files, only the first WSDL file service port
is applicable.
The service ports are displayed in the format XXX:YYY(ZZZ).
Figure 427. The Web Service Profile - View WSDL Content Tab
WSDL Definition: When a WSDL file is successfully imported to the Web Service profile, the WSDL filename will be stated here. The View button will open a read-only view of the file contents.
Included Files: If the imported WSDL Definition contains references to other WSDL or XSD files included in the configuration, they will be listed here.
View Selected: If one of the files in the Included Files list is selected, this button will open a read-only view of its imported content.
• Transport Level Security with Web Service Security standard with the option of enabling a
Timestamp
• Transport Level Security with Username Token and/or Addressing with the option of enabling
a Timestamp
• Transport Level Security with Web Service Security standard combined with Username Token
and/or Addressing with the option of enabling a Timestamp
• Web Service Security standard with Username Token and/or Addressing with the option of enabling
a Timestamp
To apply Transport Level Security, select the transfer protocol HTTPS in the Configuration tab.
The Web Service agents provide Web Service Security by supporting XML-signature and encryption.
A TimeStamp records the time of messages. Username Token uses authentication tokens, and Addressing provides unique message IDs.
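To make the Username Token and TimeStamp concepts concrete, the following Python sketch builds a minimal WS-Security SOAP header of the kind these settings produce. The namespace URIs are taken from the OASIS WS-Security 1.0 specification; MediationZone generates such headers internally, so this is only an illustration of the wire format:

```python
# Illustrative only: a minimal WS-Security header carrying a UsernameToken
# and a Timestamp, built with the standard library.
import xml.etree.ElementTree as ET

WSSE = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
WSU = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"

def security_header(username, password, created):
    sec = ET.Element("{%s}Security" % WSSE)
    token = ET.SubElement(sec, "{%s}UsernameToken" % WSSE)
    ET.SubElement(token, "{%s}Username" % WSSE).text = username
    ET.SubElement(token, "{%s}Password" % WSSE).text = password
    ts = ET.SubElement(sec, "{%s}Timestamp" % WSU)
    ET.SubElement(ts, "{%s}Created" % WSU).text = created
    return ET.tostring(sec, encoding="unicode")

xml = security_header("wsuser", "secret", "2024-01-01T00:00:00Z")
print("UsernameToken" in xml and "Timestamp" in xml)  # True
```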
Transport Level Security: Applicable only when HTTPS is selected in the Configuration tab.
Keystore: Click on the Import Keystore button and select the keystore JKS file that contains the private keys that you want to apply. To export the original Keystore file, select Export from the main menu of the Web Service profile configuration, and then select Export Web Service Security Settings Keystore.
Keystore Password: Enter the password that protects the keystore file.
Web Service Security Settings: Applicable for any selected protocol in the Configuration tab.
Enable Web Service Security For This Profile: When selected, Web Service security is used, and the other text boxes in the dialog are highlighted and must be completed. The Web Service Security Settings and Username Token and Addressing check boxes are also enabled for you to configure your Security settings. If you do not select any other check boxes in this tab, no Web Service Security is enabled.
Keystore Alias: The alias of the keystore entry that should be used.
Key Password: Enter the password that is used to protect the private key that is associated with the Keystore alias.
Enable Encryption: When selected, messages will be encrypted. If you select this option, you must complete the text boxes in the Web Service Security Settings dialog.
Enable Signing: When selected, messages will be signed. If you select this option, you must complete the text boxes in the Web Service Security Settings dialog.
Enable TimeStamp: When selected, messages will be recorded with the date and time.
Enable Username Token and Addressing: When selected, Username Token authentication is used, and the text boxes WS Token Username and WS Token Password are enabled and must be completed.
Enable Addressing: When selected, messages will be sent with a unique ID.
When the WS profile is saved, a number of UDRs are generated. They will be saved in a folder structure
based on the WS profile name. Therefore it is important to make sure the WS profile is saved in the
appropriate place with a suitable name. The UDR's folder structure will not automatically be adjusted
and saved along with the WS profile if a user decides to rename or move the profile to a new folder.
If at all possible, avoid renaming. In the event you must rename the profile it must be saved
again in its new location, regenerating the UDRs there. For further information about viewing
the UDR type structure, see Section 13.5.4.1, “The Folder Structure of the UDR Types”.
The UDR types that can be created when you save a WS profile are:
• UDR type: describes the complex types that are defined in the XML Schema
• ws.QName: This UDR type matches a qname data type in an XML Schema. There can only be one
ws.QName UDR type under ws.
• XML Element: A wrapper type that is defined as "nillable" in the XML Schema.
To open the editor, click the New Configuration button in the upper left part of the MediationZone®
Desktop window, and then select APL Code from the menu.
Figure 429. The UDR Assistance Menu in the APL Code Editor
The AbstractWSCycle UDRs and WSCycleUDR are created and saved in a folder, created and
named according to the following structure:
WS.[Directory Name].[WSProfile Name].cycles.[WSCycleUDR Names]
The UDR types related to the XML-schema are created and saved in a folder, created and named ac-
cording to the following structure:
WS.[Directory Name].[WSProfile Name].[alias].[complexTypeUDRs]
The alias is replaced with the name of the target namespace (tns). The name is set in the XML
Schema part of the WSDL file. If the target namespace has no name alias, the UDR will be saved in
the [WSProfile Name] folder.
When concatenating WSDL files, the aliases in the files can be identical, and if this occurs the
structure will be changed to avoid a name conflict. If the names include invalid characters, they
will be replaced with underscore characters.
If the WSDL filename includes invalid name characters, these will be replaced with underscore
characters in the WSDL filename.
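The replace-with-underscore rule described above can be sketched as follows. Note that the exact set of characters MediationZone treats as invalid is not stated here, so this sketch assumes anything outside letters, digits and underscore is replaced:

```python
# Sketch of the renaming rule: characters that are not valid in a folder
# or type name are replaced with underscores. The invalid-character set
# is an assumption for this illustration.
import re

def sanitize(name: str) -> str:
    return re.sub(r"[^A-Za-z0-9_]", "_", name)

print(sanitize("my-profile.v2"))  # my_profile_v2
```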
The AbstractWSCycle UDR is used as a marker to connect all WSCycleUDRs belonging to the same
WS profile. It consists of the following parts:
• context
• errorMessage
• operation
context (any): This field is used to store information about the context in which the operation has been invoked, when it is needed.
errorMessage (string, optional): The error message field is set if an error occurred during sending or receiving of a message, or if a Soap Exception occurred at the communication endpoint.
operation (string, constant): This is a constant string with the name of the operation as value. If the operation corresponding to this WSCycleUDR is operationName, the field will be operationName.
The agent using a WS profile will have as many input and output types as operations in the web service
defined by the WS profile.
The number of fields and field types in a WSCycle UDR will be set based on how the Web Service
operation is defined in the WSDL file. Each WSCycle UDR contains the number of fields necessary
to hold the information needed when sending or receiving a request or response.
For example, an operation with empty input and output messages and no declared fault types will have no other fields than the ones from the AbstractWSCycle UDR.
The structure of the WSCycleUDR_operationName depends on the definition of the operation in the
WSDL file. The following table will present more information about possible fields.
param (type corresponding to the input type): This field exists only if the operation has at least one request message type in its input message declaration. The param field type corresponds to the type defined in the XML schema, simple or complex type. If it is a complex type, a UDR containing fields corresponding to the types in the complex type will be created.
response (type corresponding to the output message of the operation): This field exists only if the operation has a response message that is not empty. The field shall be set before the WSCycleUDR is routed back to a Web Service Provider agent to send back a response message to the requester.
The FaultTypeName part of the field name will be the name of the declared fault type.
When a request arrives at the Web Service Provider, the agent first decodes and validates it into a pre-generated UDR type, the WSCycle UDR. The WSCycleUDR is then routed through the workflow with the param field set to the incoming message. If the client expects a response message, the workflow is responsible for populating the response field with an appropriate answer message (created through the udrCreate APL function). The WSCycleUDR must then be routed back to the Web Service Provider agent to transmit the answer.
Configurations made in the agent always override settings originating from the WSDL file.
13.5.5.1. Configuration
The Web Service Provider agent configuration dialog is displayed when right clicking on the Web
Service Provider agent and selecting the Configuration option, or when double-clicking on the agent
in the Workflow Editor.
Web Service Profile: Click on the Browse button and select the appropriate user defined WS profile.
Workflow Response Timeout (ms): Determines the number of milliseconds the Web Service Provider agent will wait for a response from the workflow before timeout.
13.5.5.1.1. HTTP
This tab is highlighted when the selected WS Profile is configured with either HTTP or HTTPS as
the transfer protocol.
Extract Profile Settings: Click on this button to automatically fill in the settings from the Service Port Definition in the profile.
HTTP Address: Enter the complete URL address, including port, for the web service used to connect to the information requesting client.
Enable Basic Access Authentication: Select this check box to enable Basic Access Authentication.
Username: Enter the username that should be provided by the requesting client when using Basic Access Authentication.
Password: Enter the password that should be provided by the requesting client when using Basic Access Authentication.
When Basic Access Authentication is enabled, the client program will have to provide credentials, such as username and password, in order to perform a request. Otherwise, an HTTP 401 status code will be returned.
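What Basic Access Authentication amounts to on the wire can be shown with a short sketch: the client sends an Authorization header containing base64("username:password"), and a server that does not receive matching credentials answers 401. The server-side check here is a hypothetical stand-in, not the agent's implementation:

```python
# Illustration of Basic Access Authentication: the client sends
# "Authorization: Basic base64(username:password)"; a missing or wrong
# header yields the 401 status described above.
import base64

def basic_auth_header(username: str, password: str) -> str:
    token = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8"))
    return "Basic " + token.decode("ascii")

def check_request(header, expected_user, expected_password):
    # Hypothetical server-side check, mirroring the 401 behaviour above.
    if header == basic_auth_header(expected_user, expected_password):
        return 200
    return 401

h = basic_auth_header("mzuser", "secret")
print(check_request(h, "mzuser", "secret"))     # 200
print(check_request(None, "mzuser", "secret"))  # 401
```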
13.5.5.2. Introspection
The agent emits and retrieves UDRs of the WSCycle_[operation name] UDR type. For further
information, see Section 13.5.4.3, “The WSCycle UDR Type”.
The agent neither publishes nor accesses any MIM parameters.
13.5.6.1. Configuration
The Web Service Request agent configuration dialog is displayed when right-clicking on the Web
Service Request agent and selecting the Configuration option, or when double-clicking on the agent
in Workflow Editor.
Web Service Profile: Click on the Browse button and select a predefined Web Service profile.
Response Timeout (ms): The timeout value specifies the maximum allowed response time in milliseconds back to the Request agent after a request has been sent to the service. If the response time is exceeded, the Request agent times out, an error message will be logged in the System Log, and the WSCycleUDR will be routed out from the agent with the errorMessage field set.
Support CDATA encapsulated content: If activated, content encapsulated with a CDATA tag will always be sent without escape characters in the SOAP message.
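The Response Timeout behaviour can be sketched outside MediationZone. In this hedged Python illustration, a queue stands in for the network, and a cycle dictionary stands in for the WSCycleUDR; when no reply arrives within the deadline, the errorMessage field is set instead of the response:

```python
# Hedged sketch of the Response Timeout behaviour: if no reply arrives
# within the deadline, the cycle object is passed on with its
# errorMessage field set instead of a response. Names are illustrative.
import queue

def await_response(replies: "queue.Queue", timeout_ms: int) -> dict:
    cycle = {"response": None, "errorMessage": None}
    try:
        cycle["response"] = replies.get(timeout=timeout_ms / 1000.0)
    except queue.Empty:
        cycle["errorMessage"] = "Response timeout after %d ms" % timeout_ms
    return cycle

q = queue.Queue()
print(await_response(q, 10)["errorMessage"])  # timeout: errorMessage is set
q.put({"success": True})
print(await_response(q, 10)["response"])      # reply arrived in time
```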
13.5.6.1.1. HTTP
Extract Profile Settings: Click on this button to automatically fill in the settings from the Service Port Definition in the profile.
HTTP Address: Enter the complete URL address for the web service used to connect to the information provider.
Enable Basic Access Authentication: Select this check box to enable use of Basic Access Authentication.
Username: Enter the username that should be used when making a request with Basic Access Authentication.
Password: Enter the password that should be used when making a request with Basic Access Authentication.
13.5.6.2. Introspection
The agent emits and retrieves UDRs of the WSCycle_[operation name] UDR type. For further
information, see Section 13.5.4.3, “The WSCycle UDR Type”.
The agent neither publishes nor accesses any MIM parameters.
about to send: Reported right before a request is sent to a Web Service provider.
done: Reported after a response has been received from a Web Service provider. It is also reported when a timeout occurs.
13.5.7. Example
The following example demonstrates a configuration of a simplified use of a Premium-SMS payment
procedure, performed with the MediationZone® Web Service agents.
• Defining a WS Profile
1. Click the New Configuration button in the upper left part of the MediationZone® Desktop window,
and then select WS Profile from the menu.
2. In the Configuration tab, click on the Import WSDL button, and select the WSDL file you want
to import.
You can now see the file contents on the View WSDL Content tab.
3. At the bottom of the Configuration tab, select the SOAP: Charger (Charger_SOAPBinding) in
the Service Port Definition drop-down list.
4. In the WS profile configuration, click on the File menu and select the Save As... option.
5. In the Save as dialog box select a folder and type Example in the Name text box.
6. Click OK.
7. Check the WS directory in the APL Code Editor to see the data structure that your WS profile
just generated. The APL Code Editor is opened by clicking on the New Configuration button in
MediationZone® Desktop, and then selecting APL Code from the menu.
8. Right-click in the text pane and select the UDR Assistance ... option.
9. Scroll down to the WS directory and expand it to see where data is stored once you save your WS
profile.
<wsdl:types>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
xmlns:tns="http://example.com/webservice/charger/types"
elementFormDefault="qualified"
targetNamespace="http://example.com/webservice/charger/types">
<complexType name="ChargingEvent">
<sequence>
<element name="id" type="string"/>
<element name="amount" type="float"/>
</sequence>
</complexType>
<element name="Charge">
<complexType>
<sequence>
<element name="serviceType" type="string"/>
<element name="chargingEvent" type="tns:ChargingEvent"/>
</sequence>
</complexType>
</element>
<element name="ChargeResult">
<complexType>
<sequence>
<element name="success" type="boolean"/>
<element name="message" type="string"/>
</sequence>
</complexType>
</element>
<element name="FaultDetail">
<complexType>
<sequence>
<element name="reason" type="int"/>
<element name="message" type="string"/>
</sequence>
</complexType>
</element>
</schema>
</wsdl:types>
<wsdl:message name="ChargingRequest">
<wsdl:part name="in" element="x1:Charge" />
</wsdl:message>
<wsdl:message name="ChargingResponse">
<wsdl:part name="in" element="x1:ChargeResult" />
</wsdl:message>
<wsdl:message name="chargeFault">
<wsdl:part name="faultDetail" element="x1:FaultDetail"/>
</wsdl:message>
<wsdl:portType name="Charger">
<wsdl:operation name="charge">
<wsdl:input name="chargingRequest" message="tns:ChargingRequest"/>
<wsdl:output name="chargingResponse" message="tns:ChargingResponse"/>
<wsdl:fault name="chargingFault" message="tns:chargeFault"/>
</wsdl:operation>
</wsdl:portType>
<wsdl:binding name="Charger_SOAPBinding" type="tns:Charger">
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
<wsdl:operation name="charge">
<soap:operation soapAction="" style="document"/>
<wsdl:input name="chargingRequest">
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output name="chargingResponse">
<soap:body use="literal"/>
</wsdl:output>
<wsdl:fault name="chargingFault">
<soap:fault name="chargingFault" use="literal"/>
</wsdl:fault>
</wsdl:operation>
</wsdl:binding>
<wsdl:service name="Charger_Service">
<wsdl:port binding="tns:Charger_SOAPBinding" name="Charger">
<soap:address location="http://localhost:8080/charge"/>
</wsdl:port>
</wsdl:service>
</wsdl:definitions>
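The Extract Profile Settings step mentioned later works against exactly this part of the WSDL: the soap:address location inside the service port. As a sketch of where that value lives, the following Python snippet pulls the address out of a reduced version of the file above, using the standard WSDL 1.1 and SOAP binding namespaces:

```python
# Sketch: locate the soap:address location attribute of a service port,
# the same value the HTTP Address setting is extracted from.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
SOAP_NS = "http://schemas.xmlsoap.org/wsdl/soap/"

snippet = """
<wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                  xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <wsdl:service name="Charger_Service">
    <wsdl:port name="Charger">
      <soap:address location="http://localhost:8080/charge"/>
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>
"""

root = ET.fromstring(snippet)
service = root.find(".//{%s}service" % WSDL_NS)
address = service.find(".//{%s}address" % SOAP_NS).get("location")
print(address)  # http://localhost:8080/charge
```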
1. In the Web Service Provider configuration view, click on the Browse button.
4. In the HTTP tab, assign the HTTP Address with a value by clicking on the Extract Profile Settings
button.
5. Click OK.
import ultra.ws.example.charge.cycles;
import ultra.ws.example.charge.x1;
consume {
    // Verify that the UDR type matches the
    // UDR definition generated by the WS profile.
    if (instanceOf(input, WSCycle_charge)) {
        // Cast the input type to the WSCycle_charge UDR.
        WSCycle_charge udr = (WSCycle_charge) input;
        // Debug printouts
        debug("The ServiceType is " + udr.param.serviceType);
        debug("The id to charge is "
            + udr.param.chargingEvent.id);
        debug("The amount to charge is "
            + udr.param.chargingEvent.amount);
    }
}
The APL code first verifies that the UDRs that enter the workflow are of the WSCycle_charge type.
If so, the UDRs are cast from the AbstractWSCycle type to the WSCycle_charge type.

If the compilation fails, check that the name of the folder in which you saved the WS profile is the
same as in the path that the APL code specifies.
9. Click OK.
• Analysis_1: Creates the UDR type WSCycle_charge and routes it to the Web Service Requester
• Web Service Requester (Processing): Sends a request to a Web Service server. Once the web
server replies, Web Service Requester forwards the reply to Analysis_2.
• Analysis_2: Receives the Web Service reply. In this example we use this agent for output demon-
stration.
This port number should not be the same one that the Web Service Provider is configured
with.
2. In the Analysis_1 agent configuration view, enter the following APL code into the text pane:
import ultra.ws.example.charge.cycles;
import ultra.ws.example.charge.x1;
consume {
// Create a WSCycle_charge UDR
WSCycle_charge udr = udrCreate(WSCycle_charge);
// Create a Charge UDR as parameter
udr.param = udrCreate(Charge);
// Populate the parameter with data
udr.param.chargingEvent = udrCreate(ChargingEvent);
udr.param.chargingEvent.amount = 0.50;
udr.param.chargingEvent.id = "0123456789";
udr.param.serviceType = "SMS";
// Route the UDR
udrRoute(udr);
}
4. In the Web Service agent Configuration tab, click on the Browse button to enter the WS profile.
5. To automatically set the HTTP address click on the Extract Profile Settings button.
In this example the HTTP address is http://localhost:8080/charge. This is the same address
that the Web Service Provider is assigned with. Using the same address both in the Provider
agent as well as in the Requester agent, enables the Web Service Requester workflow to act
as a client of the Web Service Provider workflow.
6. In the Analysis_2 Configuration view, enter the following APL code into the text pane:
consume {
debug(input);
}
1. In the Workflow Monitor view, select the Debug option in the Edit menu.
The text "Debug Active (Event)" will appear at the bottom left corner of the Workflow Monitor.
3. Once both workflows are running, to establish a connection port, run the following command from
a command line view:
4. From the telnet view enter data to the Provider workflow. To trigger the Analysis agent and have
it route WSCycle_charge UDRs to the Web Service Requester agent, press ENTER repeatedly and
expect the following:
• Prior to every request, the debug event message "About to send" appears at the bottom of the
Workflow Monitor view, and the Requester then sends the request to the Web Service Provider.
• In the Web Service Provider workflow, the Analysis agent first generates a debug message with
the content of the param field, and then creates a response.
• The Web Service Requester generates the debug event "Done" and routes the WSCycle_charge
UDR to Analysis_2.
• In the debug event pane, Analysis_2 announces the contents of the WSCycle_charge UDR .
13.6.1.1. Prerequisites
The reader of this information should be familiar with:
13.6.2. Overview
The Workflow Bridge agents act as a bridge for communication between real-time workflows, or
between a batch and a real-time workflow, within the same MediationZone® system.
The Workflow Bridge agents do not use any storage server to manage the data. Data is instead stored
in memory cache when executing on the same EC, or streamed directly from one agent to another,
over TCP/IP, when executing on different ECs. This provides for efficient transfer of data, especially
from batch to real-time workflows.
The Forwarding and Collection workflows communicate by using a dedicated set of UDRs:
In the ConsumeCycleUDR there are also fields that enable broadcasting and load balancing.
Broadcasting, i.e. sending the same UDR to several different workflows in the collecting workflow
configuration, can be made to a configurable number of workflows. Load balancing enables you to
configure to which workflow each UDR should be sent, based on criteria of your choice.
Note! If any other UDR than ConsumeCycleUDR is routed to the forwarding agent, the bridge
will only support one collector, which means that it will not be possible to broadcast or load
balance.
• Each state of a Forwarding workflow is sent in a separate WorkflowState UDR to the Real-time
Collection agent. The Batch Forwarding workflow sends all states from initialize to deinitialize,
while a Real-time Forwarding workflow only sends the initialize state. The deinitialize state is sent
by the Real-time Collection workflow if the connection goes down between the Collection workflow
and a Forwarding workflow (Batch or Real-time).
For more information regarding the workflow execution states, see Section 4.1.11.6, “Workflow
Execution State”.
• User defined action UDRs can be sent from the Collection workflow to communicate actions back
to the Forwarding workflow. Refer to Section 13.6.6.4, “User Defined Action UDRs” for further
information.
In the collecting workflow, the APL has to be configured to communicate responses for the UDRs to
the Workflow Bridge Collection agent. When both the forwarding and collecting workflows are real
time workflows, only responses for WorkflowState UDRs have to be configured. However, when the
forwarding workflow is a batch workflow, responses for ConsumeCycle UDRs have to be configured
as well.
Responses for WorkflowState UDRs are always communicated back to the forwarding workflow. If
you want to communicate responses for ConsumeCycle UDRs back to the forwarding workflow as
well, the Send Reply Over Bridge option has to be selected in the Workflow Bridge profile, see
Section 13.6.3.3, “Workflow Bridge Profile Configuration” for further information.
Workflow Bridge has two essential features: Session Context and Bulk Forwarding.
Note! The SessionContext field is only writable in the InitializeCycleUDR and BeginBatchUDR
and the session context is only available in a Collection workflow.
Refer to Section 13.6.6, “Workflow Bridge UDR Types” for more information about the Workflow
Bridge UDR types.
The bulk is created by the Workflow Bridge Forwarding agent after a configured number of UDRs
has been reached, or after a configured timeout. This is specified in the Workflow Bridge profile, see
Section 13.6.3.3, “Workflow Bridge Profile Configuration” for more information.
Bulking of data can only be performed for data being sent in the ConsumeCycleUDR and not for the
states that are sent in the state specific UDRs.
Bulk forwarding is not performed when the Workflow Bridge agents are executing on the same EC.
The Workflow Bridge Collection agent requires that all Workflow State UDRs are returned before the next Workflow State UDR is forwarded into the workflow. The only exception to this is that the Collection agent accepts several outstanding ConsumeCycleUDRs at the same time. However, all ConsumeCycleUDRs must have been returned before the next type of Workflow State UDR is forwarded. If the forwarding workflow is a batch workflow, this means that all ConsumeCycleUDRs must be returned before the DrainCycleUDR is forwarded into the workflow.
For more information regarding the workflow execution states, see Section 4.1.11.6, “Workflow Execution State”.
Load balancing can be used to direct different UDRs to different workflows. Each workflow in the collection workflow configuration is assigned a LoadId by adding this field to the workflow table in the Workflow Properties, and then entering specific ids for each workflow. In the APL code in the forwarding workflow configuration, you can then determine which UDRs should be routed to which LoadId.
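As a minimal sketch of this (the routing rule is invented for illustration), APL code in the forwarding workflow could set the LoadId field of each ConsumeCycleUDR based on the content of the input:

```
consume {
    wfb.ConsumeCycleUDR ccUDR = udrCreate(wfb.ConsumeCycleUDR);
    ccUDR.Data = input;
    // Invented rule: payloads starting with "A" go to the workflow
    // with LoadId 1, everything else to the workflow with LoadId 2
    if (strStartsWith(baToStr(input), "A")) {
        ccUDR.LoadId = 1;
    } else {
        ccUDR.LoadId = 2;
    }
    udrRoute(ccUDR);
}
```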
You configure the number of workflows you want to send UDRs to in the Workflow Bridge profile.
The Workflow Bridge profile is loaded when you start a workflow that depends on it. Changes to the
profile become effective when you restart the workflow.
There is one menu item that is specific for Workflow Bridge profile configurations, and it is described below:
External References: Enables External References in a Workflow Bridge profile. This can be used for configuring Number of Collectors, Bulk Size and Bulk Timeout.
The Workflow Bridge configuration contains two tabs: General and Advanced.
The General tab is displayed by default when creating or opening a Workflow Bridge profile.
Send Reply over Bridge: Check this if the Collection workflow shall send a reply back to the Forwarding workflow each time a ConsumeCycleUDR has been received. If this is not checked, only ConsumeCycleUDRs with a wfbActionUDR will be sent back.
Note! This only applies to the ConsumeCycleUDR, since the WorkflowState UDRs are always acknowledged.
Force Serialization: The Force Serialization option is enabled by default and applies to situations where the workflows are running on the same EC. It can be disabled for a performance increase, if it can be assured that no configurations will be changed while these workflows are running.
Response Timeout (s): This is the time (in seconds) that the Workflow Bridge Forwarding agent will wait for a response to a WorkflowState UDR from the Workflow Bridge Real-time Collection agent. After the specified time, the Workflow Bridge Forwarding agent will time out and abort the workflow. The default value is "60".
Bulk Size: Bulk Size is configured if data should be bulked by the Workflow Bridge Forwarding agent before it is sent to the collection side. Configure the number of UDRs that should be bulked. The default value is "0", which means that the bulk functionality will not be used.
Bulk Timeout (ms): This is the time (in milliseconds) that the Workflow Bridge Forwarding agent will wait in case the bulk size criteria is not fulfilled. The default value is "0", which is an infinite timeout.
Number of Collectors: If you want to configure broadcasting or load balancing, i.e. if you want several different Workflow Bridge collecting workflows to be able to receive data from the forwarding workflow, you enter the number of collecting workflows you want to use in this field.
The number of collecting workflows connected to the workflow bridge must not exceed the limit set by this value. A batch forwarding workflow must be started after the specified Number of Collectors are running, or it will abort. Collecting workflows that are started after the limit has been reached will also abort.
Real-time forwarding workflows do not require that any collectors are running when they are started, and the Number of Collectors represents an upper limit only.
Note! After a Workflow Bridge profile has been changed and saved, all running workflows that
are connected to this profile must be restarted.
In the Advanced tab you can configure additional properties for optimizing the performance of the
Workflow Bridge.
One example of a property that can be configured in the Advanced tab is forwardingQueueSize, which controls the number of UDRs that can be queued by the Workflow Bridge Forwarding agents.
See the text in the Properties field for further information about the properties.
Profile This is the profile to use for communication between the workflows. For information about
how to configure a Workflow Bridge profile see, Section 13.6.3.3, “Workflow Bridge Profile
Configuration”.
The Workflow Bridge supports one Batch Forwarding workflow connected to one or several
Collection workflows.
All workflows in the same workflow configuration can use separate profiles. For this to work, the profile must be set to Default in the Workflow Table tab found in the Workflow Properties dialog. For further information on the Workflow Table tab, refer to Section 4.1.7, “Workflow Table”.
To select a profile, click on the Browse... button, select the profile to use, and then click
OK.
13.6.4.1.1.1. Emits
The agent emits data in the ConsumeCycleUDR and the WorkflowState UDRs.
13.6.4.1.1.2. Retrieves
The agent retrieves ConsumeCycleUDRs and WorkflowState UDRs from the Collection workflow.
13.6.4.1.2. Introspection
The agent consumes bytearray types and any UDRs, as configured in the profile. Please refer to
Section 13.6.3.3, “Workflow Bridge Profile Configuration” for more information.
Note! For information about the MediationZone® MIM and a list of the general MIM parameters,
see Section 2.2.10, “Meta Information Model”.
13.6.4.1.3.1. Publishes
consume {
    map<int, int> queueMap = (map<int, int>)
        mimGet("Workflow_Bridge_1", "Forwarding Queue Utilization");
    int loadId = 1;
    int queueSize = mapGet(queueMap, loadId);
    debug("Queue Utilization: " + queueSize);
    wfb.ConsumeCycleUDR ccUDR = udrCreate(wfb.ConsumeCycleUDR);
    ccUDR.Data = input;
    udrRoute(ccUDR);
}
13.6.4.1.3.2. Accesses
For information about the agent message event type, see Section 5.5.14, “Agent Event”.
Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
This message is displayed when a connection has been established with a Workflow Bridge Real-
time Collection agent.
This message is displayed each time a transaction has been finished, that is, after endBatch.
• Disconnected
Profile This is the profile to use for communication between the workflows. For information about
how to configure a Workflow Bridge profile, see Section 13.6.3.3, “Workflow Bridge Profile
Configuration”.
All workflows in the same workflow configuration can use separate profiles. For this to
work, the profile must be set to Default in the Workflow Table tab found in the Workflow
Properties dialog. For further information on the Workflow Table tab, refer to Section 4.1.7,
“Workflow Table”.
To select a profile, click on the Browse... button, select the profile to use, and then click
OK.
For information about the general MediationZone® transaction behavior, see Section 4.1.11.8,
“Transactions”.
13.6.4.2.1.1. Emits
The agent emits data in the ConsumeCycleUDR and the WorkflowState UDRs.
13.6.4.2.1.2. Retrieves
The agent retrieves ConsumeCycleUDRs and ErrorCycleUDRs from the Collection workflow.
13.6.4.2.2. Introspection
The agent consumes bytearray types and any UDRs, as configured in the profile. Refer to Section 13.6.3.3, “Workflow Bridge Profile Configuration” for more information.
Note! For information about the MediationZone® MIM and a list of the general MIM parameters,
see Section 2.2.10, “Meta Information Model”.
13.6.4.2.3.1. Publishes
consume {
    map<int, int> queueMap = (map<int, int>)
        mimGet("Workflow_Bridge_1", "Forwarding Queue Utilization");
    int loadId = 1;
    int queueSize = mapGet(queueMap, loadId);
    debug("Queue Utilization: " + queueSize);
    wfb.ConsumeCycleUDR ccUDR = udrCreate(wfb.ConsumeCycleUDR);
    ccUDR.Data = input;
    udrRoute(ccUDR);
}
13.6.4.2.3.2. Accesses
For information about the agent message event type, see Section 5.5.14, “Agent Event”.
Debug messages are dispatched in debug mode. During execution, the messages are displayed in the Workflow Monitor.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
This message is displayed when a connection has been established with a Collection agent.
To specify which host the agent should connect to, you can add a property in the executioncontext.xml file and enter the host you want the agent to connect to as its value.
Profile This is the profile to use for communication between the workflows. For information about
how to configure a Workflow Bridge profile, see Section 13.6.3.3, “Workflow Bridge Profile
Configuration”.
All workflows in the same workflow configuration can use separate profiles. For this to
work, the profile must be set to Default in the Workflow Table tab found in the Workflow
Properties dialog. For further information on the Workflow Table tab, refer to Section 4.1.7,
“Workflow Table”.
To select a profile, click on the Browse... button, select the profile to use, and then click
OK.
Port This is the default port that the collecting server will listen to for incoming requests. A valid
port value is between 1 and 65535.
If you have a collecting workflow configuration with several workflows, you have to open the Workflow Properties and set the WFB Collector - Port field to Default. Then you can enter the different ports you want to use for the workflows in the workflow table, as each one needs to listen to a separate port.
Note! If both the collection and forwarding workflows are executing on the same execution context, an ephemeral port will be used regardless of the value set in this field.
In the Batch Forwarding to Real-time Collection scenario, the Workflow Bridge Real-time Collection
agent routes the states retrieved from the Workflow Bridge Batch Forwarding agent to the Collection
workflow. These are the states:
• initialize
• beginBatch
• drain
• endBatch
• commit
• deinitialize
• cancelBatch
• rollback
The Collection workflow must handle all the states and send a reply to the batch Forwarding workflow
by returning the corresponding WorkflowState UDR. For more information regarding the states, see
Section 4.1.11.6, “Workflow Execution State”.
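A minimal sketch of the required acknowledgment pattern in the Collection workflow could look as follows; a complete version is shown in the example in Section 13.6.7.1.4.2, “Analysis”.

```
consume {
    if (instanceOf(input, wfb.WorkflowStateUDR)) {
        // Acknowledge every state change by routing the UDR back to
        // the Workflow Bridge Real-time Collection agent
        udrRoute((wfb.WorkflowStateUDR) input);
    }
}
```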
13.6.5.1.2. Introspection
Note! For information about the MediationZone® MIM and a list of the general MIM parameters,
see Section 2.2.10, “Meta Information Model”.
13.6.5.1.3.1. Publishes
13.6.5.1.3.2. Accesses
For information about the agent message event type, see Section 5.5.14, “Agent Event”.
Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
In order to enable load balancing, you have to open the Workflow Properties and set the LoadId field to Default.
A LoadId field will then be added in the workflow table. Configure separate LoadIds for each workflow and save. In the forwarding workflow you can then use these LoadIds in the APL code for directing certain UDRs to certain workflows.
In case you want to specify which host the collecting agent should connect to, you can add the following property in the executioncontext.xml file, and enter the host you want the agent to bind to as its value.
The Workflow Bridge UDR types can be viewed in the UDR Internal Format Browser in the 'wfb'
folder. To open the browser, first open an APL Editor, and, in the editing area, right-click and select
UDR Assistance.
13.6.6.1. ConsumeCycleUDR
The ConsumeCycleUDR is the UDR that the Workflow Bridge Forwarding agent populates with data and routes to the Workflow Bridge Real-time Collection agent. The ConsumeCycleUDR must always be acknowledged and sent back from an Analysis agent to the Workflow Bridge Real-time Collection agent if the ConsumeCycleUDR was initially sent by a Workflow Bridge Batch Forwarding agent, see Section 13.6.7.1.4.2, “Analysis” for an example. This is not needed if the Workflow Bridge Forwarding agent is of type Real-time.
If the Send reply over bridge setting has been configured in the profile, the ConsumeCycleUDR is
sent the whole way back to the Workflow Bridge Batch Forwarding agent. Refer to Section 13.6.3.3,
“Workflow Bridge Profile Configuration” for more information.
The ConsumeCycleUDRs can be sent in a bulk from the Workflow Bridge Forwarding agent, for a
more efficient transfer of the data. This is further described in Section 13.6.2.2, “Bulk Forwarding of
Data”.
Action (wfbActionUDR): This field is used by the user to communicate actions back to the Forwarding workflow. It can be populated with a user defined action UDR, see Section 13.6.6.4, “User Defined Action UDRs”.
AgentId (string): This field includes the agent id that is created for the Workflow Bridge Forwarding agent each time a workflow is started. The id is unique per Workflow Bridge Forwarding agent and workflow execution.
Broadcast (boolean): This field indicates whether broadcast should be enabled (true) or not (false). If broadcast is enabled, the forwarded UDRs will be sent to all the configured workflows in the collecting workflow configuration.
Data (any): This field can be populated with anything and contains the UDRs or bytearrays that are sent from the Forwarding workflow.
LoadId (int): If you have configured the number of collectors to be more than 1 in the Workflow Bridge profile, LoadIds can be used for determining how data should be distributed. Each workflow is assigned a specific LoadId, and you can then use this field in the ConsumeCycle UDR to indicate which LoadId, i.e. which workflow, the UDR should be routed to.
SessionContext (any): This field might contain data that has been populated in the InitializeCycleUDR or BeginBatchCycleUDR by the Workflow Bridge Real-time Collection agent. For more information about session context, refer to Section 13.6.2.1, “Session Context”. This field is only readable in this UDR.
The Workflow Bridge Real-time Collection agent always has to acknowledge a workflow state change by sending back the WorkflowState UDR to the Workflow Bridge Forwarding agent. For ConsumeCycleUDRs, this behavior can be controlled using the Send Reply over Bridge setting in the Workflow Bridge profile.
For more information regarding the states, see Section 4.1.11.6, “Workflow Execution State”.
The following fields are common for all of the workflow state UDRs:
AgentId (string): This field includes the agent id that is created for the Workflow Bridge Forwarding agent each time a workflow is started. The id is unique per Workflow Bridge Forwarding agent and workflow execution.
SessionContext (any): This field might contain data that has been populated in the InitializeCycleUDR or BeginBatchCycleUDR by the Workflow Bridge Real-time Collection agent. For more information about session context, refer to Section 13.6.2.1, “Session Context”. This field is only readable in this UDR.
The following field is included for all Workflow Execution State UDRs that are specific for batch
forwarding workflows:
TxnId (long): This field includes the id for the batch transaction of the batch forwarding workflow.
13.6.6.2.1. WorkflowStateUDR
The WorkflowStateUDR defines the common attributes and behaviors for any WorkflowState UDRs.
13.6.6.2.2. InitializeCycleUDR
This UDR is sent when the forwarding workflow enters the initialize execution state.
13.6.6.2.3. BeginBatchCycleUDR
This UDR is sent when the batch forwarding workflow enters the beginBatch execution state.
13.6.6.2.4. ConsumeCycleUDR
This is the UDR that contains the data that is being collected from the forwarding workflow. For more
information about ConsumeCycleUDRs, refer to Section 13.6.6.1, “ConsumeCycleUDR”.
13.6.6.2.5. DrainCycleUDR
This UDR is sent when the batch forwarding workflow enters the drain execution state.
13.6.6.2.6. EndBatchCycleUDR
This UDR is sent when the batch forwarding workflow enters the endBatch execution state.
13.6.6.2.7. CommitCycleUDR
This UDR is sent when the batch forwarding workflow enters the commit execution state.
In addition to the common UDR fields, the CommitCycleUDR also includes the following field:
IsRecovery (boolean): This field includes information on the recovery status, to make it possible to know if a rollback shall be committed.
13.6.6.2.8. DeinitializeCycleUDR
This UDR is sent when the forwarding workflow enters the deinitialize execution state.
13.6.6.2.9. CancelBatchCycleUDR
This UDR is sent when the batch forwarding workflow enters the cancelBatch execution state.
13.6.6.2.10. RollbackCycleUDR
This UDR is sent when the batch forwarding workflow enters the rollback execution state.
In addition to the common UDR fields, the RollbackCycleUDR also includes the following field:
IsRecovery (boolean): This field includes information on the recovery status, to make it possible to know if a rollback shall be committed.
13.6.6.3. ErrorCycleUDR
In the real-time to real-time case, the ErrorCycleUDR is used for returning the original UDR in case the connection between the forwarding and collection workflows is lost. ErrorCycleUDRs are also generated if the connection is not yet established due to the starting order of the workflows, or if the forwardingQueueSize is exceeded. For further information about the queue size, see Section 13.6.3.3.2, “Advanced configurations”.
Note! In Workflow Bridge real-time forwarding and collecting workflows, functions for handling the ErrorCycleUDRs have to be added. You can either route the ErrorCycleUDR back to the previous Analysis agent, if that agent contains error handling, or route it to a separate Analysis agent dedicated to error handling.
In addition to the common field AgentId, the ErrorCycleUDR also includes the following field:
OriginalUDR (WorkflowBridgeUDR): This field contains the original WorkflowBridgeUDR.
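As a sketch of a dedicated error handling Analysis agent in a real-time forwarding workflow, the code below reads the OriginalUDR field and routes the failed UDR onwards; the route name "retry" is an assumption for this sketch, not part of the product configuration.

```
consume {
    if (instanceOf(input, wfb.ErrorCycleUDR)) {
        wfb.ErrorCycleUDR errUDR = (wfb.ErrorCycleUDR) input;
        // OriginalUDR holds the UDR that could not be delivered
        debug("Delivery failed, routing the original UDR for retry");
        udrRoute(errUDR.OriginalUDR, "retry");
    }
}
```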
The Action UDR is created using the Ultra Format Definition Language (UFDL). This is the classname
you need to extend in UFDL:
com.digitalroute.workflowbridge.transport.ultra.WfbActionUDR
Example 110. An Action UDR in Ultra Format
internal WFBActionUDR :
extends_class(
"com.digitalroute.workflowbridge.transport.ultra.WfbActionUDR" ) {
};
in_map ACTION_inMap :
external( my_ext ),
internal( WFBActionUDR ),
target_internal( my_ACTION_TI ) {
automatic;
};
For further information about the Ultra Format Editor and the UFDL syntax, refer to the Ultra Format
Management User's Guide.
13.6.7. Examples
This section gives two examples of how to set up Workflow Bridge workflows, in a batch to real-time and a real-time to real-time scenario. The examples are simple and intended to be used as a base for further development.
• An Ultra format
A simple Ultra Format needs to be created both for the incoming UDRs as well as for the user defined
WfbActionUDR. For more information about the Ultra Format Editor and the UFDL syntax, refer to
the Ultra Format Management User's Guide.
internal WFBActionUDR :
extends_class(
"com.digitalroute.workflowbridge.transport.ultra.WfbActionUDR" ) {
int type;
ascii action;
};
// Decoder mapping
in_map inputMap :
external( my_input ),
target_internal( my_internal_TI ) {
automatic;
};
The profile is used to connect the two workflows. See Section 13.6.3.3, “Workflow Bridge Profile Configuration” for information on how to open the Workflow Bridge Profile editor.
• The Send reply over bridge option is not selected, which means that only responses for WorkflowStateUDRs and UDRs with an Action UDR attached to the response will be returned to the forwarding workflow.
• Force serialization is not used since there will be no configuration changes during workflow exe-
cution.
• The Workflow Bridge Real-time Collection agent must always respond to the WorkflowState UDRs.
The Response timeout (s) has been set to "60" and this means that the forwarding workflow that
is waiting for a WorkflowState UDR reply will timeout and abort (stop) after 60 seconds if no reply
has been received from the Collection workflow.
• The Bulk size has been set to "0". This means that the UDRs will be sent from the Workflow Bridge
Forwarding agent one by one, and not in a bulk. Enter the appropriate bulk size if you wish to use
bulk forwarding of UDRs.
• The Bulk timeout (ms) has been set to "0" since there will be no bulk forwarding. Enter the appro-
priate bulk timeout if you wish to use bulk forwarding of UDRs. Bulk timeout can only be specified
if the bulk functionality has been enabled in the Bulk size setting.
• The Number of Collectors is set to 1, since there will be a one-to-one connection in this example.
• Set the UDR type to my_internal_TI by clicking on the Add button. To remove a UDR type from the UDR Types list, select the UDR type and click the Remove button.
In this workflow, a Disk agent collects data that is forwarded to an Analysis agent. The data is routed
by the Decoder agent to the Workflow Bridge Forwarding agent, which in turn forwards the data in a
ConsumeCycleUDR to a Workflow Bridge Real-time Collection agent. Each time the Workflow Bridge
Batch Forwarding workflow changes state, a WorkflowState UDR is sent to the Workflow Bridge
Real-time Collection agent as well.
For more information regarding the states a workflow can have, see Section 4.1.11.6, “Workflow Ex-
ecution State”.
Since the Send reply over bridge option has not been configured, only ConsumeCycleUDRs with an
Action UDR attached are returned from the Workflow Bridge Real-time Collection agent and routed
to an Analysis agent in the Batch Forwarding workflow.
The workflow consists of a Disk Collection agent named Disk, a Decoder agent named Decoder, a
Workflow Bridge Batch Forwarding agent named Workflow_Bridge_FW and an Analysis agent
named Actions.
13.6.7.1.3.1. Disk
Disk is a Collection agent that collects data from an input file and forwards it to the Decoder agent.
Double-click on the Disk agent to display the configuration dialog for the agent:
• The agent is configured to collect data from the /home/trunk/in directory, which is stated in
the Directory field. Enter the path to the directory where the file you want to collect is located.
13.6.7.1.3.2. Decoder
The Decoder agent receives the input data from the Disk agent, translates it into UDRs and forwards them to the Workflow_Bridge_FW agent. Double-click on the Decoder agent to display the configuration dialog.
In this dialog, choose the Decoder that you defined in your Ultra Format.
13.6.7.1.3.3. Workflow_Bridge_FW
Workflow_Bridge_FW is the Workflow Bridge Batch Forwarding agent that sends data to the Workflow Bridge Real-time Collection agent. Each incoming UDR will be included in the Data field of a ConsumeCycleUDR which is sent to the real-time workflow. Double-click on Workflow_Bridge_FW to display the configuration dialog for the agent.
• The agent has been configured to use the profile that was defined in Section 13.6.7.1.2, “Define a
Profile”.
13.6.7.1.3.4. Analysis
The Analysis agent is an Analysis agent that receives the responses from the Workflow_Bridge_FW agent. Since the profile does not have Send Reply Over Bridge checked, the agent will only receive responses with an Action UDR. Double-click on the Analysis agent to display the configuration dialog.
In this dialog, the APL code for handling input data is written. In the example, there will be a debug printout of the UDRs with an Action UDR connected. Adapt the code according to your requirements. You can also see the UDR type used in the UDR Types field; in this example it is a ConsumeCycleUDR.
In this workflow, a Workflow Bridge Real-time Collection agent collects the data that has been sent
in a ConsumeCycleUDR from the Workflow Bridge Batch Forwarding agent. It also collects the
WorkflowState UDRs that inform about state changes in the batch forwarding workflow.
An Analysis agent returns all ConsumeCycleUDRs to the Workflow Bridge Real-time Collection
agent, to let the agent know when to send the DrainCycleUDR. The Analysis agent also replies to all
WorkflowState UDRs, so that the Workflow Bridge Batch Forwarding agent will know when to move
forward to the next Agent Execution State. For more information regarding the workflow execution
states, see Section 4.1.11.6, “Workflow Execution State”.
13.6.7.1.4.1. Workflow_Bridge_C
Workflow_Bridge_C is the Workflow Bridge Real-time Collection agent that receives the data that
the Workflow Bridge Batch Forwarding agent has sent over the bridge. Double-click on the Work-
flow_Bridge_C agent to display the configuration dialog for the agent.
• The agent has been configured to use the profile that was defined in Section 13.6.7.1.2, “Define a
Profile”.
• The port that the collector server will listen on for incoming requests has been set to the default value "3299". However, if the two workflows execute on the same execution context, an ephemeral port is used instead.
13.6.7.1.4.2. Analysis
The Analysis agent is the Analysis agent that receives and analyses the data originally sent from the
Workflow Bridge Batch Forwarding agent in the ConsumeCycleUDR, as well as the workflow state
information delivered in the WorkflowState UDRs.
This agent will also look for the UDR that has its Id set to 2 and create an Action UDR for this.
consume {
    if (instanceOf(input, wfb.WorkflowStateUDR)) {
        udrRoute((wfb.WorkflowStateUDR) input);
    } else if (instanceOf(input, wfb.ConsumeCycleUDR)) {
        wfb.ConsumeCycleUDR ccUDR = (wfb.ConsumeCycleUDR) input;
        // Validate the content of the incoming UDR
        WFBridge.UltraFormat.my_internal_TI myUDR =
            (WFBridge.UltraFormat.my_internal_TI) ccUDR.Data;
        if (myUDR.myId == 2) {
            // Create an action UDR
            WFBridge.UltraFormat.WFBActionUDR myAction =
                udrCreate(WFBridge.UltraFormat.WFBActionUDR);
            myAction.type = 44;
            myAction.action = "The second UDR will be returned to the WF";
            ccUDR.Action = myAction;
        }
        udrRoute((wfb.ConsumeCycleUDR) ccUDR);
    } else {
        debug(input);
    }
}
In this example, a reply is sent back to the Workflow_Bridge_Coll agent, by routing back the Work-
flowStateUDR and ConsumeCycleUDRs. Adapt the code according to your requirements.
Note! Since WorkflowState UDRs have to be routed back to the Workflow Bridge Collection agent in order to be returned to the forwarding workflow, a "response" route has to be added from the Analysis agent to the Workflow Bridge Collection agent.
You can see the UDR types used in the UDR Types field, i.e. WorkflowStateUDR and ConsumeCycleUDR.
• An Ultra Format
A simple Ultra Format needs to be created in order to forward the incoming data and enable the Col-
lection workflows to populate it with more information. For more information about the Ultra Format
Editor and the UFDL syntax, refer to the Ultra Format Management User's Guide.
internal myInternal {
string inputValue;
string executingWF;
};
The profile is used to connect the forwarding workflow to the three collection workflows. See Section 13.6.3.3, “Workflow Bridge Profile Configuration” for information on how to open the Workflow Bridge Profile editor.
• The Send Reply Over Bridge option is selected, which means that all ConsumeCycleUDRs will be returned to the Workflow Bridge forwarding agent.
• Force serialization is not used since there will be no configuration changes during workflow exe-
cution.
• The Workflow Bridge Real-time Collection agent must always respond to the WorkflowState UDRs.
The Response Timeout (s) has been set to "60" and this means that the Workflow Bridge Real-time
Forwarding agent that is waiting for a WorkflowState UDR reply will timeout and abort (stop) after
60 seconds if no reply has been received from the Real-time Collection workflow.
Enter the appropriate timeout value to set the timeout for the Workflow Bridge Real-time Forwarding
agent.
• The Bulk Size has been set to "0". This means that the UDRs will be sent from the Workflow Bridge
Real-time Forwarding agent one by one, and not in a bulk. Enter the appropriate bulk size if you
wish to use bulk forwarding of UDRs.
• The Bulk Timeout (ms) has been set to "0" since there will be no bulk forwarding. Enter the appro-
priate bulk timeout if you wish to use bulk forwarding of UDRs. Bulk timeout can only be specified
if the bulk functionality has been enabled in the Bulk size setting.
• Since the UDRs in this example will be split between three different workflows, the Number of
Collectors has been set to "3".
In this workflow, a TCP/IP agent collects data that is forwarded to an Analysis agent. The Analysis
agent will define the receiving Real-time Collection workflow before the ConsumeCycleUDR is sent
to the Workflow Bridge Forwarding agent. The Workflow Bridge Forwarding agent will distribute
the UDRs to the correct collection workflow and forward the returning ConsumeCycleUDR to another
Analysis agent for further execution.
The workflow consists of a TCP/IP agent, an Analysis agent named Analysis, a Workflow Bridge
Real-time Forwarding agent named Workflow_Bridge_FW and a second Analysis agent named
Result.
13.6.7.2.3.1. TCP/IP
TCP/IP is a Collection agent that collects data using the standard TCP/IP protocol and forwards it to
the Analysis agent.
Double-click on the TCP_IP agent to display the configuration dialog for the agent:
• Host has been set to "10.46.20.136". This is the IP address or hostname to which the TCP/IP agent
will bind.
• Port has been set to "3210". This is the port number from which the data is received.
• Allow Multiple Connections has been selected and Number of Connections Allowed has been
set to "2". This is the number of TCP/IP connections that are allowed simultaneously.
13.6.7.2.3.2. Analysis
The Analysis agent is an Analysis agent that receives the input data from the TCP/IP agent. It defines
which Real-time Collection workflow should be chosen and forwards the ConsumeCycleUDR to the
Workflow_Bridge_FWD agent. Double-click on the Analysis agent to display the configuration
dialog.
consume {
    wfb.ConsumeCycleUDR ccUDR = udrCreate(wfb.ConsumeCycleUDR);
    WFBridge.myFormat.myInternal data =
        udrCreate(WFBridge.myFormat.myInternal);
    data.inputValue = baToStr(input);
    debug("First character is: " +
        strSubstring(data.inputValue, 0, 1));
    if (strStartsWith(data.inputValue, "1") ||
        strStartsWith(data.inputValue, "2")) {
        int wfId;
        strToInt(wfId, strSubstring(data.inputValue, 0, 1));
        ccUDR.LoadId = wfId;
    } else {
        ccUDR.LoadId = 3;
    }
    ccUDR.Data = data;
    udrRoute(ccUDR);
}
In this dialog, the APL code for handling input data is written. In the example, the incoming data is
analyzed, and depending on its first character, the receiving Real-time Collection workflow is chosen
by setting the LoadId in the ConsumeCycleUDR, which is then sent to the Workflow_Bridge_FWD
agent. Adapt the code according to your requirements.
13.6.7.2.3.3. Workflow_Bridge_FWD
Workflow_Bridge_FWD is the Workflow Bridge Real-time Forwarding agent that sends data to the
Workflow Bridge Real-time Collection agent. Double-click on Workflow_Bridge_FWD to display
the configuration dialog for the agent.
• The agent has been configured to use the profile that was defined in Section 13.6.7.2.2, “Define a
Profile”.
13.6.7.2.3.4. Result
The Result agent is an Analysis agent that receives the returning ConsumeCycleUDRs and potential
ErrorCycleUDRs from the Workflow_Bridge_FWD agent. Double-click on the Analysis agent to
display the configuration dialog.
consume {
    if (instanceOf(input, wfb.ErrorCycleUDR)) {
        debug("Something went wrong");
    } else if (instanceOf(input, wfb.ConsumeCycleUDR)) {
        wfb.ConsumeCycleUDR ccUDR = (wfb.ConsumeCycleUDR)input;
        WFBridge.myFormat.myInternal data =
            (WFBridge.myFormat.myInternal)ccUDR.Data;
        string msg = ("Value " + data.inputValue +
            " was executed by " + data.executingWF);
        debug(msg);
    }
}
In this dialog, the APL code for further handling of the UDRs is written. In the example, only simple
debug messages are used as output. Adapt the code according to your requirements.
In this workflow, a Workflow Bridge Real-time Collection agent collects the data that has been sent
in a ConsumeCycleUDR from the Workflow Bridge Real-time Forwarding agent and returns an updated
ConsumeCycleUDR.
13.6.7.2.4.1. Workflow_Bridge_Coll
Workflow_Bridge_Coll is the Workflow Bridge Real-time Collection agent that receives the data
that the Workflow Bridge Real-time Forwarding agent has sent over the bridge. Double-click on the
Workflow_Bridge_Coll agent to display the configuration dialog for the agent.
• The agent has been configured to use the profile that was defined in Section 13.6.7.2.2, “Define a
Profile”.
• The default port that the collector server will listen on for incoming requests has been set to default
value "3299".
13.6.7.2.4.2. Analysis
The Analysis agent receives and analyzes the data originally sent from the Workflow Bridge Real-time
Forwarding agent in the ConsumeCycleUDR, as well as the workflow state information delivered in
the WorkflowStateUDR.
consume {
    if (instanceOf(input, wfb.WorkflowStateUDR)) {
        udrRoute((wfb.WorkflowStateUDR)input, "response");
    } else if (instanceOf(input, wfb.ConsumeCycleUDR)) {
        wfb.ConsumeCycleUDR ccUDR = (wfb.ConsumeCycleUDR)input;
        WFBridge.myFormat.myInternal data =
            (WFBridge.myFormat.myInternal)ccUDR.Data;
        debug("Incoming data: " + data.inputValue);
        data.executingWF =
            (string)mimGet("Workflow", "Workflow Name");
        ccUDR.Data = data;
        udrRoute(ccUDR, "response");
    } else {
        debug(input);
    }
}
In this example, each ConsumeCycleUDR populates the data field executingWF with the name
of the executing workflow. WorkflowStateUDRs are also routed back. Adapt the code according to
your requirements.
Since this example load balances between three workflows, two additional workflows are added in
the workflow table.
Right-click in the workflow template and choose Workflow Properties to display the Workflow
Properties dialog.
• Workflow_Bridge_Coll - WEB_Collector - loadID has Per Workflow set, which means that the
value must be specified in the Workflow Table.
• Number of Workflows to Add has been set to "2", since one workflow already exists and the example
needs two additional ones.
The Workflow Table will contain three workflows that all communicate with the Real-time forwarding
workflow.
Populate the Workflow Table with the correct settings for each workflow:
• Each workflow needs a unique port for communication with the Workflow_Bridge_FWD agent.
• The loadID needs to correspond with the APL code and should be "1", "2" and "3" in this example.
The MediationZone® Database agent is supported for use only with the following databases:
• Oracle
• Sybase
Unless specified otherwise, Oracle is the MediationZone® standard and default database.
14.1.1.1. Prerequisites
The reader of this information should be familiar with:
When the workflow is executed, the agent creates an SQL query based on the user configuration
and retrieves all rows matching the statement. For each row, a UDR is created and populated according
to the assignments in the configuration window.
The agent uses and requires a transaction ID column to support rollback functionality. Additionally,
depending on the configuration, the agent deletes the data in the table after it has been inserted into the
workflow. When all the matching data has been successfully processed, the agent stops and awaits
the next scheduled or manually initiated activation.
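The collect-tag-clean cycle described above can be illustrated with a small conceptual sketch. This is not MediationZone® code: plain Python with SQLite stands in for the agent and an Oracle/Sybase source, and all table and column names are invented.

```python
import sqlite3

# Hypothetical working table; txn_id plays the role of the Transaction ID column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cdr (id INTEGER, caller TEXT, txn_id INTEGER NOT NULL)")
con.executemany("INSERT INTO cdr VALUES (?, ?, 0)", [(1, "A"), (2, "B")])

def collect(con, txn_id, remove=True):
    # Tag collectable rows (txn_id = 0) with this batch's Transaction ID.
    con.execute("UPDATE cdr SET txn_id = ? WHERE txn_id = 0", (txn_id,))
    rows = con.execute(
        "SELECT id, caller FROM cdr WHERE txn_id = ?", (txn_id,)).fetchall()
    if remove:   # "Remove" in After Collection
        con.execute("DELETE FROM cdr WHERE txn_id = ?", (txn_id,))
    else:        # "Mark as Collected": reserved Transaction ID value -1
        con.execute("UPDATE cdr SET txn_id = -1 WHERE txn_id = ?", (txn_id,))
    con.commit()
    return rows

print(collect(con, 187))  # -> [(1, 'A'), (2, 'B')]
```

Because the Transaction ID is written before the rows are handed to the workflow, a failed batch can be rolled back by resetting or deleting exactly the rows that carry that ID.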
14.1.2.1. Configuration
The Database Collection agent configuration window is displayed when a database agent in a workflow
is double-clicked, or right-clicked and Configuration... is selected.
The Source tab contains configurations related to the placement and handling of the source database
table and its data, as well as the UDR type to be created and populated by the agent.
Refresh must be selected if changes have been made in the customer database.
This will update the presented information in the Source tab.
The Database Collection agent does not support Fast Connection Failover
(FCF) used when using an Oracle RAC enabled database for the database
agent.
Use Default Database Schema Check this to use the default database schema for the chosen database and user.
This is not applicable for all database types. Use Default Database Schema
is available for selection only when accessing Oracle databases.
Tables within the default schema will be listed without schema prefix.
Table Name Name of the working table in the selected Database, in which the data to be
collected resides. The list is populated each time a new Database is selected.
For further information and an example of a working table, see Section 14.1.4.3.1,
“Working Table”.
Transaction ID column Name of the column in the selected Table, which is utilized for the transaction
ID. The list is populated each time a Table Name is selected. The column must
be of the data type number, with at least twelve digits.
Remove If enabled, this option will remove the collected data rows from the working
table.
Mark as Collected If enabled, this option will assign the value -1 to the Transaction ID column
for all the collected rows.
Run SP If enabled, this option executes a user defined stored procedure that is responsible
for the handling, most often removal, of the collected data.
It is important that this procedure actually deletes the data or sets the
Transaction ID to -1, to avoid the data being recollected.
For further information and an example of such a stored procedure, see Section 14.1.4.3.2,
“After Collection Stored Procedure”.
Ignore Select to have the collected data remain in the table even after collection. Note
that while the data state remains unchanged after collection, the Transaction ID
value is updated.
By keeping the data in the table you can collect it repeatedly while designing
and testing a workflow, for example.
The Assignment tab contains the mapping of column values to UDR fields. The content and use of
this tab is described in detail in Section 14.1.4.2.1, “Assignments”.
If the Source tab is correctly configured and the Assignment tab is selected, the table will automatically
be populated, as if Refresh was clicked. If assignments already exist in the Assignment tab, then
Refresh must be manually clicked for the assignments to be updated with the configurations in the
Source tab.
Potential changes in the database table will not be visible until the Refresh button for the data-
base, in the Source tab, has been clicked.
Only the value types UDR Field, To UDR and NULL, described in Section 14.1.4.2.2, “Value Types”,
are available for selection.
In the Condition tab, query constraints may be added to limit the selection of data. The statement must
follow standard SQL WHERE-clause syntax, except for the initial where keyword and the final
semicolon (;), which are automatically added to the entered condition statement. It is, for instance,
possible to include an order by clause to get the rows sorted.
The condition statement may contain dynamic parameters, represented by question marks, that at
run-time will be replaced by a value. If the text area contains question marks, Assign Parameters...
must be selected to be able to assign values to these parameters. The assignments are made in the
Parameter Editor dialog.
Figure 467. Database Collection agent configuration window, Parameter Editor window.
In this dialog each parameter, represented as a question mark in the condition statement, appears as
one row. The value types available are MIM Entry and Constant. Since constant values can also be
given directly in the condition statement, MIM Entry is most likely to be used here.
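To illustrate the idea, the following sketch (invented names; plain Python with SQLite, not the agent's actual generated SQL) shows a condition statement entered without the leading where and the final semicolon, with a question-mark parameter assigned a value at run-time and an order by clause for sorting:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cdr (id INTEGER, status TEXT, txn_id INTEGER NOT NULL)")
con.executemany("INSERT INTO cdr VALUES (?, ?, 0)",
                [(2, "NEW"), (1, "NEW"), (3, "DONE")])

# The condition as it would be typed into the Condition tab:
condition = "status = ? order by id"           # no 'where', no ';'

# The agent conceptually prepends the generated SELECT and the 'where' keyword.
query = "SELECT id FROM cdr WHERE " + condition
rows = con.execute(query, ("NEW",)).fetchall()  # '?' assigned in the Parameter Editor
print(rows)  # -> [(1,), (2,)]
```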
The Advanced tab contains a setting for performance tuning and allows viewing of the generated SQL
statement, based on the configuration in the Source, Assignment and Condition tabs.
Commit Window Size The number of UDRs (rows) to be removed between each database commit
command. This value is used to tune the performance. If tables are small and
contain no Binary Objects, the value may be set higher than the default. Default
is 1000. The window size can be set to any value between 1 and 60000, where 1
means that a commit is performed after each UDR, and 60000 means that a
commit is performed after 60000 UDRs.
In order for the statement to appear, the Source and Assignment tabs have to
be properly configured. If not, information about the first detected missing or
erroneous setting is displayed.
2. The pending transaction table is queried for all pending Transaction IDs, to be compared with the
transaction IDs in the working table, from which the agent will collect.
3. The SQL query is built and executed, and all matching rows are collected. In addition to the user
defined condition, the agent adds some conditions to the query, to ensure that pending data, cancelled
data and data marked as collected is not collected.
4. For each row that has been successfully converted to a UDR, the agent updates its Transaction ID
column to the Transaction ID retrieved in step 1.
5. When all rows matching the query have been successfully collected, the After Collection configuration
in the Source tab is used.
a. If Remove, all rows with the given Transaction ID are removed in batches of the size configured
as the Commit Window Size, in the Advanced tab.
b. If Mark as Collected, all rows with the given Transaction ID are updated with the reserved
Transaction ID value -1.
c. If Run SP, the user defined stored procedure is executed. For further information, see
Section 14.1.4.3.2, “After Collection Stored Procedure”.
14.1.2.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch Emitted after the SQL select statement execution.
End Batch Emitted after the SQL select statement execution, when all possible matching rows
have been successfully inserted as UDRs in the workflow.
If the SQL select statement does not return any data, Begin Batch and End Batch will not be emitted,
not even if Produce Empty Files is selected in a Forwarding Disk agent.
14.1.2.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Cancel Batch All rows with the current Transaction ID are updated with the reserved Transaction
ID -2. If these rows are to be recollected, the Transaction ID column must first be
set to 0 (zero). If set to NULL, the rows cannot be collected.
The database row that issued the Cancel Batch request is written to the System Log.
Hint End Batch An End Batch call will be issued, causing the original batch returned by the SQL
query to be split at the current UDR. The database commit command is executed,
followed by a new select statement to fetch the remaining UDRs from the table.
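The Cancel Batch reset described above can be sketched as follows. This is a hypothetical illustration (invented table and column names): rows cancelled with the reserved Transaction ID -2 are made collectable again by setting the column back to 0.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cdr (id INTEGER, txn_id INTEGER NOT NULL)")
# Rows 1 and 2 were cancelled (-2); row 3 is marked as collected (-1).
con.executemany("INSERT INTO cdr VALUES (?, ?)", [(1, -2), (2, -2), (3, -1)])

# Make the cancelled rows collectable again:
con.execute("UPDATE cdr SET txn_id = 0 WHERE txn_id = -2")
con.commit()

collectable = con.execute(
    "SELECT id FROM cdr WHERE txn_id = 0 ORDER BY id").fetchall()
print(collectable)  # -> [(1,), (2,)]
```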
14.1.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
The agent produces the UDR types selected from the UDR Type list.
14.1.2.4.1. Publishes
Database is of the string type and is defined as a global MIM context type.
Table This MIM parameter contains the name of the working table the agent is
collecting from.
Table is of the string type and is defined as a global MIM context type.
Source Filename This MIM parameter contains the name of the currently processed file, as defined
at the source.
14.1.2.4.2. Accesses
The agent accesses MIM resources if MIM parameter assignments are set in the Parameter Editor
in the Condition tab.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported, along with the name of the working table, when all rows are collected from it.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
• Start collecting
Indicates that possible cleanup procedures are finalized and that the actual collection begins.
Reported, along with the name of the working table, if no rows have been selected for collection.
Reported, along with a list of Transaction IDs, if the constructed SQL select statement finds any
pending Transaction IDs in the pending transaction table. Rows marked with these transaction IDs
will be excluded by the query.
Reported when collected rows are removed after collection, if Remove is selected in After Collection.
Subevent to the Deleting collected data event, stating the number of rows removed by
each SQL commit command. The maximum number depends on the Commit Window Size.
Reported when collected rows are marked, if Mark as Collected is selected in After Collection.
Subevent to the Marking collected data event, stating the number of rows marked as
collected by each SQL commit command. The maximum number depends on the Commit Window
Size.
Reported, along with the name of the stored procedure, when it is called after collection, if Run SP
is selected in After Collection.
The agent does not only map and forward the data. A special column in the target table is also assigned
a unique Transaction ID, generated for each batch. In relation to this, a pending transaction table is
utilized to indicate that a batch is open. The Database Collection agent also utilizes this table to prevent
problems when collecting data from the target table.
14.1.3.1. Configuration
The Database Forwarding agent configuration window is displayed when the database agent in a
workflow is double-clicked, or right-clicked and Configuration... is selected.
The Target tab contains configurations related to the target database table and the UDR Type that will
populate it with data.
Select Refresh if changes have been made in the customer database, to update the
presented information in the Target tab.
The Database Forwarding agent does not support Fast Connection Failover
(FCF) used when using an Oracle RAC enabled database for the database
agent.
Use Default Database Schema Check this to use the default database schema for the chosen database and user.
This is not applicable for all database types. Use Default Database Schema
is available for selection only when accessing Oracle databases.
Tables within the default schema will be listed without schema prefix.
Access Type Determines if the insertion of data is to be performed directly into the target table,
or via a stored procedure.
Table Name or SP Name Depending on the selected Access Type, the target database table name, or the
stored procedure name, is selected. The list is populated each time a new Database
or Access Type is selected.
For further information and an example of a working table, see Section 14.1.4.3.1,
“Working Table”. For further information about the stored procedure, see
Section 14.1.4.3.3, “Database Forwarding Target Stored Procedure”.
Transaction ID column Name of the column in the selected table, or the parameter from the selected stored
procedure, which is utilized for the Transaction ID. The list is populated each time
a Table Name or SP Name is selected.
The column must be of the data type number, with at least twelve digits.
Cleanup SP If the selected Access Type is Stored Procedure, the agent does not automatically
clean up the target table, in case of a workflow abortion (Cancel Batch). If that is
the case, the customer must supply a stored procedure that manages the clean up.
The list is populated each time a new Database is selected.
For further information and an example of a Cleanup Stored Procedure, see
Section 14.1.4.3.4, “Cleanup Stored Procedure”.
SP Target Table Name of the target table for the stored procedure. This field is only enabled if the
Access Type is Stored Procedure. The list is populated each time a new Database
is selected.
If this agent is chained with a Database collection agent in another workflow, both
agents need to be aware of the mutual table. In the collection agent, a table to collect
from is always selected. However, in the forwarding agent, it is possible to select
the update of the table to be done via a stored procedure. If that is the case, the target
table for the stored procedure must be selected here. For further information, see
Section 14.1.4.1.1, “Pending Transaction Table”.
The correct name of the SP Target Table must be selected, or else a Database
collection agent will be able to collect pending data that is not supposed to
be collected. This may cause data duplication.
Run SP If enabled, this option causes a user defined stored procedure to be called when the
forwarding process terminates. It will then receive the transaction ID for the
forwarded rows as input.
This option is used for transaction safety when the table is read by another system,
to ensure that no temporary rows are read. Rows are classified as temporary until End
Batch is reached. In case of a crash before End Batch is reached, the workflow
needs to be restarted for the temporary rows to be expunged.
MediationZone® specific database tables from the Platform database must never be utilized as
targets for output. This may cause severe damage to the system in terms of data corruption that
in turn may make the system unusable.
The Assignment tab contains the assignment of values to each column or stored procedure parameter.
The content and use of this tab is described further in Section 14.1.4.2.1, “Assignments”.
The Column Name column does not necessarily contain column names. If Stored Procedure is selected
as the Access Type, this column will hold the names of all incoming parameters that the stored procedure
expects.
If the Target tab is correctly configured and the Assignment tab is selected, the table will automatically
be populated, as if Refresh was clicked. If assignments already exist in the Assignment tab, then
Refresh must be manually selected, for the assignments to be updated with the configurations in the
Target tab.
Potential changes in the database table will not be visible until the Refresh for the database, in
the Target tab, has been selected.
All Value Types, described in Section 14.1.4.2.2, “Value Types”, except for To UDR, are available
for selection.
When using Function as Value Type, it is not allowed to use question marks embedded in
strings. MediationZone® will interpret a question mark as a parameter.
The Advanced tab contains a setting for performance tuning and viewing the generated SQL statement,
based on the configuration in the Target and Assignment tabs.
Commit Window Size The number of UDRs (rows) to be inserted or removed between each database
commit command. This value may be used to tune the performance. If tables are
small and contain no Binary Objects, the value may be set higher than the default.
Default is 1000. The window size can be set to any value between 1 and 60000,
where 1 means that a commit is performed after each UDR, and 60000 means that
a commit is performed after 60000 UDRs.
Rows are inserted for each UDR that is fed to the agent. All UDRs are stored in
memory between each database commit command, to enable rollback. Rows are
removed at the next workflow startup in case of a crash recovery.
General SQL Statement In this window, the SQL statement that will be used to populate the database is
shown. This field cannot be edited; however, it is useful for debugging purposes
or simply out of interest.
In order for the statement to appear, the Target and Assignment tabs have to be
properly configured; otherwise, information about the first detected missing or
erroneous setting is displayed.
A. To make sure that inserted (distributed) rows are removed in case the batch is cancelled, in order
to avoid duplicate rows. To handle this, the agent inserts its batch Transaction ID in the assigned
Transaction ID column. If the batch is cancelled, all rows matching the batch Transaction ID will
be removed again.
If a stored procedure is used to populate the table, the configured Cleanup SP must be able to do
the same, or something similar, to avoid duplicates. For further information and an example of a
cleanup stored procedure, see Section 14.1.4.3.4, “Cleanup Stored Procedure”.
B. To make sure that a potential Database Collection agent does not collect rows from the target table,
before the current batch is closed. To handle this, the agent populates a pending transaction table
with the current Transaction ID, database and table name in the beginning of the batch and removes
the entry in the end of the batch. For a detailed description of this behavior, see Section 14.1.4.1,
“Inter-Workflow Communication, Using Database Agents”.
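Both safeguards can be sketched together in a conceptual illustration (plain Python with SQLite and invented names, not MediationZone® internals): the batch Transaction ID is written into the target rows, a pending entry is registered at Begin Batch and removed at End Batch, and a cancel deletes the batch's rows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER, txn_id INTEGER NOT NULL)")
con.execute("CREATE TABLE pending (db TEXT, tbl TEXT, txn_id INTEGER)")

def begin_batch(txn_id):
    # Safeguard B: register the open batch in the pending transaction table.
    con.execute("INSERT INTO pending VALUES ('mydb', 'target', ?)", (txn_id,))

def end_batch(txn_id):
    con.execute("DELETE FROM pending WHERE txn_id = ?", (txn_id,))

def cancel_batch(txn_id):
    # Safeguard A: remove the rows distributed in this batch (avoids duplicates).
    con.execute("DELETE FROM target WHERE txn_id = ?", (txn_id,))
    con.execute("DELETE FROM pending WHERE txn_id = ?", (txn_id,))

begin_batch(187)
con.executemany("INSERT INTO target VALUES (?, 187)", [(1,), (2,)])
cancel_batch(187)
print(con.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # -> 0
```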
14.1.3.2.1. Emits
None.
14.1.3.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Begin Batch Retrieves a Transaction ID and inserts an entry in the pending transaction table.
End Batch Deletes the pending Transaction ID row.
Cancel Batch Removes the distributed rows with the current Transaction ID or calls the configured
Cleanup SP. The pending Transaction ID row is deleted.
14.1.3.3. Introspection
The introspection is the type of data an agent expects and delivers.
14.1.3.4.1. Publishes
Database is of the string type and is defined as a global MIM context type.
Table This MIM parameter contains the name of the working table or the stored
procedure the agent is distributing to.
Table is of the string type and is defined as a global MIM context type.
14.1.3.4.2. Accesses
The agent accesses various resources if MIM parameter or function assignments are made in the Assignment tab.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported when a stored procedure starts running, if Run SP is selected in the Database Forwarding
Target tab.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
Reported when the agent receives a Cancel Batch or when recovering after system abortion.
Subevent to the Rollback transaction data event, stating the number of rows removed
by each SQL commit command. The maximum number depends on the Commit Window Size.
14.1.4. General
14.1.4.1. Inter-Workflow Communication, Using Database Agents
Data may propagate between workflows or MediationZone® systems by combining a Database
forwarding agent with a Database collection agent, where the exchange point is a mutual database table.
When using the same table, the collection agent must make sure that it does not collect data that the
forwarding agent is simultaneously inserting as part of its current batch.
Transfer of UDRs between workflows is ideally handled using the Inter Workflow agents. The
Database agent approach is useful when the content of the UDRs needs to be changed. Another
use is passing on MIM values while merging batches at the same time. In the Inter
Workflow agent case, only the MIM values for the first (Header MIMs) and last batch (Trailer
and Batch MIMs) are considered. Using the Database agents, MIM values may be mapped into
database columns.
The MediationZone® database hosts a table where pending transactions are registered. A pending
transaction is an ongoing population of a table by a Database Forwarding agent. The pending transaction
lasts from a Begin Batch to an End Batch. The purpose of this table is for Database Collection
agents to avoid collecting pending data from a table that a Database Forwarding agent is currently
distributing to.
The pending transaction table holds database names and table names. Thus, before a collection session
starts, the collector evaluates if there are any pending Transaction IDs registered for the source database
and table. If there are, rows matching the Transaction IDs will be excluded.
In the following figure, the Database Collection agent will exclude all rows with transaction ID 187.
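Conceptually, the exclusion works like the following sketch (invented table names; the real query is constructed by the agent): rows whose Transaction ID is registered as pending, here 187, are left out of the selection.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE working (id INTEGER, txn_id INTEGER NOT NULL)")
con.execute("CREATE TABLE pending (db TEXT, tbl TEXT, txn_id INTEGER)")
con.executemany("INSERT INTO working VALUES (?, ?)", [(1, 0), (2, 187), (3, 0)])
con.execute("INSERT INTO pending VALUES ('mydb', 'working', 187)")

# Simplified: exclude rows whose Transaction ID is registered as pending.
# (The real agent also excludes rows marked -1/-2 and applies the user condition.)
rows = con.execute("""
    SELECT id FROM working
    WHERE txn_id NOT IN (SELECT txn_id FROM pending
                         WHERE db = 'mydb' AND tbl = 'working')
    ORDER BY id
""").fetchall()
print(rows)  # -> [(1,), (3,)]
```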
A Database Forwarding agent may be configured to target a stored procedure, instead of a table directly.
In such cases the user must specifically select the table that the stored procedure will populate (SP
Target Table). The reason for that is that the pending transaction table must contain the table name,
not the SP name, so that the selected table name in the Database Collection agent can be matched.
All MediationZone® UDRs have a special field named Storable. This field contains the complete
UDR description and all its data. If UDRs with many fields or a complex structure are to be exchanged,
it may be suitable to store the content of the Storable field in the database. That way, the table
only needs one column. The database type of that column must be RAW, LONG RAW or
BLOB.
The data capacity of the column types RAW, LONG RAW and BLOB differs. Consult the
database documentation. For performance reasons it is advised to use the smallest type possible
that fits the UDR content.
When configuring the Database Forwarding agent, the Storable field from the UDR is assigned to
the table column in a straightforward fashion. However, when collecting that type of data, the column
assignment must not be made to the Storable field. Instead, To UDR is selected in the Value Type
field.
When the Database Collection agent detects a mapping of type To UDR, the selected UDR type is not
consulted for what UDR type to create. The information about the UDR type is found in the data
of the column itself. Thus, if the UDR stored in the column is of another type than the one selected in
the Source tab, the type distributed by the Database Collection agent is the type actually found.
14.1.4.2. Configuration
14.1.4.2.1. Assignments
The Database agents are designed to either collect data from a database column and assign it to a UDR
field, or vice versa. In their configuration they share the Assignment tab, where these mappings are
configured. Due to the resemblance this configuration is described here.
Refresh Updates the table with all the columns or parameters from the selected table or stored
procedure (Database Forwarding agent, only).
Potential changes in the database table will not be visible until Refresh for the
database in the Source tab, has been selected.
If rows already exist in the table, the refresh operation will preserve the configuration
for all rows with a corresponding column or parameter name. Thus, if a table has been
extended with a new column, the old column configurations will be left untouched and
the new column will appear when Refresh is selected.
The value type on each new column that appears in the table is automatically set to
UDR Field.
Auto assignment:
All rows with no value assigned and with a value type of UDR Field will be targeted
for auto assignment at the end of the refresh process. If the selected UDR type contains
a field whose name matches the column name, the field will be automatically assigned
in the Value column. Matching is not case-sensitive and is done after stripping all
characters except a-z and 0-9 from both the column and field names.
Column Name Displays a list of all columns or stored procedure parameters (Database Forwarding
agent, only) for the selected table or stored procedure, except the Transaction ID
column.
Column Type Displays the data type for each column as declared in the database table. If the column
does not accept NULL this is displayed as: (NOT NULL).
Note! If using Oracle and assigning a value of type bigint, the column type
VARCHAR should be used. Setting a full range of the bigint value type could
otherwise lead to a wrong value being inserted, due to a limitation in the JDBC
interface.
Value Type Allows the user to select what type of value to be assigned to the column, or vice versa.
For further information, see Section 14.1.4.2.2, “Value Types”.
Value The value to be assigned to the column, or vice versa. The technique of selecting a
value depends on the selected Value Type.
Note! It is important that the data type of the selected value corresponds to the
data type of the column. Most incompatibilities will automatically be detected,
however, there are situations where validation is not possible.
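The name matching used by the auto assignment (described for Refresh above) can be sketched as a small normalization function. This illustrates the stated rule only, not the product's implementation; the example names are invented:

```python
import re

def normalize(name: str) -> str:
    # Not case-sensitive: lower-case, then strip everything except a-z and 0-9.
    return re.sub(r"[^a-z0-9]", "", name.lower())

# A column named CALLER_ID would auto-assign to a UDR field named callerId:
print(normalize("CALLER_ID") == normalize("callerId"))  # -> True
```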
The Database agents offer six different types of values that may be assigned to a column, or vice versa.
Depending on the agent, not all value types are applicable and will therefore not be available in the
list.
UDR Field If selected, a UDR browser will be launched when the corresponding Value cell is selected.
When a UDR field has been selected in the browser it will appear in the Value cell.
To save the user from launching the UDR browser for every cell to be assigned,
the browser window may be kept on display. When a UDR field is selected and
Apply is selected or if a UDR field is double-clicked, the field will go into the
Value cell of the selected row, provided that this row has a value type of UDR
Field.
The same rule applies when OK is selected in the browser, however the browser
will be dismissed. It is possible to change target (Value cell) by selecting the desired
row in the Assignment tab in the configuration window, while still keeping the
UDR browser window open.
Whether data types of the selected UDR and the database column are compatible or not,
is validated when the configuration dialog is confirmed.
MIM Entry Only applicable for the Database Forwarding agent.
If selected, a MIM browser will be launched when the corresponding Value cell is clicked.
When a MIM resource has been selected in the browser it will appear in the Value cell.
The previous Note for the UDR Field applies to this browser as well.
Whether data types of the selected UDR and the database column are compatible or not,
is validated when the configuration window is confirmed.
Constant Only applicable for the Database Forwarding agent.
If selected, a text entry field will be available in the Value cell where any constant to be
assigned to the column may be entered. The agent automatically appends possible quotes
needed in the SQL statement, based on the data type of the column.
Function Only applicable for the Database Forwarding agent.
If selected, a text entry field will be available in the Value cell where any database related
function to be called may be entered. If the function takes parameters, these must be
marked as question marks. Selecting a cell containing question marks will display the
Function Editor window where each question mark is represented by a row.
The selection of parameter values follows the same procedure as the assignment of
column values; however, Constant, UDR Field and MIM Entry are the only available
value types.
If constants are entered in the Function Editor, they must be quoted correctly, since
the agent has no way of knowing their data types.
NULL If selected, no value may be entered. In Database Collection agents, NULL must be
selected for all columns whose values are not mapped into a UDR Field. In Database
Forwarding agents, NULL must be selected for columns populated with a NULL value or
columns that, when inserted, will be populated by internal database triggers.
From UDR Only applicable for the Database Forwarding agent.
Select this value type if a complete UDR is to be stored in a binary column, to later be collected
by a Database Collection agent. The Database Forwarding agent must populate the column
from the special field Storable, available in all UDR types. This is only applicable for
the column types RAW, LONG RAW and BLOB.
To UDR Only applicable for the Database Collection agent.
The following list holds information to take into consideration when creating the database table that
a Database Collection agent collects from, or a Database Forwarding agent distributes to.
• The table must have a Transaction ID column, dedicated to the Database agent's internal use. The
column can be named arbitrarily; however, it must be numeric with at least twelve digits and must
not allow NULL.
• Reading from or writing to columns defined as BLOB will have a negative impact on performance
for both Database agents.
• Entries with Transaction ID column set to -1 (Mark as Collected) or -2 (Cancel Batch) must be
attended to manually at regular intervals.
The following example shows a working table with a Transaction ID column named txn_id.
Example 115.
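A minimal sketch of such a table, assuming Oracle syntax; only the txn_id column is mandated by the text, while the table name cdr_tab and the remaining columns are illustrative assumptions:

```sql
-- Illustrative sketch, not the original listing. Only txn_id is
-- required: numeric, at least twelve digits, and NOT NULL.
CREATE TABLE cdr_tab (
  txn_id    NUMBER(12)    NOT NULL,  -- dedicated to the Database agent's internal use
  a_number  VARCHAR2(20),
  b_number  VARCHAR2(20),
  duration  NUMBER(10)
);
```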
If a Database Collection agent has been configured to call a stored procedure after collection, it will
be called when each batch has been successfully collected and inserted into the workflow.
The procedure is expected to take one (1) parameter. The parameter must be declared as a NUMBER
and the agent will assign the current Transaction ID to the parameter. The procedure must ensure that
the rows with the supplied transaction ID are removed from the table, or their Transaction ID column
is set to -1.
The following example shows such a procedure that moves the rows to another table.
Example 116.
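A minimal sketch of such a procedure, assuming Oracle PL/SQL; the procedure name and the table names cdr_tab and cdr_archive_tab are illustrative assumptions:

```sql
-- Illustrative sketch. Moves the rows with the supplied Transaction ID
-- to another table and removes them from the collection table.
CREATE OR REPLACE PROCEDURE move_collected (p_txn_id IN NUMBER) AS
BEGIN
  INSERT INTO cdr_archive_tab
    SELECT * FROM cdr_tab WHERE txn_id = p_txn_id;
  DELETE FROM cdr_tab WHERE txn_id = p_txn_id;
  COMMIT;
END;
/
```

For large batches, the delete is better performed through a cursor with intermediate commits.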
It is recommended that the previously described stored procedure use an internal cursor with
several commits, in order not to overflow the rollback segments.
If a Database Forwarding agent has been configured to use a stored procedure as the Access Type,
the agent will call this procedure for each UDR that is to be distributed. The stored procedure must be
defined to take the parameters needed, often including a parameter for the Transaction ID. In the dialog,
these parameters are assigned their values. When the procedure is called, the agent will populate each
parameter with the assigned value.
The following example shows a stored procedure that selects the number of calls made by the
a_number subscriber from another table, calls_tab, and uses that value to populate the target
table.
Example 117.
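A minimal sketch of such a procedure, assuming Oracle PL/SQL; apart from a_number and calls_tab, which are named in the text, the procedure name, the target table target_tab and its columns are illustrative assumptions:

```sql
-- Illustrative sketch. Counts the calls made by the a_number subscriber
-- in calls_tab and uses the value when populating the target table.
CREATE OR REPLACE PROCEDURE insert_call_count (
  p_txn_id   IN NUMBER,
  p_a_number IN VARCHAR2
) AS
  v_no_of_calls NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_no_of_calls
    FROM calls_tab
   WHERE a_number = p_a_number;

  INSERT INTO target_tab (txn_id, a_number, no_of_calls)
  VALUES (p_txn_id, p_a_number, v_no_of_calls);
END;
/
```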
If a Database Forwarding agent uses a stored procedure to populate the target table, a cleanup stored
procedure must also be defined, which removes all inserted entries in case of a Cancel Batch in the
workflow. The procedure is expected to take one parameter. The parameter must be declared as a
NUMBER and the agent will assign the current Transaction ID to the parameter.
The following example shows such a procedure that removes all the entries with the current Transaction
ID.
Example 118.
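A minimal sketch of such a cleanup procedure, assuming Oracle PL/SQL; the procedure name and the table name target_tab are illustrative assumptions:

```sql
-- Illustrative sketch. Removes all entries inserted with the current
-- Transaction ID, undoing the cancelled batch.
CREATE OR REPLACE PROCEDURE cleanup_batch (p_txn_id IN NUMBER) AS
BEGIN
  DELETE FROM target_tab WHERE txn_id = p_txn_id;
  COMMIT;
END;
/
```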
The following example shows a stored procedure that marks the row as safe to read by another system.
Example 119.
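A minimal sketch of such a procedure, assuming Oracle PL/SQL; txn_safe_indicator and the value 'UNSAFE' are named in the text, while the procedure name, the table name target_tab and the value 'SAFE' are illustrative assumptions:

```sql
-- Illustrative sketch. Replaces the 'UNSAFE' marker with a value the
-- billing system accepts, for all rows in the committed transaction.
CREATE OR REPLACE PROCEDURE mark_safe (p_txn_id IN NUMBER) AS
BEGIN
  UPDATE target_tab
     SET txn_safe_indicator = 'SAFE'
   WHERE txn_id = p_txn_id;
  COMMIT;
END;
/
```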
The billing system must avoid reading rows that contain 'UNSAFE' in the txn_safe_indicator
column, to ensure that no data is read that could be rolled back later on.
• Columns of type UNIQUEIDENTIFIER must be set with a function. Hence, map them to NULL in
the agent.
For MS SQL, the column type timestamp is not supported in tables accessed by MZ. Use
column type datetime instead.
See the System Administration Guide for information about time zone settings.
14.2.1.1. Prerequisites
The reader of this information must be familiar with:
When a file has been successfully processed by the workflow, the agent offers the possibility of moving,
renaming, removing or ignoring the original file. The agent can also be configured to keep files for a
set number of days. In addition, the agent offers the possibility of decompressing compressed (gzip)
files after they have been collected. When all the files are successfully processed, the agent stops to
await the next activation, whether it is scheduled or manually initiated.
14.2.2.1. Configuration
The Disk Collection agent configuration window is displayed when you right-click the agent in a
workflow and select Configuration..., or when you double-click the agent. Part of the configuration
may be done in the Filename Sequence or Sort Order service tabs, described in Section 4.1.6.2.2,
“Filename Sequence Tab” and Section 4.1.6.2.3, “Sort Order Tab”.
The Disk tab contains configurations related to the placement and handling of the source files to be
collected by the agent.
Collection If there is more than one collection strategy available in the system, a Collection Strategy
Strategy drop-down list will also be visible. For more information about the nature of the collection
strategy, refer to Section 15, “Appendix VII - Collection Strategies”.
Directory Absolute pathname of the source directory on the local file system, where the source
files reside. The pathname might also be given relative to the $MZ_HOME environment
variable.
Note! Even if a relative path is defined, for example, input, the value of the MIM
parameter Source Pathname (see Section 14.2.2.4.1, “Publishes”) will include
the whole absolute path: $MZ_HOME/input.
Filename Name of the source files on the local file system. Regular expressions according to Java
syntax apply. For further information, see:
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
Example 120.
Compression Compression type of the source files. Determines if the agent will decompress the files
before passing them on in the workflow.
Move to If enabled, the source files will be moved to the automatically created subdirectory
Temporary DR_TMP_DIR in the source directory, prior to collection. This option supports safe
Directory collection of a source file reusing the same name.
Append Enter the suffix that you want added to the file name prior to collecting it.
Suffix to
Filename
Important! Before you execute your workflow, make sure that none of the file
names in the collection directory include this suffix.
Inactive If the specified value is greater than zero, and if no file has been collected during the
Source specified number of hours, the following message is logged:
Warning
(hours) The source has been idle for more than <n> hours, the last
inserted file is <file>.
Move to If enabled, the source files will be moved from the source directory (or from the directory
DR_TMP_DIR, if using Move Before Collecting) to the directory specified in the
Destination field, after the collection.
If the Prefix or Suffix fields are set, the file will be renamed as well.
Note!
• It is not always possible to move collected files from one file system to another.
• Moving files between different file systems usually causes worse performance
than having them on the same file system.
• The workflow will not be transaction safe, because of the nature of the copy
plus delete functionality.
Rename If enabled, the source files will be renamed after the collection, remaining in the source
directory from which they were collected (or moved back from the directory
DR_TMP_DIR, if using Move Before Collecting).
Remove If enabled, the source files will be removed from the source directory (or from the
directory DR_TMP_DIR, if using Move Before Collecting), after the collection.
Ignore If enabled, the source files will remain in the source directory after collection.
Destination Absolute pathname of the directory on the local file system of the EC into which the
source files will be moved after collection. The pathname might also be given relative
to the $MZ_HOME environment variable.
If Rename is enabled, the source files will be renamed in the current directory
(source or DR_TMP_DIR). Be sure not to assign a Prefix or Suffix giving files
new names that still match the Filename regular expression; otherwise the files
will be collected over and over again.
Search and To apply Search and Replace, select either Move to or Rename.
Replace
• Search: Enter the part of the filename that you want to replace.
Search and Replace operate on your entries in a way that is similar to the Unix sed
utility. The identified filenames are modified and forwarded to the following agent in
the workflow.
• Use regular expression in the Search entry to specify the part of the filename that you
want to extract.
A regular expression that fails to match the original file name will abort the
workflow.
• Enter Replace with characters and meta characters that define the pattern and content
of the replacement text.
• Search: .new
• Replace: .old
• Search: ([A-Z]*[0-9]*)_([a-z]*)
• Replace: $2_DONE
Note that the search value divides the file name into two parts by using parentheses.
The replace value applies to the second part by using the placeholder $2.
Keep Number of days to keep source files after the collection. In order to delete the source
(days) files, the workflow has to be executed (scheduled or manually) again, after the configured
number of days.
Note! A date tag is added to the filename, determining when the file may be removed.
This field is only enabled if Move to or Rename is selected.
Route Select this check box if you want to forward the data to an SQL Loader agent. See the
FileReferenceUDR description of the SQL Loader agent for further information.
14.2.2.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch Emitted before the first part of each collected file is fed into a workflow.
End Batch Emitted after the last part of each collected file has been fed into the system.
14.2.2.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
Hint End Batch If a Hint End Batch message is received, the collector splits the batch at the end of
the current block processed (32 kB), provided that no UDR is split. If the block end
occurs within a UDR, the batch will be split at the end of the preceding UDR.
After a batch split, the collector emits an End Batch message, followed by a Begin
Batch message (provided that there is data in the subsequent block).
14.2.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
14.2.2.4.1. Publishes
Source File Size is of the long type and is defined as a header MIM
context type.
Source Filename This MIM parameter contains the name of the currently processed file, as defined
at the source.
When the agent collects from multiple directories, the MIM value is
cleared after collection of each directory. Then, the MIM value is updated
with the listing of the next directory.
Source File Count is of the long type and is defined as a global MIM
context type.
Source Pathname This MIM parameter contains the path to the directory where the file currently
under processing is located.
Note! Even if a relative path was defined when configuring the Disk
Collection agent (see Section 14.2.2.1.1, “Disk Tab”), for example,
input, the value of this parameter will include the whole absolute path:
$MZ_HOME/input.
Source Files Left This parameter contains the number of source files that are yet to be collected.
This is the number that appears in the Execution Manager backlog.
Source Files Left is of the long type and is defined as a header MIM
context type.
14.2.2.4.2. Accesses
For further information about the agent message event type please refer to Section 5.5.14, “Agent
Event”.
Reported along with the name of the source file that has been collected and inserted into the workflow.
Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; refer to Section 14.2.2.2, “Transaction Behavior” for
further information.
To ensure that downstream systems will not use the files until they are closed, they are stored in a
temporary directory until the End Batch message is received. This behavior also applies to Cancel
Batch messages. If a Cancel Batch is received, file creation is cancelled.
14.2.3.1. Configuration
The Disk Forwarding agent configuration window is displayed when you double-click the agent in a
workflow, or right-click it and select Configuration....
Input Type The agent can act on two input types. Depending on which one the agent is configured
to work with, the behavior will differ.
The default input type is bytearray, that is, the agent expects bytearrays. Unless
otherwise stated, the documentation refers to bytearray input.
If the input type is MultiForwardingUDR, the behavior is different. For further
information about the agent's behavior with MultiForwardingUDR input, refer to
Section 14.2.3.1.3, “MultiForwardingUDR Input”.
Directory Absolute pathname of the target directory on the local file system of the EC, where
the forwarded files will be stored.
Compression Compression type of the target files. Determines if the agent will compress the files
or not.
At this point, the temporary file is created and closed; however, the final file
has not yet been created.
Arguments This field is optional. Each entered parameter value has to be separated from the
preceding value with a space.
The temporary filename is inserted as the second last parameter, and the final filename
is inserted as the last parameter, automatically. This means that if, for instance, no
parameter is given in the field, the arguments will be as follows:
$1=<temporary_filename> $2=<final_filename>
If three parameters are given in the field Arguments, the arguments are set as:
$1=<parameter_value_#1>
$2=<parameter_value_#2>
$3=<parameter_value_#3>
$4=<temporary_filename>
$5=<final_filename>
Produce Empty If you need to create empty files, select this check box.
Files
The names of the created files are determined by the settings in the Filename Template tab.
For a detailed description of the Filename Template tab, see Section 4.1.6.2.4, “Filename Template
Tab”.
When the agent is set to use MultiForwardingUDR input, it accepts input of the UDR type MultiFor-
wardingUDR declared in the package FNT. The declaration follows:
internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};
Every received MultiForwardingUDR is written to the file indicated by its fntSpecification
field, which specifies the output filename and path. When the files are received, they are written
to temporary files in the DR_TMP_DIR directory, situated in the root output folder. The files are
moved to their final destination when an End Batch message is received. A runtime error will occur
if any of the fields has a null value, or if the path is invalid on the target file system.
A UDR of the type MultiForwardingUDR whose target filename is not identical to that of its
predecessor is saved in a new output file.
After a target filename that is not identical to its predecessor is saved, you cannot use the first
filename again. For example: saving filename B after saving filename A prevents you from using
A again. Instead, you should first save all the A filenames, then all the B filenames, and so forth.
Non-existing directories will be created if the Create Non-Existing Directories check box under the
Filename Template tab is checked. If it is not checked, a runtime error will occur if the FNTUDR of
an incoming MultiForwardingUDR refers to a directory that does not exist. Every configuration option
referring to bytearray input is ignored when MultiForwardingUDRs are expected.
Example 122.
This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of the type MultiForwardingUDR.
import ultra.FNT;

MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent) {
    // Build the FNTUDR that specifies the target directory and filename.
    FNTUDR fntudr = udrCreate(FNTUDR);
    fntAddString(fntudr, dir);
    fntAddDirDelimiter(fntudr);
    fntAddString(fntudr, file);
    MultiForwardingUDR multiForwardingUDR =
        udrCreate(MultiForwardingUDR);
    multiForwardingUDR.fntSpecification = fntudr;
    multiForwardingUDR.content = fileContent;
    return multiForwardingUDR;
}

consume {
    bytearray file1Content;
    strToBA(file1Content, "file nr 1 content");
    bytearray file2Content;
    strToBA(file2Content, "file nr 2 content");
    // Illustrative completion: route one UDR per file to two sub folders.
    // The directory and file names below are examples only.
    udrRoute(createMultiForwardingUDR("dir1", "file1", file1Content));
    udrRoute(createMultiForwardingUDR("dir2", "file2", file2Content));
}
The Analysis agent above will send two MultiForwardingUDRs to the forwarding
agent. Two files with different contents will be placed in two separate sub folders
in the root directory. The Create Non-Existing Directories check box under the Filename
Template tab in the configuration of the forwarding agent must be checked if the directories
do not already exist.
14.2.3.2.1. Emits
None.
14.2.3.2.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.
Command Description
Begin Batch When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if not already created. Then a target file is created
and opened in the temporary directory.
End Batch When an End Batch message is received, the target file in DR_TMP_DIR is first closed
and then the Command, if specified in After Treatment, is executed. Finally, the file
is moved from the temporary directory to the target directory.
Cancel Batch If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.
14.2.3.3. Introspection
The agent consumes bytearray or MultiForwardingUDR types.
14.2.3.4.1. Publishes
This parameter is of the string type and is defined as a batch MIM context
type.
File Transfer This MIM parameter contains a timestamp, indicating when the target file
Timestamp was created in the temporary directory.
14.2.3.4.2. Accesses
Various resources from the Filename Template configuration are accessed
to construct the target filename.
Reported along with the name of the target file when it has been successfully stored in the target
directory. If an After Treatment Command is specified, the message also indicates that it has been
executed.
14.3.1.1. Prerequisites
The reader of this information should be familiar with:
When activated, the collector establishes an FTP session towards the remote host. On failure, additional
hosts are tried if so configured. On success, the source directory on the remote host is scanned for all
files matching the current Filename settings, which are located in the Source tab. In addition, the
Filename Sequence service may be used to further control the matching files. All files found will be
fed one after the other into the workflow.
The agent also offers the possibility to decompress compressed (gzip) files after they have been collected,
before they are inserted into the workflow. When all the files are successfully processed, the agent
stops to await the next activation, scheduled or manually initiated.
14.3.2.1. Configuration
The FTP Collection agent configuration window is displayed when right-clicking on the agent in a
workflow and selecting Configuration..., or when double-clicking on the agent. Part of the configur-
ation may be done in the Filename Sequence or Sort Order service tabs described in Section 4.1.6.2.2,
“Filename Sequence Tab” and Section 4.1.6.2.3, “Sort Order Tab”.
The Connection tab contains configuration data that is relevant to a remote server.
Server Information If your MediationZone® system is installed with the Multi Server functionality,
Provider you can configure the FTP agent to collect from more than one server. For further
information, see the Multi Server File user's guide.
Host Primary host name or IP address of the remote host to connect to. If a connection
cannot be established to this host, the Additional Hosts specified in the
Advanced tab are tried.
Note! The FTP Agent supports both IPv4 and IPv6 addresses.
Username Username for an account on the remote host, enabling the FTP session to login.
Password Password related to the Username.
Transfer Type Data transfer type to be used during file retrieval.
Enable Collection Select this check box to enable repetitive attempts to connect and start a file
Retries transfer.
When this option is selected, the agent will attempt to connect to the host as many
times as is stated in the Max Retries field described below. If the connection
fails, a new attempt will be made after the number of seconds entered in the Retry
Interval (s) field described below.
Retry Interval (s) Enter the time interval in seconds, between retries.
If a connection problem occurs, the actual time interval before the first attempt
to reconnect will be the time set in the Timeout field in the Advanced tab plus
the time set in the Retry Interval (s) field. For the remaining attempts, the actual
time interval will be the number of seconds entered in this field.
Max Retries Enter the maximum number of retries to connect.
In case more than one connection attempt has been made, the number of used
retries will be reset as soon as a file transfer is completed successfully.
Note! This number does not include the original connection attempt.
Enable RESTART Select this check box to enable the agent to send a RESTART command if the
Retries connection has been broken during a file transfer. The RESTART command
contains information about where in the file you want to resume the file transfer.
Before selecting this option, ensure that the FTP server supports the RESTART
command.
When this option is selected, the agent will attempt to re-establish the connection,
and resume the file transfer from the point in the file stated in the RESTART
command, as many times as is entered in the Max Retries field described below.
When a connection has been re-established, a RESTART command will be sent
after the number of seconds entered in the Retry Interval (s) field described be-
low.
Note! The RESTART Retries settings will not work if you have selected
to decompress the files in the Source tab, see Section 14.3.2.1.2, “Source
Tab”.
If a connection problem occurs, the actual time interval before the first attempt
to send a RESTART command will be the time set in the Timeout field in the
Advanced tab plus the time set in the Retry Interval (s) field. For the remaining
attempts, the actual time interval will be the number of seconds entered in this field.
Max Retries Enter the maximum number of restarts per file you want to allow.
In case more than one attempt to send the RESTART command has been made,
the number of used retries will be reset as soon as a file transfer is completed
successfully.
The Source tab contains configurations related to the remote host, source directories and source files.
The following text describes the configuration options available when no custom strategy has been
chosen.
Collection If there is more than one collection strategy available in the system, a Collection
Strategy Strategy drop-down list will also be visible. For further information about the nature of
the collection strategy, see Section 15, “Appendix VII - Collection Strategies”.
Directory Absolute pathname of the source directory on the remote host, where the source files
reside. If the FTP server is of UNIX type, the pathname might also be given relative to
the home directory of the Username account.
Filename Name of the source files on the remote host. Regular expressions according to Java
syntax apply. For further information, see:
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
Example 123.
Note! When collecting files from VAX file systems, the names of the source files
include both path and filename, which has to be considered when entering the
regular expression.
Compression Compression type of the source files. Determines if the agent will decompress the files
before passing them on in the workflow.
Move to If enabled, the source files will be moved to the automatically created subdirectory
Temporary DR_TMP_DIR in the source directory, before collection. This option supports safe
Directory collection when source files repeatedly use the same name.
Append Enter the suffix that you want added to the file name prior to collecting it.
Suffix to
Filename
Important! Before you execute your workflow, make sure that none of the file
names in the collection directory include this suffix.
Inactive If enabled, when the configured number of hours has passed without any file being
Source available for collection, a warning message (event) will appear in the System Log and
Warning Event Area:
(h)
The source has been idle for more than <n> hours, the last
inserted file is <file>.
Move to If enabled, the source files will be moved from the source directory (or from the directory
DR_TMP_DIR, if using Move Before Collecting) to the directory specified in the
Destination field, after collection.
Note!
The Directory has to be located in the same file system as the collected files at
the remote host. Also, absolute pathnames must be defined (relative pathnames
cannot be used).
If a file with the same filename, but with a different content, already exists in the
target directory, the workflow will abort.
If a file with the same file name, AND the same content, already exists in the target
directory, this file will be overwritten and the workflow will not abort.
Rename If enabled, the source files will be renamed after the collection and remain in the source
directory from which they were collected (or are moved back from the directory
DR_TMP_DIR, if using Move Before Collecting).
Note!
When the File System Type for VAX/VMS is selected, some issues must be
considered. If a file is renamed after collection on a VAX/VMS system, the file-
name might become too long. In that case the following rules will apply:
• <filename>: 39 characters
• <extension>: 39 characters
• <version>: 5 characters
If the new filename turns out to be longer than 39 characters, the agent will move
part of the filename to the extension part. If the total sum of the filename and
extension part exceeds 78 characters, the last characters are truncated from the
extension.
An example: the file
A_VERY_LONG_FILENAME_WITH_MORE_THAN_39_CHARACTERS.DAT;5
becomes
A_VERY_LONG_FILENAME_WITH_MORE_THAN_39_.CHARACTERSDAT;5
Note! Creating a new file on the FTP server with the same file name as the original
file, but with another content, will cause the workflow to abort.
Creating a new file with the same file name AND the same content as the original
file, will cause the file to be overwritten.
Remove If enabled, the source files will be removed from the source directory (or from the
directory DR_TMP_DIR, if using Move Before Collecting), after the collection.
Ignore If enabled, the source files will remain in the source directory after the collection. This
field is not available if Move Before Collecting is enabled.
Destination Full pathname of the directory on the remote host into which the source files will be
moved after the collection. This field is only available if Move to is enabled.
Prefix and Prefix and/or suffix that will be appended to the beginning and the end of the name of
Suffix the source files, respectively, after the collection. These fields are only available if Move
to or Rename is enabled.
Warning! If Rename is enabled, the source files will be renamed in the current
(source or DR_TMP_DIR) directory. Be sure not to assign a Prefix or Suffix,
giving files new names still matching the Filename regular expression. That will
cause the files to be collected over and over again.
Search and Select either the Move to or Rename option to enable Search and Replace.
Replace
• Search: Enter the part of the filename that you want to replace.
Search and Replace operate on your entries in a way that is similar to the Unix sed
utility. The identified filenames are modified and forwarded to the following agent in
the workflow.
• Use regular expression in the Search entry to specify the part of the filename that you
want to extract.
Note! A regular expression that fails to match the original file name will abort
the workflow.
• Enter Replace with characters and meta characters that define the pattern and content
of the replacement text.
• Search: .new
• Replace: .old
• Search: ([A-Z]*[0-9]*)_([a-z]*)
• Replace: $2_DONE
Note that the search value divides the file name into two parts by using parentheses.
The replace value applies to the second part by using the place holder $2.
Keep Number of days to keep moved or renamed source files on the remote host after the
(days) collection. In order to delete the source files, the workflow has to be executed (scheduled
or manually) again, after the configured number of days.
Note! A date tag is added to the filename, determining when the file may be
removed. This field is only available if Move to or Rename is enabled.
Route Select this check box if you want to forward the data to an SQL Loader agent. See the
FileRefer- description of the SQL Loader agent for further information.
enceUDR
The Advanced tab contains configurations related to the use of the FTP service.
For example, in case the FTP server does not return file listings in a well-defined format, the
Disable File Detail Parsing option can be useful.
Command Port The value in this field defines which port number the FTP service will use on the
remote host.
Timeout (s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Passive Mode Must be enabled if FTP passive mode is used for data connection.
(PASV)
In passive mode, the channel for data transfer between client and server is initiated
by the client instead of by the server. This is useful when a firewall is situated
between the client and the server.
Disable File Detail Disables parsing of file detail information received from the FTP server. This
Parsing enhances the compatibility with unusual FTP servers, but disables some
functionality.
If file detail parsing is disabled, file modification timestamps will not be available
to the collector. The collector is then unable to distinguish between directories
and regular files; for that reason, sub directories in the input directory must not
match the filename regular expression. The agent assumes that a file named
DR_TMP_DIR is a directory, because a directory named DR_TMP_DIR is used
when Move to Temporary Directory under the Source tab is activated. Therefore,
it is not allowed to name a regular file in the collection directory DR_TMP_DIR.
Note! When collecting files from a VAX file system, this option has to be
enabled.
Additional Hosts List of additional host names or IP addresses that may be used to access the source directory, from which the source files are collected. These hosts are tried, in sequence from top to bottom, if the agent fails to connect to the remote host set in the Connection tab.
Note! The FTP Agent supports both IPv4 and IPv6 addresses.
Use the Add, Edit, Remove, Move up and Move down buttons to configure the
order of the hosts in the list.
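The failover order described above can be sketched as follows. This is a minimal Python illustration, not the agent's actual implementation; the connect callable (for example, a wrapper around ftplib.FTP.connect) is an assumption for the sketch:

```python
def connect_with_failover(hosts, connect):
    """Try each host in order, returning the first successful connection.

    hosts lists the remote host from the Connection tab followed by the
    Additional Hosts, top to bottom. connect is any callable that raises
    OSError on failure (e.g. a wrapper around ftplib.FTP.connect).
    """
    last_error = None
    for host in hosts:
        try:
            return connect(host)
        except OSError as err:
            last_error = err
    raise ConnectionError("could not connect to any host") from last_error
```

Only when the host from the Connection tab fails is the first additional host tried, and so on down the list.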
14.3.2.2.1. Emits
The agent emits commands that change the state of the file currently being processed.
Command Description
Begin Batch Emitted before the first byte of each collected file is fed into a workflow.
End Batch Emitted after the last byte of each collected file has been fed into the system.
14.3.2.2.2. Retrieves
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
Note! If the Cancel Batch behavior defined on workflow level (set in the workflow properties) is configured to abort the workflow, the agent will never receive the last Cancel Batch message. In this situation ECS will not be involved, and the file will not be moved.
APL code where a Hint End Batch is followed by a Cancel Batch will always result in a workflow abort. Make sure to design the APL code to evaluate the Cancel Batch criteria first, to avoid this behavior.
Hint End Batch If a Hint End Batch message is received, the collector splits the batch at the end of
the current block processed (32 kB), provided that no UDR is split. If the block end
occurs within a UDR, the batch will be split at the end of the preceding UDR.
After a batch split, the collector emits an End Batch Message, followed by a Begin
Batch message (provided that there is data in the subsequent block).
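The split rule can be sketched as follows. The helper name and offsets are illustrative, not taken from the product:

```python
BLOCK_SIZE = 32 * 1024  # the 32 kB processing block described above

def batch_split_offset(udr_end_offsets, block_end):
    """Return the byte offset at which the batch is split.

    udr_end_offsets holds the offsets where complete UDRs end. The batch
    is split at the block end if a UDR ends exactly there; otherwise it
    is split at the end of the last UDR completed before the block end,
    so the split never lands inside a UDR.
    """
    complete = [end for end in udr_end_offsets if end <= block_end]
    return max(complete) if complete else 0
```

For example, if a UDR spans offsets 200 to 350 and the current block ends at 320, the batch is split at offset 200, the end of the preceding UDR.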
14.3.2.3. Introspection
Introspection describes the type of data an agent expects and delivers.
14.3.2.4.1. Publishes
File Retrieval This MIM parameter contains a timestamp, indicating when the file transfer
Timestamp started.
Note! When collecting files from a VAX file system, the name of the
source file will contain both path and filename.
Source Filenames This MIM parameter contains a list of file names of the files that are about to
be collected from the current collection directory.
Note! When the agent collects from multiple directories, the MIM value
is cleared after collection of each directory. Then, the MIM value is up-
dated with the listing of the next directory.
Note! When collecting files from a VAX file system, the name of the
source file will contain both path and filename.
Source File Count This MIM parameter contains the number of files that were available to this
instance for collection at startup. The value is static throughout the execution
of the workflow, even if more files arrive during the execution. The new files
will not be collected until the next execution.
Source File Count is of the long type and is defined as a global MIM
context type.
Source Files Left This parameter contains the number of source files that are yet to be collected. This is the number that appears in the Backlog column of the Running Workflows tab in the Execution Manager.
Source Files Left is of the long type and is defined as a header MIM
context type.
Source File Size This parameter provides the size of the file that is about to be read. The file is
located on the server.
Source File Size is of the long type and is defined as a header MIM
context type.
Source Host This MIM parameter contains the name of the host from which files are collected,
as defined in the Source or Advanced tabs.
Source Host is of the string type and is defined as a global MIM context
type.
Source Pathname This MIM parameter contains the path name, as defined in the Source tab.
14.3.2.4.2. Accesses
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported, along with the name of the source file, when the file has been collected and inserted into the workflow.
Reported, along with the name of the current file, when a Cancel Batch message is received. This assumes the workflow is not aborted when a Cancel Batch message is received; see Section 14.3.2.2, “Transaction Behavior” for further information.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
• Command trace
A printout of the control channel trace either in the Workflow Monitor or in a file.
To ensure that downstream systems will not use the files until they are closed, they are maintained in
a temporary directory on the remote host until the End Batch message is received. This behavior is
also used for Cancel Batch messages. If a Cancel Batch is received, file creation is cancelled.
14.3.3.1. Configuration
The FTP Forwarding agent configuration window is displayed when you double-click the agent, or right-click it and select Configuration....
See description in Figure 476, “The FTP Collection Agent Configuration - Connection Tab”
Input Type The agent can act on two input types. Depending on which one the agent is configured to work with, the behavior will differ.
The default input type is bytearray, that is, the agent expects bytearrays. Unless otherwise stated, the documentation refers to bytearray input.
Compression Compression type of the target files. Determines if the agent will compress the
files before storage or not.
Produce Empty Files Select this check box if you want empty files to be created.
Handling of Already Existing Files Select the behavior of the agent when the file already exists. The alternatives are:
• Overwrite - The old file will be overwritten and a warning will be logged in the System Log.
• Add Suffix - If the file already exists, the suffix ".1" will be added. If this file also exists, the suffix ".2" will be tried instead, and so on.
• Abort - This is the default selection and is the option used for upgraded configurations, that is, workflows from an upgraded system.
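The Add Suffix alternative can be sketched in Python; the helper name is illustrative:

```python
import os

def suffixed_target(path):
    """Return the name to write to under the Add Suffix alternative.

    If path already exists, try path.1, path.2, and so on, until a free
    name is found.
    """
    if not os.path.exists(path):
        return path
    n = 1
    while os.path.exists(f"{path}.{n}"):
        n += 1
    return f"{path}.{n}"
```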
Use Temporary Directory If this option is selected, the agent writes the file in a temporary directory before moving it to the target directory. After the whole file has been transferred to the target directory, and the endBatch message has been received, the temporary file is removed from the temporary directory.
Use Temporary File If there is no write access to the target directory and, hence, a temporary directory cannot be created, the agent can write the data to a temporary file that is stored directly in the target directory. After the whole file has been transferred, and the endBatch message has been received, the temporary file will be renamed.
The temporary filename is unique for every execution of the workflow. It consists
of a workflow and agent ID, and a file number.
Abort Handling Select how to handle the file in case of cancelBatch or rollback, either Delete
Temporary File or Leave Temporary File.
Note! When a workflow aborts, the file will not be removed until the next
time the workflow is run.
Command Port The value in this field defines which port number the FTP service will use on
the remote host.
Timeout(s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Passive Mode Must be enabled if FTP passive mode is used for data connection.
(PASV)
In passive mode, the channel for data transfer between client and server is
initiated by the client instead of by the server. This is useful when a firewall
is situated between the client and the server.
Additional Hosts List of additional host names or IP addresses that may be used to access the
target directory for file storage. These hosts are tried, in sequence from top to
bottom, if the agent fails to connect to the remote host set in the Connection
tab.
Note! The FTP Agent supports both IPv4 and IPv6 addresses.
Use the Add, Edit, Remove, Move up and Move down buttons to configure
the host list.
Note! The names of the created files are determined by the settings in the Filename Template
tab. The use and setting of private threads for an agent, enabling multi-threading within a
workflow, is configured in the Thread Buffer tab. For further information, see Section 4.1.6.2.1,
“Thread Buffer Tab”.
The Backlog tab contains configurations related to the backlog functionality. If the backlog is not enabled, the files will be moved directly to their final destination when an end batch message is received. If the backlog is enabled, however, the files will first be moved to a directory called DR_READY and then to their final destination. Refer to Section 14.3.3.4.2, “Retrieves” for further information about transaction behavior.
When the backlog is initialized, and when backlogged files are transferred, a note is registered in the System Log.
Note that this global backlog memory buffer is used and shared by this and any other forwarding agent
that transfers files to a remote server. The same memory buffer is used for all ongoing transactions on
the same execution context.
When several workflows are scheduled to run simultaneously, and the forwarding agents are assigned the backlog function, there is a risk that the buffer may be too small. In such a case, it is recommended that you increase the value of this property.
Example 125.
If the property is not set, the default value of 10 MB will be used. The allocated amount will be printed in the Execution Context's log file. This memory will not affect the Java heap size and is used by the agent when holding a copy of the file being transferred.
internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};
Every received MultiForwardingUDR ends up in the file matching its filename. The output filename and path are specified by the fntSpecification field. When the files are received, they are written to temporary files in the DR_TMP_DIR directory situated in the root output folder. The files are moved to their final destination when an end batch message is received. A runtime error will occur if any of the fields has a null value, or if the path is invalid on the target file system.
A UDR of the type MultiForwardingUDR whose target filename is not identical to that of its predecessor is saved in a new output file.
Note! After a target filename that is not identical to its predecessor is saved, you cannot use the first filename again. For example, saving filename B after saving filename A prevents you from using A again. Instead, you should first save all the A filenames, then all the B filenames, and so forth.
Non-existing directories will be created if the Create Non-Existing Directories check box on the Filename Template tab is checked.
If it is not checked, a runtime error will occur if a previously unknown directory appears in the FNTUDR of an incoming MultiForwardingUDR. Every configuration option referring to bytearray input is ignored when MultiForwardingUDRs are expected.
For further information about Filename Template, see Section 4.1.6.2.4, “Filename Template Tab”.
Example 126.
This example shows the APL code used in an Analysis agent connected to a forwarding agent expecting input of the MultiForwardingUDR type.
import ultra.FNT;

MultiForwardingUDR createMultiForwardingUDR
    (string dir, string file, bytearray fileContent) {
    // Build the target path; assumes the FNT package functions
    // fntAddString and fntAddDirDelimiter
    FNTUDR fntudr = udrCreate(FNTUDR);
    fntAddString(fntudr, dir);
    fntAddDirDelimiter(fntudr);
    fntAddString(fntudr, file);

    MultiForwardingUDR multiForwardingUDR =
        udrCreate(MultiForwardingUDR);
    multiForwardingUDR.fntSpecification = fntudr;
    multiForwardingUDR.content = fileContent;
    return multiForwardingUDR;
}

consume {
    bytearray file1Content;
    strToBA(file1Content, "file nr 1 content");
    bytearray file2Content;
    strToBA(file2Content, "file nr 2 content");
    // Route one UDR per target file; directory and file names are examples
    udrRoute(createMultiForwardingUDR("dir1", "file1", file1Content));
    udrRoute(createMultiForwardingUDR("dir2", "file2", file2Content));
}
The Analysis agent above will send two MultiForwardingUDRs to the forwarding agent. Two files with different contents will be placed in two separate subfolders in the root directory. The Create Non-Existing Directories check box under the Filename Template tab in the configuration of the forwarding agent must be checked if the directories do not already exist.
14.3.3.4.1. Emits
14.3.3.4.2. Retrieves
The agent retrieves commands from other agents and, based on them, changes the state of the file currently being processed.
Command Description
Begin Batch When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if not already created. Then, a target file is created
and opened in the temporary directory.
End Batch When an End Batch message is received, the target file in DR_TMP_DIR is closed
and, finally, the file is moved from the temporary directory to the target directory.
Cancel Batch If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.
14.3.3.5. Introspection
The agent consumes bytearray or MultiForwardingUDR types.
14.3.3.6.1. Publishes
This parameter is of the string type and is defined as a batch MIM context
type.
File Transfer Timestamp This MIM parameter contains a timestamp, indicating when the target file is created in the temporary directory.
Target File Size is of the long type and is defined as a trailer MIM
context type.
Target Hostname This MIM parameter contains the name of the target host, as defined in the
Target or Advanced tab of the agent.
14.3.3.6.2. Accesses
Various resources from the Filename Template configuration are accessed to construct the target filename.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported, along with the name of the target file, when the file is successfully written to the target
directory.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
• Command trace
A printout of the control channel trace either in the Workflow Monitor or in a file.
14.4.1.1. Prerequisites
The reader of this information must be familiar with:
• Amazon S3
14.4.2. Preparations
As there are several different distributions of Hadoop available, you may have to create your own mzp package containing the specific Hadoop jar files to be used, and commit this package into your MediationZone® system in order to start using the Hadoop File System agents. This is required when you are using a different distribution than the one that is available at hadoop.apache.org. The included mzp package has been tested with Apache Hadoop version 2.2.0 and Amazon S3.
1. Copy the set of jar files for the Hadoop version you want to use to the machine that MediationZone®
is running on.
Depending on the file structure, the files may be located in different folders, but typically they will
be located in a folder called hadoop, or hadoop-common, where the hadoop-common.jar
file is placed in the root directory, and the rest of the jar files are placed in a subdirectory called
/lib.
Example 127.
This example shows how this is done for the Cloudera Distribution of Hadoop 4.
FILES="\
file=commons-configuration-1.6.jar \
file=commons-io-2.1.jar \
file=hadoop-auth-2.0.0-cdh4.4.0.jar \
file=hadoop-common-2.0.0-cdh4.4.0.jar \
file=hadoop-hdfs-2.0.0-cdh4.4.0.jar \
file=protobuf-java-2.4.0a.jar"
Note! These files are version specific, which means that the list in the example will not work
for other versions of Hadoop.
Example 128.
This example shows how this could look for the Cloudera Distribution of Hadoop 4.
When a file has been successfully processed by the workflow, the agent offers the possibility of moving,
renaming, removing or ignoring the original file. The agent can also be configured to keep files for a
set number of days. In addition, the agent offers the possibility of decompressing compressed (gzip)
files after they have been collected. When all the files are successfully processed, the agent stops to
await the next activation, whether it is scheduled or manually initiated.
14.4.3.1. Configuration
The Hadoop FS collection agent configuration window is displayed when you double-click on the
agent, or if you right-click on the agent and select the Configuration... option. Part of the configuration
may be done in the Filename Sequence or Sort Order service tab described in the Filename Sequence
Tab and the Sort Order Tab sections in the Desktop user's guide.
The Hadoop FS tab contains configurations related to the placement and handling of the source files
to be collected by the agent.
File System Type Select the file system type in this drop-down list: Distributed File System or Amazon S3. See Section 14.4.3.1.2, “File System Type Settings” for further information.
Replication The replication factor per file. See the Apache Hadoop Project documentation for information about the replication factor. This setting has no effect in the Hadoop FS collection agent.
Collection Strategy If more than one collection strategy is available in the system, a Collection Strategy drop-down list will also be visible. For more information about collection strategies, refer to Section 15, “Appendix VII - Collection Strategies”.
Directory Absolute pathname of the directory on the remote file system, where the source files
reside.
Filename Name of the source files on the local file system. Regular expressions according to Java syntax apply. For further information, see:
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
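As an illustration, the following Python sketch shows a pattern selecting only certain filenames. The pattern and file names are hypothetical; the agent uses Java regex syntax, which for a simple pattern like this is written identically:

```python
import re

pattern = r"CDR_\d{8}\.gz"  # hypothetical filename pattern

files = ["CDR_20240101.gz", "CDR_2024.gz", "summary.txt"]
# Match each name in full against the expression.
matches = [name for name in files if re.fullmatch(pattern, name)]
print(matches)  # → ['CDR_20240101.gz']
```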
Example 129.
Compression Compression type of the source files. Determines if the agent will decompress the files before passing them on in the workflow.
Move to Temporary Directory If enabled, the source files will be moved to the automatically created subdirectory DR_TMP_DIR in the source directory, prior to collection. This option supports safe collection of a source file reusing the same name.
Append Suffix to Filename Enter the suffix that you want added to the file name prior to collecting it.
Important! Before you execute your workflow, make sure that none of the file names in the collection directory include this suffix.
Inactive Source Warning (hours) If the specified value is greater than zero, and no file has been collected during the specified number of hours, the following message is logged:
The source has been idle for more than <n> hours, the last inserted file is <file>.
Move to If enabled, the source files will be moved from the source directory (or from the directory DR_TMP_DIR, if using Move Before Collecting) to the directory specified in the Destination field, after the collection.
If the Prefix or Suffix fields are set, the file will be renamed as well.
Note!
• It is not always possible to move collected files from one file system to another.
• Moving files between different file systems usually causes worse performance than having them on the same file system.
• The workflow will not be transaction safe, because of the nature of the copy plus delete functionality.
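The copy-plus-delete behavior behind a cross-file-system move, and why it is not transaction safe, can be illustrated with a minimal sketch (not the agent's actual implementation):

```python
import os
import shutil

def move_file(src, dst):
    """Move a file, falling back to copy + delete across file systems.

    The fallback is not atomic: a crash between the copy and the delete
    leaves the file in both places, which is why the workflow is not
    transaction safe in this mode.
    """
    try:
        os.rename(src, dst)      # atomic, but only within one file system
    except OSError:
        shutil.copy2(src, dst)   # cross-file-system: copy first...
        os.remove(src)           # ...then delete the original
```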
Rename If enabled, the source files will be renamed after the collection, remaining in the source directory from which they were collected (or moved back from the directory DR_TMP_DIR, if using Move Before Collecting).
Remove If enabled, the source files will be removed from the source directory (or from the directory DR_TMP_DIR, if using Move Before Collecting), after the collection.
Ignore If enabled, the source files will remain in the source directory after collection.
Destination Absolute pathname of the directory on the local file system of the EC into which the
source files will be moved after collection. The pathname might also be given relative
to the $MZ_HOME environment variable.
Note! If Rename is enabled, the source files will be renamed in the current directory (source or DR_TMP_DIR). Be sure not to assign a Prefix or Suffix that gives files new names still matching the filename regular expression, or else the files will be collected over and over again.
Search and Replace
Note! To apply Search and Replace, select either Move to or Rename.
Search and Replace operate on your entries in a way that is similar to the Unix sed utility. The identified filenames are modified and forwarded to the following agent in the workflow.
• Search: Enter the part of the filename that you want to replace. Use a regular expression to specify the part of the filename that you want to extract.
Note! A regular expression that fails to match the original file name will abort the workflow.
• Replace: Enter characters and meta characters that define the pattern and content of the replacement text.
• Search: .new
• Replace: .old
• Search: ([A-Z]*[0-9]*)_([a-z]*)
• Replace: $2_DONE
Note that the search value divides the file name into two parts by using brackets. The replace value applies the second part by using the placeholder $2.
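The documented example can be reproduced with a short sketch. Note that Python writes the group placeholder as \2 where the product's Java-style syntax uses $2; the helper name and sample filename are illustrative:

```python
import re

def rename(filename, search, replace):
    """Apply a sed-like search/replace to a filename.

    A pattern that does not match raises an error, mirroring the
    documented workflow abort on a non-matching regular expression.
    """
    new_name, count = re.subn(search, replace, filename)
    if count == 0:
        raise ValueError(f"{search!r} does not match {filename!r}")
    return new_name

print(rename("FILE01_report", r"([A-Z]*[0-9]*)_([a-z]*)", r"\2_DONE"))
# → report_DONE
```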
Keep (days) Number of days to keep source files after the collection. In order to delete the source files, the workflow has to be executed (scheduled or manually) again, after the configured number of days.
Note! A date tag is added to the filename, determining when the file may be removed. This field is only enabled if Move to or Rename is selected.
Depending on which type of file system you have selected in the File System Type list, the settings for the file system will vary.
14.4.3.1.2.1. Distributed File System
When you have selected Distributed File System, you have the following settings:
Figure 484.
Host Enter the IP address or hostname of the NameNode in this field. See the Apache Hadoop
Project documentation for further information about the NameNode.
Port Enter the port number of the NameNode in this field.
14.4.3.1.2.2. Amazon S3
When you have selected Amazon S3, you have the following settings:
Figure 485.
Access Key Enter the access key for the user who owns the Amazon S3 account in this field.
Secret Key Enter the secret key for the stated access key in this field.
Bucket Enter the name of the Amazon S3 bucket in this field.
14.4.3.2.1. Emits
The agent emits commands that change the state of the file currently being processed.
Command Description
Begin Batch Emitted before the first part of each collected file is fed into a workflow.
End Batch Emitted after the last part of each collected file has been fed into the system.
14.4.3.2.2. Retrieves
The agent retrieves commands from other agents and, based on them, changes the state of the file currently being processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
Hint End Batch If a Hint End Batch message is received, the collector splits the batch at the end of the current block processed (32 kB), provided that no UDR is split. If the block end occurs within a UDR, the batch will be split at the end of the preceding UDR.
After a batch split, the collector emits an End Batch message, followed by a Begin
Batch message (provided that there is data in the subsequent block).
14.4.3.3. Introspection
Introspection describes the type of data an agent expects and delivers.
14.4.3.4.1. Publishes
Source File Count is of the long type and is defined as a global MIM
context type.
Source File Size This MIM parameter contains the file size, in bytes, of the source file.
Source File Size is of the long type and is defined as a header MIM
context type.
Source Filename This MIM parameter contains the name of the currently processed file, as
defined at the source.
Note! When the agent collects from multiple directories, the MIM value
is cleared after collection of each directory. Then, the MIM value is
updated with the listing of the next directory.
Source Files Left is of the long type and is defined as a header MIM
context type.
Source Pathname This MIM parameter contains the path to the directory where the file currently
under processing is located.
14.4.3.4.2. Accesses
For further information about the agent message event type please refer to Section 5.5.14, “Agent
Event”.
Reported along with the name of the source file that has been collected and inserted into the workflow.
Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; refer to Section 14.4.3.2, “Transaction Behavior” for
further information.
To ensure that downstream systems will not use the files until they are closed, they are stored in a
temporary directory until the End Batch message is received. This behavior also applies to Cancel
Batch messages. If a Cancel Batch is received, file creation is cancelled.
14.4.4.1. Configuration
The Hadoop FS forwarding agent configuration window is displayed when you double-click the agent in a workflow, or right-click it and select Configuration....
Input Type The agent can act on two input types. Depending on which one the agent is configured to work with, the behavior will differ.
The default input type is bytearray, that is, the agent expects bytearrays. Unless otherwise stated, the documentation refers to bytearray input.
Compression Compression type of the target files. Determines if the agent will compress the files
or not.
Note! At this point the temporary file has been created and closed; however, the final filename has not yet been created.
Arguments This field is optional. Each entered parameter value has to be separated from the
preceding value with a space.
The temporary filename is automatically inserted as the second-to-last parameter, and the final filename as the last parameter. This means that if, for instance, no parameter is given in the field, the arguments will be as follows:
$1=<temporary_filename> $2=<final_filename>
If three parameters are given in the field Arguments, the arguments are set as:
$1=<parameter_value_#1>
$2=<parameter_value_#2>
$3=<parameter_value_#3>
$4=<temporary_filename>
$5=<final_filename>
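The argument layout above can be modeled with a small helper (the function name is illustrative):

```python
def command_args(user_params, tmp_filename, final_filename):
    """Build the After Treatment command's argument list.

    The temporary filename is always inserted as the second-to-last
    argument and the final filename as the last, after any
    user-supplied parameters.
    """
    return list(user_params) + [tmp_filename, final_filename]

# No user parameters: $1 is the temporary name, $2 the final name.
print(command_args([], "file.tmp", "file.dat"))
# → ['file.tmp', 'file.dat']
```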
Depending on which type of file system you have selected in the File System Type list, the settings for the file system will vary.
14.4.4.1.2.1. Distributed File System
When you have selected Distributed File System, you have the following settings:
Figure 487.
Host Enter the IP address or hostname of the NameNode in this field. See the Apache Hadoop
Project documentation for information about the NameNode.
Port Enter the port number of the NameNode in this field.
14.4.4.1.2.2. Amazon S3
When you have selected Amazon S3, you have the following settings:
Figure 488.
Access Key Enter the access key for the user who owns the Amazon S3 account in this field.
Secret Key Enter the secret key for the stated access key in this field.
Bucket Enter the name of the Amazon S3 bucket in this field.
The names of the created files are determined by the settings in the Filename Template tab.
For a detailed description of the Filename Template tab, see Section 4.1.6.2.4, “Filename Template
Tab”.
When the agent is set to use MultiForwardingUDR input, it accepts input of the UDR type MultiForwardingUDR, declared in the FNT package. The declaration follows:
internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};
Every received MultiForwardingUDR ends up in the file matching its filename. The output filename and path are specified by the fntSpecification field. When the files are received, they are written to temporary files in the DR_TMP_DIR directory situated in the root output folder. The files are moved to their final destination when an end batch message is received. A runtime error will occur if any of the fields has a null value, or if the path is invalid on the target file system.
A UDR of the type MultiForwardingUDR whose target filename is not identical to that of its predecessor is saved in a new output file.
Note! After a target filename that is not identical to its predecessor is saved, you cannot use the first filename again. For example, saving filename B after saving filename A prevents you from using A again. Instead, you should first save all the A filenames, then all the B filenames, and so forth.
Non-existing directories will be created if the Create Non-Existing Directories check box under the Filename Template tab is checked. If it is not checked, a runtime error will occur if a previously unknown directory appears in the FNTUDR of an incoming MultiForwardingUDR. Every configuration option referring to bytearray input is ignored when MultiForwardingUDRs are expected.
Example 131.
This example shows the APL code used in an Analysis agent connected to a forwarding agent expecting input of the MultiForwardingUDR type.
import ultra.FNT;

MultiForwardingUDR createMultiForwardingUDR
    (string dir, string file, bytearray fileContent) {
    // Build the target path; assumes the FNT package functions
    // fntAddString and fntAddDirDelimiter
    FNTUDR fntudr = udrCreate(FNTUDR);
    fntAddString(fntudr, dir);
    fntAddDirDelimiter(fntudr);
    fntAddString(fntudr, file);

    MultiForwardingUDR multiForwardingUDR =
        udrCreate(MultiForwardingUDR);
    multiForwardingUDR.fntSpecification = fntudr;
    multiForwardingUDR.content = fileContent;
    return multiForwardingUDR;
}

consume {
    bytearray file1Content;
    strToBA(file1Content, "file nr 1 content");
    bytearray file2Content;
    strToBA(file2Content, "file nr 2 content");
    // Route one UDR per target file; directory and file names are examples
    udrRoute(createMultiForwardingUDR("dir1", "file1", file1Content));
    udrRoute(createMultiForwardingUDR("dir2", "file2", file2Content));
}
The Analysis agent above will send two MultiForwardingUDRs to the forwarding agent. Two files with different contents will be placed in two separate subfolders in the root directory. The Create Non-Existing Directories check box in the Filename Template tab in the configuration of the forwarding agent must be selected if the directories do not already exist.
14.4.4.2.1. Emits
14.4.4.2.2. Retrieves
The agent retrieves commands from other agents and, based on them, changes the state of the file currently being processed.
Command Description
Begin Batch When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if not already created. Then a target file is created
and opened in the temporary directory.
End Batch When an End Batch message is received, the target file in DR_TMP_DIR is first closed
and then the Command, if specified in After Treatment, is executed. Finally, the file
is moved from the temporary directory to the target directory.
Cancel Batch If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.
14.4.4.3. Introspection
The agent consumes bytearray or MultiForwardingUDR types.
14.4.4.4.1. Publishes
This parameter is of the string type and is defined as a batch MIM context
type.
Target Filename This MIM parameter contains the name of the target filename, as defined in
Filename Template.
14.4.4.4.2. Accesses
Various resources from the Filename Template configuration are accessed to construct the target filename.
Reported along with the name of the target file when it has been successfully stored in the target directory. If an After Treatment Command is specified, the message also indicates that it has been executed.
14.5.1.1. Prerequisites
The reader of this document should be familiar with:
14.5.2. Overview
The Inter Workflow agents allow files to be distributed between workflows within the same MediationZone® system. This is especially useful when transferring data from real-time workflows to batch workflows.
The Inter Workflow agents use an Inter Workflow storage server to manage the actual data storage.
The storage server can either run on an Execution Context or on the Platform. The storage server and
base directory to use is configured in an Inter Workflow profile.
Figure 489. The Inter Workflow agents distribute files from one workflow to another.
Several forwarding workflows may be configured to distribute batches to the same profile; however,
only one collection workflow at a time can be activated to collect from it.
It is safe to accumulate large amounts of data in the storage server directory. When the initial set of
directories has been populated with a predefined number of files, new directories are automatically
created to avoid file system performance problems.
The Inter Workflow profile is loaded when you start a workflow that depends on it. Changes to the
profile become effective when you restart the workflow.
Note! Files collected by the Inter Workflow agent depend on, and are connected with, the Inter
Workflow profile in use. If an Inter Workflow profile is imported to the system, files left in the
storage connected to the old profile will be unreachable.
There is one menu item that is specific to Inter Workflow profile configurations, and it is described
in the following section.
Item Description
External References Enables External References in an agent profile field. For further information,
see Section 9.5.3, “Enabling External References in an Agent Profile Field”.
Storage Host From the drop-down list, select either Automatic, Platform, or an activated
Execution Context.
Using Automatic means that the storage will use the Execution Context where the
first workflow accessing this profile is started. Subsequent workflows using the same
profile will use the same Execution Context for storage until the first workflow
accessing the profile is stopped. The Execution Context where the next workflow
accessing this profile is started will then be used for storage. The location of the
storage will therefore vary depending on the start order of the workflows.
Example 132.
1. Workflow 2 is started.
2. Workflow 1 is started.
3. Workflow 1 is stopped.
4. Workflow 2 is stopped.
5. Workflow 1 is started.
Note! The workflow must run on the same Execution Context as the one on which its
storage resides. If the storage is configured to be Automatic, its corresponding
directory must be on a file system shared between all the Execution Contexts.
Root Directory Absolute pathname of the directory on the storage handler where the temporary
files will be placed.
Max Bytes An optional parameter stating the limit of the space consumed by the files stored
in the Root Directory. If the limit is reached, any Inter Workflow forwarding agent
using this profile will abort.
Max Batches An optional parameter stating the maximum number of batches stored in the Root
Directory. If the limit is reached, any Inter Workflow forwarding agent using this
profile will abort.
Compress intermediate data Select this check box if you want to compress the data sent between the
Inter Workflow agents.
The data will be compressed into *.gzip format with compression level 5.
Named MIMs A list of user defined MIM names. These variables do not have any values assigned.
They are populated with existing MIM values from the Inter Workflow forwarding
agent. This way, MIMs from the forwarding workflow can be passed on to the
collecting workflow.
Note! An Inter Workflow profile cannot be used by more than one Inter Workflow collection
agent at a time. A workflow trying to use an already locked profile will abort.
Note! In a batch workflow, the collecting Inter Workflow agent hands over the data, in UDR
form, to the next agent in turn, one at a time.
In a real-time workflow, on the other hand, the collecting Inter Workflow agent routes the UDRs
into the workflow, one batch at a time.
Note! The minimum value is 32000 bytes, and even if a lower value is configured, 32000 will
apply.
Every batch file that the agent routes to the workflow is preceded by a special UDR called
NewFileUDR, which contains the name of the batch file.
Figure 491. The Inter Workflow collection agent in Batch and Real-Time Workflows
Profile The name and most recent version of the Inter Workflow profile (select Inter
Workflow Profile after clicking the New Configuration button in the
Desktop).
All workflows in the same workflow configuration can use separate Inter
Workflow profiles, if that is preferred. To do this, the profile must be set to
Default in the Workflow Table tab found in the Workflow Properties dialog.
After that, each workflow in the table can be assigned a different profile.
Deactivate on Idle If enabled, the agent will deactivate the workflow if it has no more batches
to collect.
No Merge If enabled, each incoming batch will generate one outgoing batch.
Merge Batches Based on Criteria If enabled, the incoming batches will be merged into larger entities
as soon as any of the merge criteria defined in the Merge Definition are met.
Merge All Available Batches If enabled, all incoming batches will be inserted into one outgoing batch.
Number of Bytes The minimum size of the batches produced by the collection agent. The incoming
files are never split, so a produced batch may exceed this value. For instance, if 300 is entered and
the source files are 200 bytes each, the produced batches will be 400 bytes (two files).
Number of Batches The number of incoming batches to merge.
Age of Oldest Batch (sec) Indicates how long (in seconds) the agent will wait after the first incoming
batch. When this time has expired, an outgoing batch will be produced, regardless of whether the
Number of Bytes/Batches criteria have been fulfilled.
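The merge criteria above combine with a logical OR: the first configured criterion to be met closes the outgoing batch. A minimal sketch of that evaluation follows; the field and method names are assumptions for illustration, not MediationZone® APIs.

```java
// Illustrative sketch of the merge criteria evaluation (logical OR).
class MergeCriteria {
    final Long maxBytes;   // Number of Bytes (null = criterion not set)
    final Long maxBatches; // Number of Batches (null = criterion not set)
    final Long maxAgeSec;  // Age of Oldest Batch (sec) (null = criterion not set)

    MergeCriteria(Long maxBytes, Long maxBatches, Long maxAgeSec) {
        this.maxBytes = maxBytes;
        this.maxBatches = maxBatches;
        this.maxAgeSec = maxAgeSec;
    }

    // An outgoing batch is produced as soon as any configured criterion is met.
    // Incoming files are never split, so byte totals may overshoot the limit.
    boolean shouldClose(long bytesSoFar, long batchesSoFar, long oldestAgeSec) {
        return (maxBytes != null && bytesSoFar >= maxBytes)
            || (maxBatches != null && batchesSoFar >= maxBatches)
            || (maxAgeSec != null && oldestAgeSec >= maxAgeSec);
    }
}
```

For example, with Number of Bytes set to 300 and 200-byte source files, the second file pushes the total to 400 bytes and closes the batch.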
This section includes information about the Inter Workflow collection agent transaction behavior. For
information about the general MediationZone® transaction behavior, see Section 4.1.11.8,
“Transactions”.
14.5.4.1.1.1. Emits
The agent emits commands that change the state of the file currently being processed.
Command Description
Begin Batch Emitted before the first byte of the collected file(s) is fed into a workflow.
End Batch Emitted after the last byte of the last collected file has been fed into the system.
14.5.4.1.1.2. Retrieves
The agent retrieves commands from other agents and, based on them, changes the state of the file
currently being processed.
Command Description
Cancel Batch If Never Abort has been configured on workflow level, in Workflow Properties, and
a Cancel Batch message is received, the agent sends the batch (the UDRs that have
been successfully read) to ECS. The batch will be closed and moved immediately,
regardless of the criteria defined in the Merge Definition, and the workflow will
continue executing.
If, on the other hand, the Cancel Batch behavior has been configured to abort the
workflow (default), the batch will not be sent to ECS.
14.5.4.1.2. Introspection
For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.
14.5.4.1.3.1. Publishes
Source Files Left is of the long type and is defined as a header MIM
context type.
<any> Any named MIM in the Inter Workflow Profile Editor.
All imported MIMs are automatically converted to the type string, regardless
of the original type. APL provides functions to convert strings to other data
types. For further information about conversion functions, see the APL Refer-
ence Guide.
For information about how to add and map named MIMs, see Section 14.5.3.3, “Profile Configuration”.
14.5.4.1.3.2. Accesses
Reported when a batch is finished and then sent on to the subsequent agent.
Reported upon activation, and shortly before the merging of a new batch is started.
Profile Click the Browse button and select a profile that you want assigned to the agent.
All workflows in the same workflow configuration can use separate Inter Workflow profiles,
if that is preferred. To do this, the profile must be set to Default in the Workflow
Table tab found in the Workflow Properties dialog. After that, each workflow in the table
can be assigned a different profile.
Although the real-time Inter Workflow collecting agent is not transaction safe, it checks whether the
workflow queue is empty prior to removing the current batch from the storage. If the agent is stopped
while there is still data in the workflow queue, the last batch will be collected again once the agent
becomes active.
14.5.4.2.2. Introspection
For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.
14.5.4.2.3.1. Publishes
All imported MIMs are automatically converted to the type string, regardless of the original
type. APL provides functions to convert strings to other data types. For further information
about conversion functions, see the APL Reference Guide.
For information about how to add and map named MIMs, see Section 14.5.3.3, “Profile
Configuration”.
14.5.4.2.3.2. Accesses
An information message from the agent, generated according to the configuration in the Event
Notification Editor.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
The real-time Inter Workflow collecting agent will not start to collect a batch that a batch workflow
has forwarded to the storage until the batch has been completely forwarded. This message is sent
when the batch has been completely forwarded.
Profile The name and most recent version of the Inter Workflow profile configuration
(select Inter Workflow Profile after clicking the New Configuration button
in the Desktop).
All workflows in the same workflow configuration can use separate Inter
Workflow profiles, if that is preferred. To do this, the profile must be
set to Default in the Workflow Table tab found in the Workflow Properties
dialog. After that, each workflow in the table can be assigned a different profile.
Named MIM The user defined MIM names according to the definitions in the selected profile.
MIM Resource Selected, existing MIM values of the workflow that the Named MIMs are
mapped to. This way, MIM values from this workflow are passed on to the
collection workflow.
Produce Empty Batches If enabled, empty files will be created even if no UDRs are forwarded from a
batch.
14.5.5.1.1.1. Emits
14.5.5.1.1.2. Retrieves
14.5.5.1.2. Introspection
For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.
14.5.5.1.3.1. Publishes
14.5.5.1.3.2. Accesses
The agent accesses various resources from the workflow and all its agents to configure the mapping
to the Named MIMs (that is, which MIM values to pass on to the collection workflow).
Reported when a file has been closed in the target directory (hence ready for collection). The message
is only used in batch workflows.
Profile The name and most recent version of the Inter Workflow profile configuration
(select Inter Workflow Profile after clicking the New Configuration button
in the Desktop).
All workflows in the same workflow configuration can use separate Inter
Workflow profiles, if that is preferred. To do this, the profile must be
set to Default in the Workflow Table tab found in the Workflow Properties
dialog. After that, each workflow in the table can be assigned a different profile.
Named MIM The user defined MIM names according to the definitions in the selected profile.
MIM Resource Selected, existing MIM values of the workflow that the Named MIMs are
mapped to. This way, MIM values from this workflow are passed on to the
collection workflow.
Volume (bytes) When the file size has reached the number of bytes entered in this field, the file
will be closed as soon as the current bytearray has been included, and stored in
the storage directory. This means that the file size may actually be larger than
the set value, since MediationZone® will not cut off any bytearrays. If nothing
is entered, this file closing criterion will not be used.
Volume (UDRs) When the file contains the number of UDRs entered in this field, the file will
be closed and stored in the storage directory. If nothing is entered, this file
closing criterion will not be used.
Timer (sec) When the file has been open for the number of seconds entered in this field, the
file will be closed and stored in the storage directory. If nothing is entered, this
file closing criterion will not be used.
Enable Worker Thread Select this check box to enable worker thread functionality, allowing you to
configure a queue size in order to improve performance and reduce the risk of
blocking during heavy I/O.
Queue Size Enter the queue size you wish to have for the Worker Thread in this field.
Note! Since there are no natural batch boundaries within a real-time workflow, Volume and/or
Timer criteria must be set so that the file currently being written can be closed and a new one
opened. If several file closing criteria have been selected, all will apply, using a logical OR.
If the workflow is deactivated before any of the file closing criteria has been fulfilled, the UDRs
currently stored in memory will be flushed to the current batch. Hence, the size of the last file
cannot be predicted. In case of a crash, the content of the last batch cannot be predicted. The
error handling is taken care of by the Inter Workflow collection agent: if the file is corrupt, it
will be thrown away and a message is logged in the System Log. The collector will automatically
continue with the next batch in order.
For information about the general MediationZone® transaction behavior, see Section 4.1.11.8,
“Transactions”.
14.5.5.2.1.1. Retrieves
The agent retrieves commands from other agents and, based on them, changes the state of the file
currently being processed.
Command Description
Begin Batch Creates and opens a target file in a temporary directory.
End Batch Moves the file from the temporary directory to the target directory.
Cancel Batch Deletes the current file from the temporary directory.
14.5.5.2.2. Introspection
For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.
14.5.5.2.3.1. Publishes
14.5.5.2.3.2. Accesses
The agent accesses various resources from the workflow and all its agents to configure the mapping
to the Named MIMs (that is, which MIM values to pass on to the collection workflow).
An information message from the agent, generated according to the configuration in the Event
Notification Editor. For further information about the agent message event type, see Section 5.5.14,
“Agent Event”.
Reported when a file has been closed in the target directory (hence ready for collection). The message
is only used in batch workflows.
14.6.1.1. Prerequisites
The reader of this document should be familiar with:
14.6.2. Overview
The SCP protocol is intended for use with SSH servers that do not support the SFTP protocol. SCP
works by issuing remote shell commands over the SSH connection, and therefore requires that the
server understands standard shell commands, such as Unix command syntax.
14.6.3. Preparations
Prior to configuring an SCP agent, consider the following preparation notes:
• Server Identification
• Attributes
• Authentication
• Server Keys
mz.ssh.known_hosts_file
This property is set in executioncontext.xml and controls where the known hosts file is saved.
The default value is ${mz.home}/etc/ssh/known_hosts.
The SSH implementation uses JCE (Java Cryptography Extension), which means that there may be
limitations on key sizes for your Java distribution. This is usually not a problem. However, there may
be some cases where the unlimited strength cryptography policy is needed, for instance if the host
RSA keys are larger than 2048 bits (depending on the SSH server configuration). This may require
that you update the Java Platform that runs the Execution Context.
For unlimited strength cryptography on the Oracle JRE, download the JCE Unlimited Strength
Jurisdiction Policy Files from:
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
Replace the jar files in $JAVA_HOME/jre/lib/security with the files in this package.
The OpenJDK JRE does not require special handling of the JCE policy files for unlimited strength
cryptography.
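You can check which policy is in effect by querying the maximum allowed key length from the JRE that runs the Execution Context. With the limited policy, AES is typically capped at 128 bits; with the unlimited policy, the call returns Integer.MAX_VALUE. This is a standalone diagnostic sketch, not part of MediationZone®.

```java
import javax.crypto.Cipher;

// Diagnostic check of the effective JCE cryptography policy.
class JcePolicyCheck {
    static boolean unlimitedStrength() throws Exception {
        // Integer.MAX_VALUE indicates the unlimited strength policy is active.
        return Cipher.getMaxAllowedKeyLength("AES") == Integer.MAX_VALUE;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));
    }
}
```

Run this with the same Java installation that the Execution Context uses, since the policy is per JRE.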
14.6.3.2. Attributes
The SCP collection agent and the SCP forwarding agent share a number of common attributes, and
both support a number of algorithms:
14.6.3.3. Authentication
The SCP agents support authentication through either username/password or private key. Private keys
can optionally be protected by a key password. Most commonly used private key files can be imported
into MediationZone®.
keyType The type of key to be generated. Both RSA and DSA key types are supported.
directoryPath Where to save the generated keys.
Example 133.
The private key may be created using the following command line:
When the keys have been created, the private key may be imported to the SCP agent:
The agent uses a file with the known hosts and keys. It will accept the key supplied by the server if
any of the following is fulfilled:
1. The host is previously unknown. In this case the public key will be registered in the file.
2. The host is known and the public key matches the old data.
3. The host is known but has a new key, and the user has configured the agent to accept new keys.
For further information, see the description of the Advanced tab.
If the host key changes for some reason, the file will have to be removed (or edited) in order for the
new key to be accepted.
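The three acceptance rules can be summarized as a small decision procedure. The sketch below is illustrative only; the class, field, and method names are hypothetical and the real agent works against the known hosts file rather than an in-memory map.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the known-hosts acceptance rules listed above.
class KnownHosts {
    final Map<String, String> entries = new HashMap<>(); // host -> public key
    final boolean acceptNewHostKeys; // corresponds to Accept New Host Keys in the Advanced tab

    KnownHosts(boolean acceptNewHostKeys) {
        this.acceptNewHostKeys = acceptNewHostKeys;
    }

    boolean accept(String host, String publicKey) {
        String known = entries.get(host);
        if (known == null) {             // 1. Unknown host: register the key.
            entries.put(host, publicKey);
            return true;
        }
        if (known.equals(publicKey)) {   // 2. Known host, matching key.
            return true;
        }
        if (acceptNewHostKeys) {         // 3. New key, and new keys are accepted.
            entries.put(host, publicKey);
            return true;
        }
        return false;                    // Otherwise the agent aborts.
    }
}
```

Note that rule 3 is the security-sensitive one: accepting a changed key silently defeats host verification.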
Upon activation, the agent establishes an SSH2 connection and an SCP session towards the remote
host. If this fails, additional hosts are tried, if configured. On success, the source directory on the remote
host is scanned for all files matching the current filter. In addition, the Filename Sequence service
may be utilized for further control of the matching files. All files found will be fed, one after the other,
into the workflow.
When a file has been successfully processed by the workflow, the agent offers the possibility of moving,
renaming, removing or ignoring the original file. The agent can also automatically delete moved or
renamed files after a configurable number of days. In addition, the agent offers the possibility of
decompressing (gzip) files after they have been collected, before they are inserted into the workflow.
When all the files have been successfully processed, the agent stops, awaiting the next activation,
scheduled or manually initiated.
14.6.4.1. Configuration
To open the SCP collection agent configuration view from the workflow editor, either double-click
the agent, or right-click the agent and then select Configuration.
Some of the parameters can be configured in the Filename Sequence or Sort Order
service tabs. For further information, see Section 4.1.6.2.2, “Filename Sequence Tab” and
Section 4.1.6.2.3, “Sort Order Tab”.
• Connection
• Source
• Advanced
The Connection tab contains configuration settings related to the remote host and authentication.
Server Information Provider If your MediationZone® system is installed with the Multi Server
functionality, you can configure the SCP agent to collect from more than one server. For
further information, see the Multi Server File user's guide.
Host Primary host name or IP-address of the remote host to be connected. If a
connection cannot be established to this host, the Additional Hosts, specified in
the Advanced tab, are tried.
File System Type Type of file system on the remote host. This information is used to construct
the remote filenames.
Authenticate With Choice of authentication mechanism. Both password and private key
authentication are supported.
Username Username for an account on the remote host, enabling the SCP session to login.
Password Password related to the specified Username. This option only applies when
password authentication is enabled.
Private Key The Select... button will display a window where the private key may be
inserted. If the private key is protected by a passphrase, the passphrase must be
provided as well. This option only applies when private key authentication is
enabled. For further information, see Section 14.6.3.3, “Authentication”.
Enable Collection Retries Select this check box to enable repetitive attempts to connect and start a file
transfer.
When this option is selected, the agent will attempt to connect to the host as
many times as is stated in the Max Retries field described below. If the con-
nection fails, a new attempt will be made after the number of seconds entered
in the Retry Interval (s) field described below.
Retry Interval (s) Enter the time interval, in seconds, between retries.
If a connection problem occurs, the actual time interval before the first attempt
to reconnect will be the time set in the Timeout field in the Advanced tab plus
the time set in the Retry Interval (s) field. For the remaining attempts, the
actual time interval will be the number of seconds entered in this field.
Max Retries Enter the maximum number of retries to connect.
In case more than one connection attempt has been made, the number of used
retries will be reset as soon as a file transfer is completed successfully.
Note! This number does not include the original connection attempt.
The Source tab contains configurations related to the remote host, source directories, and source files.
The available configuration options depend on the choice of Collection Strategy. The following
text describes the options available when no custom strategy has been chosen.
Collection Strategy If more than one collection strategy is available in the system, a Collection
Strategy drop-down list will also be visible. For further information about
collection strategies, see Section 15, “Appendix VII - Collection Strategies”.
Directory Absolute pathname of the source directory on the remote host, where the source
files reside. The pathname might also be given relative to the home directory of
the Username account.
Filename Name of the source files on the remote host. Regular expressions according to
Java syntax apply. For further information, see
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html.
Example 134.
Compression Compression type of the source files. Determines whether the agent will decompress
the files before passing them on in the workflow or not.
Move to Temporary Directory If enabled, the source files will be moved to the automatically created
subdirectory DR_TMP_DIR in the source directory, prior to collection. This option supports
safe collection of a source file reusing the same name.
Append Suffix to Filename Enter the suffix that you want added to the file name prior to collecting it.
Important! Before you execute your workflow, make sure that none of the
file names in the collection directory include this suffix.
Inactive Source Warning (h) If enabled, when the configured number of hours has passed without any
file being available for collection, a warning message (event) will appear in the System
Log and Event Area:
Move to If enabled, the source files will be moved from the source directory (or from the
directory DR_TMP_DIR, if using Move Before Collecting) after collection, to
the directory specified in the Destination field. If Prefix or Suffix are set, the file
will be renamed as well.
If a file with the same filename already exists in the target directory, it
will be overwritten and the workflow will not abort.
Destination Absolute pathname of the directory on the remote host into which the source files
will be moved after the collection. This field is only available if Move to is enabled.
The directory has to be located in the same file system as the collected
files on the remote host. Also, the pathname must be absolute; relative
pathnames cannot be used.
Prefix and Suffix Prefix and/or suffix that will be appended to the beginning and/or the end,
respectively, of the source files after the collection. This field is only available if Move
to or Rename is enabled.
If Rename is enabled, the source files will be renamed in the current directory
(source or DR_TMP_DIR). Be sure not to assign a Prefix or Suffix that gives
files new names still matching the Filename regular expression; that would
cause the files to be collected over and over again.
• Search: Enter the part of the filename that you want to replace.
Search and Replace operate on your entries in a way that is similar to the Unix
sed utility. The identified filenames are modified and forwarded to the following
agent in the workflow.
• Use regular expressions in the Search entry to specify the part of the filename
that you want to extract.
A regular expression that fails to match the original file name will abort
the workflow.
• Enter Replace with characters and metacharacters that define the pattern and
content of the replacement text.
• Search: .new
• Replace: .old
• Search: ([A-Z]*[0-9]*)_([a-z]*)
• Replace: $2_DONE
Note that the search value divides the file name into two parts by using
capturing groups (parentheses). The replace value uses the second group
via the placeholder $2.
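The Search and Replace behavior in the examples above can be reproduced with Java's own regex classes, which use the same pattern syntax as the agent. The helper below is an illustrative sketch, not the agent's implementation; in particular, the abort-on-no-match behavior is simplified to an exception.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of the Search/Replace rename step.
class RenameSketch {
    static String rename(String filename, String search, String replace) {
        Matcher m = Pattern.compile(search).matcher(filename);
        if (!m.find()) {
            // A search expression that fails to match aborts the workflow.
            throw new IllegalStateException("No match for " + filename);
        }
        // Caution: if the result still matches the collection Filename
        // expression, the file would be collected over and over again.
        return m.replaceAll(replace);
    }
}
```

Applying the two examples: Search `.new` with Replace `.old` turns a name ending in `.new` into one ending in `.old`, and Search `([A-Z]*[0-9]*)_([a-z]*)` with Replace `$2_DONE` keeps only the second captured group.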
Keep (days) Number of days to keep moved or renamed source files on the remote host after
the collection. In order to delete the source files, the workflow has to be executed
(scheduled or manually) again, after the configured number of days.
Note that a date tag is added to the filename, determining when the file may be
removed. This field is only available if Move to or Rename is selected.
Rename If enabled, the source files will be renamed after the collection and remain in the
source directory from which they were collected (or are moved back from the directory
DR_TMP_DIR, if using Move Before Collecting).
Remove If enabled, the source files will be removed from the source directory (or from the
directory DR_TMP_DIR, if using Move Before Collecting), after the collection.
Ignore If enabled, the source files will remain in the source directory after the collection.
This option is not available if Move Before Collecting is enabled.
The Advanced tab contains configurations related to more specific use of the SCP service.
Port The port number the SCP service will use on the remote host.
Timeout (s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Accept New Host Keys If selected, the agent overwrites the existing host key when the host
presents a new key. The default behavior is to abort when the key does not match.
Selecting this option poses a security risk, since the agent will accept
new keys regardless of whether they belong to another machine.
Enable Key Re-Exchange Used to enable and disable automatic re-exchange of session keys during
ongoing connections. This can be useful if you have long-lived sessions, since
you may experience connection problems with some SFTP servers if one of the
sides initiates a key re-exchange during the session.
Additional Hosts List of additional host names or IP-addresses that may be used to establish a
connection. These hosts are tried, in sequence from top to bottom, if the agents
fail to connect to the remote host set in their Connection tabs.
Use the Add, Edit, Remove, Move up and Move down buttons to configure
the host list.
14.6.4.2.1. Emits
The agent emits commands that change the state of the file currently being processed.
Command Description
Begin Batch Will be emitted just before the first byte of each collected file is fed into a workflow.
End Batch Will be emitted just after the last byte of each collected file has been fed into the system.
14.6.4.2.2. Retrieves
The agent retrieves commands from other agents and, based on them, changes the state of the file
currently being processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
APL code where a Hint End Batch is followed by a Cancel Batch will always
result in a workflow abort. Design the APL code to evaluate the Cancel
Batch criteria first to avoid this behavior.
Hint End Batch If a Hint End Batch message is received, the collector splits the batch at the end of
the currently processed block (as received from the server), provided that no UDR is
split. If the block end occurs within a UDR, the batch will be split at the end of the
preceding UDR.
After a batch split, the collector emits an End Batch Message, followed by a Begin
Batch message (provided that there is more data in the subsequent block).
14.6.4.3. Introspection
The introspection is the type of data an agent expects and delivers.
14.6.4.4.1. Publishes
When the agent collects from multiple directories, the value of this
parameter is cleared after collection of each directory. Then, the MIM
value is updated with the listing of the next directory.
Source File Count is of the long type and is defined as a global MIM
context type.
Source Filename This MIM parameter contains the name of the currently processed file, as
defined at the source.
Source Files Left is of the long type and is defined as a header MIM
context type.
Source Host This MIM parameter contains the name of the host from which files are
collected, as defined in the Host field in the Connection tab.
Source Host is of the string type and is defined as a global MIM context
type.
Source Pathname This MIM parameter contains the path from where the currently processed file
was collected, as defined in the Source tab.
14.6.4.4.2. Accesses
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported along with the name of the source file that has been collected and inserted into the workflow.
Reported along with the name of the current file, each time a cancelBatch message is received. This
assumes the workflow has not aborted. For further information, see Section 14.6.5.4.2, “Retrieves”.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
To ensure that downstream systems will not use the files until they are closed, they are maintained in
a temporary directory on the remote host until the endBatch message is received. This behavior is
also used for cancelBatch messages. If a Cancel Batch is received, file creation is cancelled.
14.6.5.1. Configuration
To open the SCP forwarding agent configuration view, either double-click the agent, or right-click
the agent and select Configuration.... Part of the configuration may be done in the
Filename Template service tab described in Section 4.1.6.2.4, “Filename Template Tab”.
For information about the Connection tab see Figure 496, “The SCP Collection Agent Configuration
- Connection Tab”.
The Target tab contains configuration settings related to the remote host, target directories and target
files.
Input Type The agent can act on two input types. Depending on which one the agent is
configured to work with, the behavior will differ.
The default input type is bytearray, that is, the agent expects bytearrays.
Unless otherwise stated, the documentation refers to bytearray input.
Compression Compression type of the destination files. Determines whether the agent will
compress the output files as it writes them.
Note that no extra extension will be appended to the target filenames, even
if compression is selected.
Produce Empty Files If enabled, the agent will create empty output files for empty batches rather than
omitting those batches.
Handling of Already Existing Files: Select the behavior of the agent when the file already exists. The alternatives are:
• Overwrite - The old file will be overwritten and a warning will be logged in
the System Log.
• Add Suffix - If the file already exists, the suffix ".1" will be added. If this file
also exists, the suffix ".2" will be tried instead, and so on.
• Abort - This is the default selection and is the option used for upgraded
configurations, that is, workflows from an upgraded system.
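The Add Suffix probing described above can be sketched as follows. This is an illustration only, not the agent's actual implementation:

```shell
# Sketch of the "Add Suffix" behavior: probe name, name.1, name.2, ...
# until a filename is found that does not already exist.
next_free_name() {
  name="$1"; n=0
  while [ -e "$name" ]; do
    n=$((n + 1))
    name="$1.$n"
  done
  echo "$name"
}
```

For example, if out.dat and out.dat.1 already exist, the next file would be written as out.dat.2.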
Use Temporary Directory: If this option is selected, the agent will move the file to a temporary directory
before moving it to the target directory. After the whole file has been transferred
to the target directory, and the endBatch message has been received, the temporary
file is removed from the temporary directory.
Use Temporary File: If there is no write access to the target directory and, hence, a temporary directory
cannot be created, the agent can move the file to a temporary file that is stored
directly in the target directory. After the whole file has been transferred, and the
endBatch message has been received, the temporary file will be renamed.
The temporary filename is unique for every execution of the workflow. It consists
of a workflow and agent ID, and a file number.
Abort Handling: Select how to handle the file in case of cancelBatch or rollback, either Delete
Temporary File or Leave Temporary File.
Note! When a workflow aborts, the file will not be removed until the next time
the workflow is run.
The Advanced tab contains configurations related to more specific use of the SCP service, which
might not be frequently utilized.
Port The port number the SCP service will use on the remote host.
Timeout (s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Accept New Host Keys: If selected, the agent overwrites the existing host key when the host is represented
with a new key. The default behavior is to abort when the key mismatches.
Warning! Selecting this option causes a security risk since the agent will accept new
keys regardless of whether they belong to another machine.
Enable Key Re-Exchange: Used to enable and disable automatic re-exchange of session keys during ongoing
connections. This can be useful if you have long-lived sessions, since you may
experience connection problems with some SFTP servers if one of the sides initiates
a key re-exchange during the session.
Additional Hosts: List of additional host names or IP addresses that may be used to establish a
connection. These hosts are tried, in sequence from top to bottom, if the agent
fails to connect to the remote host set in the Connection tab.
Use the Add, Edit, Remove, Move up and Move down buttons to configure
the host list.
Execute: Select between the two options:
• Before Move: Execute the following command and its arguments prior to
transfer.
• After Move: Execute the following command and its arguments on the local
copy of the transferred file, after transfer.
The temporary filename is inserted as the second last parameter, and the final
filename is inserted as the last parameter, automatically. This means that if, for
instance, no parameter is given in the field, the arguments will be as follows:
$1=<temporary_filename> $2=<final_filename>
If three parameters are given in the Arguments field, the arguments are set as:
$1=<parameter_value_#1>
$2=<parameter_value_#2>
$3=<parameter_value_#3>
$4=<temporary_filename>
$5=<final_filename>
The Backlog tab contains configurations related to the backlog functionality. If the backlog is not enabled,
the files will be moved directly to their final destination when an end batch message is received. If,
however, the backlog is enabled, the files will first be moved to a directory called DR_READY and then to
their final destination. For further information about transaction behavior, see Section 14.6.5.4.2,
“Retrieves”.
When the backlog is initialized, and when backlogged files are transferred, a note is registered in the System
Log.
Enable Backlog: Enables the backlog functionality. When not selected, the agent's behavior is
similar to the standard SFTP forwarding agent.
Directory: Base directory in which the agent will create subdirectories to handle backlogged
files. Absolute or relative path names can be used.
Type: Files is the maximum number of files allowed in the backlog folder. Bytes
is the total size of the files that reside in the backlog folder. If a limit
is exceeded, the workflow will abort.
Size: Enter the maximum number of files or bytes that the backlog folder can contain.
Processing Order: Determines the order in which the backlogged data will be processed once
the connection is reestablished. Select between First In First Out (FIFO) and Last
In First Out (LIFO).
Duplicate File Handling: Specifies the behavior if a file with the same file name as the one being
transferred is detected. The options are Abort or Overwrite, and the action
is taken both when a file is transferred to the target directory and when it is transferred to the backlog.
Note that this global backlog memory buffer is used and shared by this and any other forwarding agent
that transfers files to a remote server. The same memory buffer is used for all ongoing transactions on
the same execution context.
When several workflows are scheduled to run simultaneously, and the forwarding agents are assigned
with the backlog function, there is a risk that the buffer may be too small. In such a case, it is
recommended that you increase the size of this property.
Example 136.
If no property is set the default value of 10 MB will be used. The amount allocated will be printed out
in the Execution Context's log file. This memory will not affect the Java heap size and is used by the
agent when holding a copy of the file being transferred.
internal MultiForwardingUDR {
// Entire file content
bytearray content;
// Target filename and directory
FNTUDR fntSpecification;
};
Every received MultiForwardingUDR ends up in its filename-appropriate file. The output filename
and path is specified by the fntSpecification field. When the files are received they are written
to temp files in the DR_TMP_DIR directory situated in the root output folder. The files are moved to
their final destination when an end batch message is received. A runtime error will occur if any of the
fields has a null value or the path is invalid on the target file system.
A UDR of the type MultiForwardingUDR whose target filename is not identical to that of its predecessor
is saved in a new output file.
After a target filename that is not identical to its predecessor is saved, you cannot use the first
filename again. For example: saving filename B after saving filename A prevents you from using
A again. Instead, you should first save all the A filenames, then all the B filenames, and so forth.
Non-existing directories will be created if the Create Non-Existing Directories checkbox under the
Filename Template tab is checked; if it is not, a runtime error will occur.
When MultiForwardingUDRs are expected, configuration options in the Filename Template that refer
to bytearray input will be ignored. For information about the Filename Template, see Section 4.1.6.2.4,
“Filename Template Tab”.
Example 137.
This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of the MultiForwardingUDR type.
import ultra.FNT;
MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent){
//Create the FNTUDR
FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir);
fntAddDirDelimiter(fntudr);//Add a directory
fntAddString(fntudr, file);//Add a file
MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;
return multiForwardingUDR;
}
consume {
    bytearray file1Content;
    strToBA (file1Content, "file nr 1 content");
    bytearray file2Content;
    strToBA (file2Content, "file nr 2 content");
    //Route the two UDRs to the forwarding agent. The directory and
    //file names below are illustrative examples.
    udrRoute (createMultiForwardingUDR ("dir1", "file1", file1Content));
    udrRoute (createMultiForwardingUDR ("dir2", "file2", file2Content));
}
The Analysis agent in the example will send two MultiForwardingUDRs
to the forwarding agent. Two files with different contents will be placed in two separate
subfolders in the user-defined directory. The Create Non-Existing Directories check box under
the Filename Template tab in the configuration of the forwarding agent must be checked if the
directories do not exist.
14.6.5.4.1. Emits
None.
14.6.5.4.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.
Command Description
Begin Batch When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if not already created. Then, a target file is created
and opened in the temporary directory.
End Batch When an End Batch message is received, the target file in DR_TMP_DIR is closed
and, finally, the file is moved from the temporary directory to the target directory.
If backlog functionality is enabled, an additional step is taken: the file is moved
from DR_TMP_DIR to DR_READY and then to the target directory. If the last step
fails, the file is left in DR_READY.
Cancel Batch If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.
14.6.5.5. Introspection
The agent consumes bytearray or MultiForwardingUDR types.
14.6.5.6.1. Publishes
This parameter is of the string type and is defined as a batch MIM context
type.
File Transfer This MIM parameter contains a timestamp, indicating when the target file is
Timestamp created in the temporary directory.
Target File Size is of the long type and is defined as a trailer MIM
context type.
Target Hostname This MIM parameter contains the name of the target host, as defined in the
Connection tab of the agent.
14.6.5.6.2. Accesses
Various resources from the Filename Template configuration to construct the target filename.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported, along with the name of the target file, when the file is successfully written to the target
directory.
For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.
14.7.1.1. Prerequisites
The reader of this information should be familiar with:
• APL code
14.7.2. Preparations
Prior to configuring an SFTP agent, consider the following preparation notes:
• Server Identification
• Attributes
• Authentication
• Server Keys
mz.ssh.known_hosts_file
This property is set in executioncontext.xml to manage where the known hosts file is saved. The
default value is ${mz.home}/etc/ssh/known_hosts.
The SSH implementation uses JCE (Java Cryptography Extension), which means that there may be
limitations on key sizes for your Java distribution. This is usually not a problem. However, there may
be some cases where the unlimited strength cryptography policy is needed, for instance if the host
RSA keys are larger than 2048 bits (depending on the SSH server configuration). This may require
that you update the Java Platform that runs the Execution Context.
For unlimited strength cryptography on the Oracle JRE, download the JCE Unlimited Strength
Jurisdiction Policy Files from:
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
Replace the jar files in $JAVA_HOME/jre/lib/security with the files in this package.
The OpenJDK JRE does not require special handling of the JCE policy files for unlimited strength
cryptography.
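As an illustrative sketch, the replacement can be scripted as follows. The jar names local_policy.jar and US_export_policy.jar are the files shipped in the policy package; the directory layout is an assumption that must match your installation:

```shell
# Sketch: back up and replace the JCE policy jars (paths assumed; adjust
# JAVA_HOME and the directory holding the unpacked policy package).
install_jce_policy() {
  sec_dir="$1/jre/lib/security"   # $1 = JAVA_HOME
  policy_dir="$2"                 # $2 = unpacked policy package directory
  for jar in local_policy.jar US_export_policy.jar; do
    cp "$sec_dir/$jar" "$sec_dir/$jar.orig"   # keep the restricted originals
    cp "$policy_dir/$jar" "$sec_dir/$jar"     # install unlimited versions
  done
}
```

Keeping the `.orig` backups makes it easy to revert to the restricted-strength policy if needed.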
14.7.2.2. Attributes
The SFTP collection agent and the SFTP forwarding agent share a number of common attributes. They
are both supported by a number of algorithms:
14.7.2.3. Authentication
The SFTP agents support authentication through either username/password or private key. Private
keys can optionally be protected by a key password. Most commonly used private key files can be
imported into MediationZone®.
keyType The type of key to be generated. Both RSA and DSA key types are supported.
directoryPath The directory in which you want to save the generated keys.
Example 138.
The private key may be created using the following command line:
When the keys are created the private key may be imported to the SFTP agent:
The agent uses a file with the known hosts and keys. It will accept the key supplied by the server if
either of the following is fulfilled:
1. The host is previously unknown. In this case the public key will be registered in the file.
2. The host is known and the public key matches the old data.
3. The host is known but has a new key, and the agent has been configured to accept new keys.
For further information, see the Advanced tab.
If the host key changes for some reason, the file will have to be removed (or edited) in order for the
new key to be accepted.
Upon activation, the agent establishes an SSH2 connection and an SFTP session towards the remote
host. If this fails, additional hosts are tried, if configured. On success, the source directory on the remote
host is scanned for all files matching the current filter. In addition, the Filename Sequence service
may be utilized for further control of the matching files. All files found will be fed one after the other
into the workflow.
When a file has been successfully processed by the workflow, the agent offers the possibility of moving,
renaming, removing or ignoring the original file. The agent can also automatically delete moved or
renamed files after a configurable number of days. In addition, the agent offers the possibility of
decompressing (gzip) files after they have been collected, before they are inserted into the workflow.
When all the files have been successfully processed, the agent stops, awaiting the next activation,
scheduled or manually initiated.
14.7.3.1. Configuration
To open the SFTP collection agent configuration view from the workflow editor, either double-click
on the agent, or right-click on the agent and select the Configuration option.
Note! You can configure part of the parameters in the Filename Sequence or Sort Order service
tabs. For further information, see Section 4.1.6.2.2, “Filename Sequence Tab” and
Section 4.1.6.2.3, “Sort Order Tab”.
• Connection
• Source
• Advanced
The Connection tab contains configuration settings related to the remote host and authentication.
Server Information Provider: If your MediationZone® system is installed with the Multi Server functionality,
you can configure the SFTP agent to collect from more than one server. For
further information, see the Multi Server File user's guide.
Host: Primary host name or IP address of the remote host to be connected. If a
connection cannot be established to this host, the Additional Hosts, specified in the
Advanced tab, are tried.
File System Type Type of file system on the remote host. This information is used to construct the
remote filenames.
Authenticate With: Choice of authentication mechanism. Both password and private key
authentication are supported.
Username Username for an account on the remote host, enabling the SFTP session to login.
Password Password related to the specified Username. This option only applies when
password authentication is enabled.
Private Key: When you select this option, a Select... button will appear, which opens a window
where the private key may be inserted. If the private key is protected by a
passphrase, the passphrase must be provided as well. This option only applies when
private key authentication is enabled. For further information, see Section 14.7.2.3,
“Authentication”.
Enable Collection Retries: Select this check box to enable repetitive attempts to connect and start a file
transfer.
When this option is selected, the agent will attempt to connect to the host as
many times as is stated in the Max Retries field described below. If the
connection fails, a new attempt will be made after the number of seconds entered in the
Retry Interval (s) field described below.
Retry Interval (s): Enter the time interval, in seconds, between retries.
If a connection problem occurs, the actual time interval before the first attempt
to reconnect will be the time set in the Timeout field in the Advanced tab plus
the time set in the Retry Interval (s) field. For the remaining attempts, the actual
time interval will be the number of seconds entered in this field.
Max Retries Enter the maximum number of retries to connect.
In case more than one connection attempt has been made, the number of used
retries will be reset as soon as a file transfer is completed successfully.
Note! This number does not include the original connection attempt.
Enable RESTART Retries: Select this check box to enable the agent to send a RESTART command if the
connection has been broken during a file transfer. The RESTART command
contains information about where in the file you want to resume the file transfer.
When this option is selected, the agent will attempt to re-establish the connection,
and resume the file transfer from the point in the file stated in the RESTART
command, as many times as is entered in the Max Retries field described below.
When a connection has been re-established, a RESTART command will be sent
after the number of seconds entered in the Retry Interval (s) field described
below.
Note! The RESTART Retries settings will not work if you have selected
to decompress the files in the Source tab, see Section 14.7.3.1.2, “Source
Tab”.
Retry Interval (s) Enter the time interval, in seconds, you want to wait before initiating a restart in
this field. This time interval will be applied for all restart retries.
If a connection problem occurs, the actual time interval before the first attempt
to send a RESTART command will be the time set in the Timeout field in the
Advanced tab plus the time set in the Retry Interval (s) field. For the remaining
attempts, the actual time interval will be the number of seconds entered in this field.
Max Retries Enter the maximum number of restarts per file you want to allow.
In case more than one attempt to send the RESTART command has been made,
the number of used retries will be reset as soon as a file transfer is completed
successfully.
The Source tab contains configurations related to the remote host, source directories and source files.
The configuration available can be modified by creating and selecting a customized Collection Strategy.
The following text describes the configuration options available when no customized Collection
Strategy has been selected.
Collection Strategy: If there is more than one collection strategy available in the system, a Collection
Strategy drop-down list will also be visible, containing the available strategies.
For further information about the nature of the collection strategy, see Section 15,
“Appendix VII - Collection Strategies”.
Directory Absolute pathname of the source directory on the remote host, where the source
files reside. The pathname might also be given relative to the home directory of
the Username account.
Filename: Name of the source files on the remote host. Regular expressions according to
Java syntax apply. For further information, see
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
Example 139.
Compression: Compression type of the source files. Determines whether the agent will
decompress the files before passing them on in the workflow or not.
Move to Temporary Directory: If enabled, the source files will be moved to the automatically created subdirectory
DR_TMP_DIR in the source directory, prior to collection. This option supports
safe collection of a source file reusing the same name.
Append Suffix to Filename: Enter the suffix that you want added to the file name prior to collecting it.
Important! Before you execute your workflow, make sure that none of the
file names in the collection directory include this suffix.
Inactive Source Warning (h): If enabled, when the configured number of hours have passed without any file
being available for collection, a warning message (event) will appear in the System
Log and Event Area:
Move to If enabled, the source files will be moved from the source directory (or from the
directory DR_TMP_DIR, if using Move Before Collecting) after collection, to
the directory specified in the Destination field. If Prefix or Suffix are set, the file
will be renamed as well.
Note! If a file with the same filename already exists in the target directory,
this file will be overwritten and the workflow will not abort.
Destination: Absolute pathname of the directory on the remote host into which the source files
will be moved after the collection. This field is only available if Move to is
enabled.
Note! The Directory has to be located in the same file system as the
collected files at the remote host. Also, absolute pathnames must be defined.
Relative pathnames cannot be used.
Prefix and Suffix: Prefix and/or suffix that will be appended to the beginning and/or the end,
respectively, of the source files after the collection. These fields are only available if
Move to or Rename is enabled.
Search and Replace:
Note! To apply Search and Replace, select either Move to or Rename.
• Search: Enter the part of the filename that you want to replace.
Search and Replace operate on your entries in a way that is similar to the Unix
sed utility. The identified filenames are modified and forwarded to the following
agent in the workflow.
• Use a regular expression in the Search entry to specify the part of the filename
that you want to extract.
Note! A regular expression that fails to match the original file name will
abort the workflow.
• Enter Replace with characters and metacharacters that define the pattern and
content of the replacement text.
• Search: .new
• Replace: .old
• Search: ([A-Z]*[0-9]*)_([a-z]*)
• Replace: $2_DONE
Note that the search value divides the file name into two parts by using
brackets. The replace value applies to the second part by using the
placeholder $2.
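Because the feature behaves like the Unix sed utility, the two pairs above can be tried out with sed itself; note that sed writes the group reference as \2 where the agent's Replace field uses $2 (the sample filenames are assumptions for illustration):

```shell
# First pair: replace ".new" with ".old". The dot is a regex metacharacter
# (matches any character), which still yields the intended result here.
echo "data.new" | sed -E 's/.new/.old/'
# prints: data.old

# Second pair: capture groups, referenced as \2 in sed ($2 in the agent).
echo "AB12_log" | sed -E 's/([A-Z]*[0-9]*)_([a-z]*)/\2_DONE/'
# prints: log_DONE
```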
Keep (days) Number of days to keep moved or renamed source files on the remote host after
the collection. In order to delete the source files, the workflow has to be executed
(scheduled or manually) again, after the configured number of days.
Note! A date tag is added to the filename, determining when the file may
be removed. This field is only available if Move to or Rename is selected.
Rename: If enabled, the source files will be renamed after the collection, remaining in the
source directory from which they were collected (or moved back from the directory
DR_TMP_DIR, if using Move Before Collecting).
Note! You must avoid creating new file names that still match the criteria
for which files the agent collects; otherwise, the files will be collected
over and over again.
Remove: If enabled, the source files will be removed from the source directory (or from
the directory DR_TMP_DIR, if using Move Before Collecting), after the
collection.
Ignore If enabled, the source files will remain in the source directory after the collection.
This option is not available if Move Before Collecting is enabled.
Route FileReferenceUDR: Select this check box if you want to forward the data to an SQL Loader agent.
See the description of the SQL Loader agent for further information.
The Advanced tab contains configurations related to more specific use of the SFTP service.
Port The port number the SFTP service will use on the remote host.
Timeout (s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Accept New Host Keys: If selected, the agent overwrites the existing host key when the host is
represented with a new key. The default behavior is to abort when the key
mismatches.
Warning! Selecting this option causes a security risk since the agent
will accept new keys regardless of whether they might belong to another machine.
Enable Key Re-Exchange: Used to enable and disable automatic re-exchange of session keys during
ongoing connections. This can be useful if you have long-lived sessions, since
you may experience connection problems with some SFTP servers if one of the
sides initiates a key re-exchange during the session.
Additional Hosts: List of additional host names or IP addresses that may be used to establish a
connection. These hosts are tried, in sequence from top to bottom, if the agent
fails to connect to the remote host set in the Connection tab.
Use the Add, Edit, Remove, Move up and Move down buttons to configure
the host list.
14.7.3.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch Will be emitted just before the first byte of each collected file is fed into a workflow.
End Batch Will be emitted just after the last byte of each collected file has been fed into the system.
14.7.3.2.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.
Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.
APL code where Hint End Batch is followed by a Cancel Batch will always
result in workflow abort. Make sure to design the APL code to first evaluate
the Cancel Batch criteria to avoid this sort of behavior.
Hint End Batch If a Hint End Batch message is received, the collector splits the batch at the end of
the current processed block (as received from the server), provided that no UDR is
split. If the block end occurs within a UDR, the batch will be split at the end of the
preceding UDR.
After a batch split, the collector emits an End Batch Message, followed by a Begin
Batch message (provided that there is more data in the subsequent block).
14.7.3.3. Introspection
The introspection is the type of data an agent expects and delivers.
14.7.3.4.1. Publishes
Note! When the agent collects from multiple directories, the MIM value
is cleared after collection of each directory. Then, the MIM value is up-
dated with the listing of the next directory.
Source File Count is of the long type and is defined as a global MIM
context type.
Source Filename This MIM parameter contains the name of the currently processed file, as
defined at the source.
Source Files Left is of the long type and is defined as a header MIM
context type.
Source File Size This MIM parameter provides the size of the file that is about to be read. The
file is located on the server.
Source File Size is of the long type and is defined as a header MIM
context type.
Source Host This MIM parameter contains the name of the host from which files are
collected, as defined in the Host field in the Connection tab.
Source Host is of the string type and is defined as a global MIM context
type.
Source Pathname This MIM parameter contains the path from where the currently processed file
was collected, as defined in the Directory field in the Source tab.
14.7.3.4.2. Accesses
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
Reported along with the name of the source file that has been collected and inserted into the workflow.
Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted. For further information, see Section 14.7.4.4.2, “Retrieves”.
For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.
To ensure that downstream systems will not use the files until they are closed, they are maintained in
a temporary directory on the remote host until the endBatch message is received. This behavior is
also used for cancelBatch messages. If a Cancel Batch is received, file creation is cancelled.
14.7.4.1. Configuration
The SFTP forwarding agent configuration window is displayed when you right-click the agent in a
workflow and select the Configuration... option, or double-click the agent. Part of the
configuration can be made in the Filename Template service tab described in Section 4.1.6.2.4, “Filename
Template Tab”.
See description of the Connection tab in Figure 503, “The SFTP Collection Agent Configuration -
Connection Tab”.
The Target tab contains configuration settings related to the remote host, target directories and target
files.
Input Type The agent can act on two input types. Depending on which one the agent is
configured to work with, the behavior will differ.
The default input type is bytearray, that is, the agent expects bytearrays. If
nothing else is stated, the documentation refers to input of bytearray.
Compression Compression type of the destination files. Determines whether the agent will
compress the output files as it writes them.
Produce Empty Files: If you require empty files to be created, check this setting.
Handling of Already Existing Files: Select the behavior of the agent when the file already exists. The alternatives are:
• Overwrite - The old file will be overwritten and a warning will be logged in
the System Log.
• Add Suffix - If the file already exists, the suffix ".1" will be added. If this file
also exists, the suffix ".2" will be tried instead, and so on.
• Abort - This is the default selection and is the option used for upgraded
configurations, that is, workflows from an upgraded system.
Use Temporary Directory: If this option is selected, the agent will move the file to a temporary directory
before moving it to the target directory. After the whole file has been transferred
to the target directory, and the endBatch message has been received, the temporary
file is removed from the temporary directory.
Use Temporary File: If there is no write access to the target directory and, hence, a temporary directory
cannot be created, the agent can move the file to a temporary file that is stored
directly in the target directory. After the whole file has been transferred, and the
endBatch message has been received, the temporary file will be renamed.
The temporary filename is unique for every execution of the workflow. It consists
of a workflow and agent ID, and a file number.
Abort Handling: Select how to handle the file in case of cancelBatch or rollback, either Delete
Temporary File or Leave Temporary File.
Note! When a workflow aborts, the file will not be removed until the next
time the workflow is started.
The Advanced tab contains configurations related to more specific use of the SFTP service, which
might not be frequently utilized.
Port The port number the SFTP service will use on the remote host.
Timeout (s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Accept New Host Keys: If selected, the agent overwrites the existing host key when the host is represented
with a new key. The default behavior is to abort when the key mismatches.
Warning! Selecting this option causes a security risk since the agent will accept new
keys regardless of whether they might belong to another machine.
Enable Key Re-Exchange: Used to enable and disable automatic re-exchange of session keys during ongoing
connections. This can be useful if you have long-lived sessions, since you may
experience connection problems with some SFTP servers if one of the sides initiates
a key re-exchange during the session.
Additional Hosts: List of additional host names or IP addresses that may be used to establish a
connection. These hosts are tried, in sequence from top to bottom, if the agent
fails to connect to the remote host set in the Connection tab.
Use the Add, Edit, Remove, Move up and Move down buttons to configure the
host list.
Execute: During transfer, a temporary file is written, which is then moved to the final file.
Select whether the script should be executed on the transferred working copy or the final
file, with the following two options:
• Before Move: Execute the following command and its arguments on the
temporary file.
• After Move: Execute the following command and its arguments on the final
file.
Command: Enter a command or a script. The script will be executed on the remote system
from its working directory.
Argument: This field is optional. Each entered parameter value has to be separated from the
preceding value with a space.
The temporary filename is automatically inserted as the second to last parameter,
and the final filename as the last parameter. This means that if, for instance,
no parameter is given in the field, the arguments will be as follows:
$1=<temporary_filename> $2=<final_filename>
If three parameters are given in the Arguments field, the arguments are set as:
$1=<parameter_value_#1>
$2=<parameter_value_#2>
$3=<parameter_value_#3>
$4=<temporary_filename>
$5=<final_filename>
If After Move has been selected, the <temporary_filename> argument is excluded.
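As an illustration of this argument convention, assume a configured script named log_transfer with one configured parameter; the agent appends the temporary and final filenames automatically. All names below are hypothetical:

```shell
# Simulates how the agent invokes the configured command:
#   log_transfer <parameter_1> <temporary_filename> <final_filename>
log_transfer() {
  # $1 = configured argument, $2 = temporary file, $3 = final file
  printf 'arg=%s tmp=%s final=%s\n' "$1" "$2" "$3"
}
# prints: arg=batch42 tmp=DR_TMP_DIR/wf1.7.tmp final=out/file.dat
log_transfer batch42 DR_TMP_DIR/wf1.7.tmp out/file.dat
```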
The Backlog tab contains configurations related to the backlog functionality. If the backlog is not
enabled, the files are moved directly to their final destination when an endBatch message is received.
If the backlog is enabled, however, the files are first moved to a directory called DR_READY and then
to their final destination. For further information about the transaction behavior, see Section 14.7.4.4.2,
“Retrieves”.
When backlog is initialized, and when backlogged files are transferred, a note is registered in the
System Log.
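The file movements described above can be illustrated with equivalent shell commands; the directory and file names are examples only:

```shell
# Transfer with backlog enabled (illustrative names):
mkdir -p target/DR_TMP_DIR target/DR_READY
# 1. The file is written to the temporary directory during transfer.
echo "batch data" > target/DR_TMP_DIR/wf1.agent2.7.tmp
# 2. On endBatch, the file is moved to DR_READY...
mv target/DR_TMP_DIR/wf1.agent2.7.tmp target/DR_READY/file.dat
# 3. ...and then to its final destination. If this step fails,
#    the file is left in DR_READY and marked as backlogged.
mv target/DR_READY/file.dat target/file.dat
```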
Note! This global backlog memory buffer is used and shared by this and any other forwarding
agent that transfers files to a remote server. The same memory buffer is used for all ongoing
transactions on the same execution context.
When several workflows are scheduled to run simultaneously, and the forwarding agents are assigned
the backlog function, there is a risk that the buffer may be too small. In that case, it is recommended
that you increase the value of this property.
Example 141.
If no property is set, the default value of 10 MB is used. The amount allocated is printed
in the Execution Context's log file. This memory does not affect the Java heap size and is used by the
agent when holding a copy of the file being transferred.
internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};
Every received MultiForwardingUDR is written to the file indicated by its filename. The output filename
and path are specified by the fntSpecification field. When the files are received, they are written
to temporary files in the DR_TMP_DIR directory in the root output folder. The files are moved to
their final destination when an end batch message is received. A runtime error occurs if any of the
fields has a null value or if the path is invalid on the target file system.
A UDR of the type MultiForwardingUDR with a target filename that differs from that of the preceding
UDR is saved in a new output file.
Note! Once a new target filename has been used, you cannot return to a previous one. For
example, saving filename B after saving filename A prevents you from using A again.
Instead, you should first save all the A filenames, then all the B filenames, and so forth.
Non-existing directories are created if the Create Non-Existing Directories check box under the
Filename Template tab is selected; if it is not, a runtime error occurs. When MultiForwardingUDRs
are expected, configuration options referring to bytearray input are ignored. For further information
about Filename Template, see Section 4.1.6.2.4, “Filename Template Tab”.
Example 142.
This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of the MultiForwardingUDR type.
import ultra.FNT;
MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent){
//Create the FNTUDR
FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir); //Add a directory
fntAddDirDelimiter(fntudr); //Add a directory delimiter
fntAddString(fntudr, file); //Add a file
MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;
return multiForwardingUDR;
}
consume {
    bytearray file1Content;
    strToBA (file1Content, "file nr 1 content");
    bytearray file2Content;
    strToBA (file2Content, "file nr 2 content");
    // Route one MultiForwardingUDR per file. The directory and
    // file names below are examples.
    udrRoute (createMultiForwardingUDR ("dir1", "file1", file1Content));
    udrRoute (createMultiForwardingUDR ("dir2", "file2", file2Content));
}
The Analysis agent mentioned previously in the example will send two MultiForwardingUDRs
to the forwarding agent. Two files with different contents will be placed in two separate sub
folders in the root directory. If the directories do not exist, the Create Non-Existing Directories
check box in the forwarding agent Configuration dialog under the Filename Template tab must
be selected.
14.7.4.4.1. Emits
14.7.4.4.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.
Command Description
Begin Batch: When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if it does not already exist. Then, a target file is created
and opened in the temporary directory.
End Batch: When an End Batch message is received, the target file in DR_TMP_DIR is closed
and, finally, the file is moved from the temporary directory to the target directory.
If backlog functionality is enabled, an additional step is taken: the file is moved
from DR_TMP_DIR to DR_READY and then to the target directory. If the last step
fails, the file is left in DR_READY and marked as backlogged.
Cancel Batch: If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.
14.7.4.5. Introspection
The agent consumes bytearray or MultiForwardingUDR types.
14.7.4.6.1. Publishes
This parameter is of the string type and is defined as a batch MIM context
type.
File Transfer Timestamp: This MIM parameter contains a timestamp, indicating when the target file is
created in the temporary directory.
Target File Size: Is of the long type and is defined as a trailer MIM context type.
Target Hostname: This MIM parameter contains the name of the target host, as defined in the
Target or Advanced tab of the agent.
14.7.4.6.2. Accesses
The agent accesses various resources from the Filename Template configuration to construct the target filename.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
An agent message event is reported, along with the name of the target file, when the file is successfully
written to the target directory.
For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.
14.8.1.1. Prerequisites
MediationZone® supports a number of different database types, for example Oracle, SQL Server, and
Derby. In this user guide, the user is assumed to know the specifics of the SQL syntax needed to retrieve
the information from the database.
When the workflow is executed, the agent executes an SQL query, based on the user configuration,
and retrieves all rows matching the statement. For each row, a UDR is created and populated according
to the assignments in the configuration window.
14.8.2.1. Configuration
The SQL collection agent configuration window is displayed when you double-click a database agent
in a workflow, or right-click it and select Configuration....
The SQL tab contains configurations related to the SQL query to use to retrieve information from the
source database, as well as the UDR type to be created and how the UDRs are populated by the agent.
Database: Profile name of the database that the agent will connect to and retrieve data from. For
further information about database profile setup, see Section 9.3, “Database Profile”.
SQL Expression: Enter the SQL statement specifying the query MediationZone® should send
to the database.
By right-clicking in the pane and selecting MIM Assistance..., the MIM Browser appears.
Select a MIM value to be used in the SQL query; the value of the MIM is used in the
query during execution. The name of the MIM value, for example "Workflow.Batch Count",
is displayed in blue as "$(Workflow.Batch Count)" in the text field.
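For illustration, a query using a MIM value in this way might look as follows; the table and column names are hypothetical:

```sql
SELECT call_id, anum, duration
FROM   cdr_source
WHERE  batch_id = $(Workflow.Batch Count)
```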
Stored procedures are supported. When the collection agent is used to produce
output from a procedure, the JDBC support for output parameters is used.
The character "?" marks an output parameter in the SQL statement in the
agent. An example of a procedure with one input argument and one output argument
could have an SQL statement looking like this:
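The original statement is not reproduced here; as a hedged sketch, a JDBC-style call to a procedure with one input argument (taken from a MIM value) and one output argument marked with "?" could look like this (the procedure name is hypothetical):

```sql
{call collect_batch($(Workflow.Batch Count), ?)}
```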
The procedure is called and the value of the output parameter ("?") is assigned
to the configured UDR field. When output parameters are used, only one UDR is
produced in the batch.
The exact supported syntax for stored procedures varies between databases. For example,
calling an Oracle function can be done via:
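As a hedged sketch, such a call can use the JDBC escape syntax with the function result bound to the output marker; the function name is hypothetical:

```sql
{? = call my_oracle_function($(Workflow.Batch Count))}
```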
The statement syntax is not validated in the GUI, but references to MIM values are.
If an incorrect SQL statement is entered, it will generate an exception at runtime.
UDR Type: Type of UDR mapped from the result set and routed into the workflow.
Clicking the Browse button next to the field opens the UDR Internal Format Browser,
where one and only one UDR type can be selected.
UDR Fields: The table represents the mapping from the result set, returned when executing the
statement entered in the SQL field, to a specified Value in the UDR.
The agent emits commands changing the state of the file currently processed.
Command Description
Begin Batch: The agent will emit beginBatch before the first UDR from the result set is routed into
the workflow.
End Batch: The agent will emit endBatch after the last row in the result set has been mapped to a
UDR and routed into the workflow.
14.8.2.2.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.
Command Description
Hint End Batch: When hintEndBatch is called, the agent calls endBatch followed by beginBatch
(if more records exist in the result set). It then continues to process the result
set.
Cancel Batch: Cancel Batch is not supported by the agent.
14.8.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
The agent produces the UDR type selected from the UDR Type.
The agent does not publish nor access any MIM parameters.
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
• SQL: XXX
The debug message is sent when the SQL agent creates its SQL string to send to the database.
The agent also enables you to populate database columns with MIM values either by using a plain
SQL statement, or by invoking a stored procedure that inserts the data.
14.8.3.1. Configuration
You open the SQL forwarding agent configuration view from the workflow editor. In the workflow
template, either double-click the agent icon, or right-click it and select Configuration.
The SQL tab contains configuration data that is related to the target database and the UDR Type.
Figure 511. The SQL Forwarding Agent Configuration View - SQL Tab
Database: Profile defining the database that the agent is supposed to connect and forward data
to. Clicking the Browse button next to the field opens a browser where one
and only one database profile can be selected. For further information about database
profile setup, see Section 9.3, “Database Profile”.
UDR Type: The UDR type the agent accepts as input.
Clicking the Browse button next to the field opens the UDR Internal
Format Browser, where one and only one UDR type can be selected.
SQL Statement: Enter the SQL statement MediationZone® should send to the database.
By right-clicking in the pane and selecting MIM Assistance..., the MIM Browser appears.
Select a MIM value to be used in the SQL query; the value of the MIM is used in the
query during execution. The name of the MIM value, for example "Workflow.Batch Count",
is displayed in blue as "$(Workflow.Batch Count)" in the text field.
By right-clicking in the pane and selecting UDR Assistance..., the UDR Internal Format
Browser appears.
Select a field from the UDR specified in the UDR Type selector. The name of the
UDR field, for example "UDR.Fieldname", is displayed in green as "$(UDR.Fieldname)"
in the text field. If the input UDR type is changed after writing the SQL syntax,
the GUI validation fails (unless the different UDR types have identical field names).
The field value is used as an input variable in the SQL Statement in the same way
as MIM values are.
Stored procedures are supported. The forwarding agent uses JDBC to call a stored
procedure in the same way as a normal call.
The exact supported syntax for stored procedures varies between databases. An example
of a procedure with two input arguments could have an SQL statement looking
like this:
Example 143.
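The original statement is not reproduced here; as a hedged sketch, a call to a procedure with two input arguments, one taken from a UDR field and one from a MIM value, could look like this (the procedure name is hypothetical):

```sql
{call store_record($(UDR.Fieldname), $(Workflow.Batch Count))}
```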
The statement syntax is not validated in the GUI, but references to MIM values
are. If an incorrect SQL statement is entered, it will generate an exception at
runtime.
Commit Window Size: The number of UDRs to be processed between each database commit command.
This value may be used to tune performance. If tables are small and contain no
binary objects, the value may be set higher than the default. Default is 1000.
A number field where an integer can be entered. If the check box is
enabled, the agent calls commit on the database after reaching the specified number
of successful executions. It also calls commit when receiving endBatch. If the
check box is disabled, the agent only commits when receiving endBatch.
Route on SQL Exception: Check to prevent the workflow from aborting when selected exceptions occur.
Such exceptions are filtered by the rule that you specify in the Regular Expression Criteria
editing pane. Instead of aborting the workflow due to these exceptions, the workflow
proceeds to the agent that you can route the selected exceptions to.
Note! Since the error message contains linefeeds, the regular expression has to
be adjusted accordingly.
Start the regular expression with "(?s)" to ignore linefeed, for example:
(?s).*ORA-001.*
When the agent identifies erroneous data it generates an Agent Message Event. For
further information, see Section 5.5.14, “Agent Event”.
MediationZone® specific database tables in the Platform database should never be used
as targets for output, as this might cause severe data corruption
that in turn might make the system unusable.
Example 144.
The SQL forwarding agent identifies the asciiSEQ_TI UDR as erroneous and creates an
errorUDR UDR that wraps the original UDR together with the error message that was generated:
Figure 513. The Erroneous UDR Before and After SQL Forwarding
None.
14.8.3.3.2. Retrieves
The database transaction in the SQL forwarding agent is not consistent with the MediationZone®
batch transaction behavior; that is, normal batch transaction safety is not guaranteed for this agent.
If a workflow aborts, the database transaction may have been partly or completely performed; however,
the input file will be reprocessed, which can cause duplication of data if an INSERT statement
is used in the forwarding agent.
14.8.3.4. Introspection
The introspection is the type of data an agent expects and delivers.
The agent does not publish nor access any MIM parameters.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
• SQL: sql-statement
Example 145.
The debug message is sent when the SQL agent creates its SQL string to send to the database.
For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.
14.9.1.1. Prerequisites
The reader of this information should be familiar with:
• TCP/IP
The UDR types used by the TCP/IP forwarding agent can be viewed in the UDR Internal Format
Browser. To open the browser, open an APL Editor, right-click in the editing area, and select UDR
Assistance.... The browser will then open.
14.9.2.1.1.1. RemoteHostConfig
The RemoteHostConfig UDR contains the connection details for the remote host. This UDR is included
in all the other UDR types.
Field Description
host (string): This field contains the hostname or IP address of the remote host.
port (int): This field contains the port of the remote host.
14.9.2.1.1.2. ConnectionRequestUDR
When the agent receives this UDR, it will try to establish a new connection, or close a connection,
to a remote host. The agent will then return a ConnectionStateUDR containing information about the
current state of the connection.
Field Description
closeConnection (boolean): This field determines whether the request is for opening or closing
a connection. If a new connection is to be made, this field is set to false; if a
connection is to be closed, it is set to true.
remoteHost (RemoteHostConfig (TCPIP)): This is the RemoteHostConfig UDR containing the connection
details.
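In APL, a connection request based on the fields above can be sketched as follows; the host and port values are examples, and routing details depend on the workflow configuration:

```
consume {
    // Connection details for the remote host (example values).
    RemoteHostConfig rhc = udrCreate(RemoteHostConfig);
    rhc.host = "192.0.2.10";
    rhc.port = 7001;

    // false = open a new connection; true would close it.
    ConnectionRequestUDR req = udrCreate(ConnectionRequestUDR);
    req.closeConnection = false;
    req.remoteHost = rhc;
    udrRoute(req);
}
```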
14.9.2.1.1.3. ConnectionStateUDR
The agent returns the ConnectionStateUDR when a ConnectionRequestUDR has been sent, as well as
in the event a connection goes down for some reason.
Field Description
connectionOpen (boolean): In case you have a valid connection (see the validAddress
field below), this field indicates whether the connection is open
or not, true for open and false for closed.
remoteHost (RemoteHostConfig (TCPIP)): This is the RemoteHostConfig UDR containing the connection
details.
validAddress (boolean): This field indicates if the connection details in the RemoteHostConfig
UDR were valid or not, true for valid and false for
invalid.
14.9.2.1.1.4. RequestUDR
When the agent receives a RequestUDR it will try to send the included bytearray to the remote host.
Field Description
data (bytearray): This field contains the actual data to be sent.
remoteHost (RemoteHostConfig (TCPIP)): This is the RemoteHostConfig UDR containing the
connection details.
14.9.2.1.1.5. ResponseUDR
If the TCP/IP forwarding agent has been configured to handle responses, it will return ResponseUDRs
to the workflow.
Field Description
data (bytearray): This field contains the response.
remoteHost (RemoteHostConfig (TCPIP)): This is the RemoteHostConfig UDR containing the
connection details.
14.9.2.1.1.6. ErrorUDR
Field Description
data (bytearray): This is the original data from the RequestUDR. This can be used
for storing the data.
ErrorReason (string): This field contains the reason for the failure.
ErrorStackTrace (string): This field contains the stack trace from the failure. Note! This
field should only be read if absolutely necessary, since it requires
a large amount of CPU.
remoteHost (RemoteHostConfig (TCPIP)): This is the RemoteHostConfig UDR containing the connection
details.
14.9.2.2. Configuration
The TCP/IP forwarding agent configuration window is displayed when you double-click the agent
in a workflow, or right-click the agent and select Configuration....
Note! The visual string containing the Host and Port will act as an identifier for the connection.
In the Advanced tab you can configure additional properties for optimizing the performance of the
TCP/IP forwarding agent.
Figure 517. TCP/IP forwarding agent configuration window, Advanced properties tab
See the text in the tab for further information about the properties.
14.9.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
The agent does not publish nor access any MIM parameters.
Figure 518. A TCP/IP workflow may be configured to send responses to the source.
Upon activation, the collector binds to the defined port and awaits connections to be accepted. Note
the absence of a Decoder in the workflows. The collector has built-in decoding functionality, supporting
any format as defined in the Ultra Format Editor.
The UDR type created by default in the TCPIP agent can be viewed in the UDR Internal Format
Browser. To open the browser, open an APL Editor, right-click in the editing area, and select UDR
Assistance...; the browser opens.
Field Description
RemoteIP (ipaddress): The IP address of the client.
RemotePort (int): The port through which the agent connects to the client.
response (bytearray): The data that the agent sends back to the client.
SequenceNumber (long): A per-connection unique number that is generated by the TCPIP agent.
This number enables you to follow the order in which the UDRs are collected.
The agent counter is reset whenever a connection with the agent is established.
• The UDR fields RemoteIP, RemotePort, and SequenceNumber are accessible from
the workflow configuration only if the TCPIP agent is configured with a decoder that extends
the built-in TCPIP format. For further information see Decoder in Section 14.9.3.2.2, “Decoder
Tab”.
• The TCPIP UDR cannot be cloned and the socket connection will not be initialized if a
cloning is attempted. It is therefore recommended that you initialize every UDR from the
decoder, and then route it into the workflow.
14.9.3.2. Configuration
The TCP/IP Collection agent configuration window is displayed when you double-click the agent in
a workflow, or right-click it and select Configuration....
Host: The IP address or hostname to which the TCP collector will bind. If the host is
bound, the port must also be bound. If left empty, the TCP collector binds to all IP
addresses available on the system.
Port: The port number from which the data is received. Make sure the port is not used
by other applications.
The port can also be dynamically updated while the agent is running. Double-click
the agent in the Workflow Editor in monitor mode and modify the port. To
trigger the agent to use the new port, the workflow must be saved. For further
information about updating agent configurations while the workflow is
running, see Section 2.2.2, “Dynamic Update”.
Allow Multiple Connections: If enabled, several TCP/IP connections are allowed simultaneously. If disabled,
only one at a time is allowed.
Number of Connections Allowed: If Allow Multiple Connections is enabled, the maximum number of simultaneous
connections is specified as a number between 2 and 65000.
Send Response: If enabled, the collector will be able to send a response back to the source. If Allow
Multiple Connections is enabled, the collector expects a UDR extended with the
default TCPIPUDR as reply. If disabled, it expects a bytearray.
Drag and release in the opposite direction in the workflow to create a response
route between the agents. The TCP/IP agent must be connected to an agent
utilizing APL, since responses are created with APL commands.
Figure 521.
Decoder: List holding available decoders introduced via the Ultra Format Editor. The decoders
are named according to the following syntax:
<decoder> (<module>)
The option MZ Format Tagged UDRs indicates that the expected UDRs are stored in
one of the built-in MediationZone® formats. If the compressed format is used, the
decoder will detect this automatically. Selecting this option enables the Tagged UDR
Type list for configuration.
Tagged UDR Type: List of available internal UDR formats stored in the Ultra and Code servers. The formats
are named according to the following syntax:
<internal> (<module>)
If disabled (default), the amount of work needed for decoding is minimized, using a
"lazy" method for decoding sub-fields. This means that the actual decoding work may not be
done until later in the workflow, when the field values are accessed for the first time.
Corrupt data (that is, data for which decoding fails) may not be detected during the
decoding stage and could cause the UDR to be discarded at a later processing stage.
internal TCP_Int :
extends_class( "com.digitalroute.wfc.tcpipcoll.TCPIPUDR" ) {
};
in_map TCP_InMap :
external( my_ext ),
internal( TCP_Int ),
target_internal( my_TCP_TI ) {
automatic;
};
14.9.3.4. Introspection
The introspection is the type of data an agent expects and delivers.
The agent produces UDRs in accordance with the Decoder tab. If Send Response is enabled, the agent
consumes bytearray types for single connections and TCPIPUDR for multiple connections.
The agent does not publish nor access any MIM parameters.
14.9.4. An Example
A workflow containing a TCP/IP Collection agent can be set up to send responses back to the source
from which the incoming data was received. This requires an APL agent (Analysis or Aggregation)
to be part of the workflow.
Figure 523. A TCP/IP workflow can be configured to send responses to the source.
To illustrate how such a workflow is defined, an example is given where an incoming UDR is validated,
resulting in either the field anum or a sequence number being sent back as a reply message to the
source. Depending on whether one or several TCP/IP connections are allowed, the format of the reply
message sent from the Analysis agent differs:
To keep the example as simple as possible, the valid records are not processed. Usually, no reply
is sent back until the UDRs are fully validated and processed. The example aims to focus on
the response handling only.
In order to be able to send reply messages, Send Response must be enabled in the configuration window
of the agent. Drop an Analysis agent in the workflow and connect it to the TCP/IP agent. Drag and
release in the opposite direction to create a response route in the workflow.
Also, an Ultra format for decoding of incoming data must be defined. Note that no format has to be
defined for the response; it will be sent as a bytearray from the Analysis agent.
The Analysis agent handles both the validation of the incoming records and the sending of the response.
If the field duration is less than or equal to zero, the UDR is discarded and the field anum, in the
form of a bytearray, is sent back as response. All other UDRs are routed to the next agent in turn, and
instead a sequence number is sent as response.
Note the use of the synchronized keyword. Updating a global variable within a real-time workflow
must be done within a synchronized function, to ensure consistency between threads. By
default, a real-time workflow utilizes several threads.
int seqNum;
// createSeqNum is assumed to be a synchronized function (see note above).
synchronized int createSeqNum() {
    seqNum = seqNum + 1;
    return seqNum;
}
consume {
    bytearray reply;
    if ( input.duration <= 0 ) {
        strToBA( reply, input.anum );
        udrRoute( reply, "response" );
    } else {
        strToBA( reply, (string)createSeqNum() );
        udrRoute( reply, "response" );
        udrRoute( input, "UDRs" );
    }
}
In order to be able to send reply messages, Send Response must be enabled in the configuration window
of the agent. An additional connection point will appear on the agent, to which an Analysis agent is
to be linked. Also, an Ultra format for the decoding of the incoming data must be defined. This format
must contain the built-in TCPIP format. See the section below.
The incoming external format must be extended with the TCPIPUDR format.
internal TCP_Int :
extends_class( "com.digitalroute.wfc.tcpipcoll.TCPIPUDR" ) {
};
in_map TCP_InMap :
external( asciiSEQ_ext ),
internal( TCP_Int ),
target_internal( ascii_TCP_TI ) {
automatic;
};
The Analysis agent handles both the validation of the incoming records and the sending of the response.
If the field duration is less than or equal to zero, the UDR is discarded, the field anum is inserted
into the response field, and the complete UDR is sent back as response. All other UDRs are routed to
the next agent in turn, with a sequence number inserted into the response field before any routing.
Note the use of the synchronized keyword. Updating a global variable within a real-time workflow
must be done within a synchronized function, to ensure consistency between threads. By
default, a real-time workflow utilizes several threads.
int seqNum;
// createSeqNum is assumed to be a synchronized function (see note above).
synchronized int createSeqNum() {
    seqNum = seqNum + 1;
    return seqNum;
}
consume {
    bytearray reply;
    if ( input.duration <= 0 ) {
        strToBA( reply, input.anum );
        input.response = reply;
        udrRoute( input, "response" );
    } else {
        strToBA( reply, (string)createSeqNum() );
        input.response = reply;
        udrRoute( input, "response" );
        udrRoute( input, "UDRs" );
    }
}
15.1.1. Prerequisites
The reader of this document should be familiar with:
• Analysis Programming Language. For further information, see the APL Reference Guide.
15.1.2. Overview
Collection Strategies are used to set up rules for handling collection of files from the Disk, FTP, SFTP,
and SCP Collection agents.
An APL Collection Strategy is created on top of one of the pre-defined Collection Strategies, to
customize the way files are collected using the APL language.
The menu items that are specific for APL Collection Strategy Editor are described in the following
sections:
Item Description
Import...: Select this option to import code from an external file. Note that the file has to reside on
the host where the client is running.
Export...: Select this option to export your code to an *.apl file that can be edited in other code
editors, or be used by other MediationZone® systems.
Item Description
Validate: Compiles the current APL Collection Strategy code, checking for grammatical and
syntactical errors. The status of the compilation is displayed in a dialog. Upon failure,
the erroneous line is highlighted and a message, including the line number, is displayed.
Undo: Select this option to undo your last action.
Redo: Select this option to redo the last action you "undid" with the Undo option.
Find...: Displays a dialog where chosen text may be searched for and, optionally, replaced.
Find Again: Repeats the search for the last string entered in the Find dialog.
The additional buttons that are specific for APL Collection Strategy Editor are described in the following
sections:
Button Description
Validate: Compiles the current APL Collection Strategy code, checking for grammatical and
syntactical errors. The status of the compilation is displayed in a dialog. Upon failure,
the erroneous line is highlighted and a message, including the line number, is displayed.
Undo: Select this option to undo your last action.
Redo: Select this option to redo the last action you "undid" with the Undo option.
Find...: Displays a dialog where chosen text may be searched for and, optionally, replaced.
Find Again: Repeats the search for the last string entered in the Find dialog.
Zoom Out: Zooms out the code area by modifying the font size. The default value is 12 pt. Clicking
the button between the Zoom Out and Zoom In buttons resets the zoom level to
the default value. Changing the view scale does not affect the configuration.
Zoom In: Zooms in the code area by modifying the font size. The default value is 12 pt. Clicking
the button between the Zoom Out and Zoom In buttons resets the zoom level to
the default value. Changing the view scale does not affect the configuration.
15.1.4. Configuration
You create your APL Collection Strategy in the APL Collection Strategy Editor.
Base Collection Strategy: From the drop-down list, select a pre-defined collection strategy. The Default
Collection Strategy is the standard collection strategy that is used by default by the Disk
and FTP agents.
The Base Collection Strategy is the collection strategy that your APL Extension
will be based on.
When saving your new collection strategy, make sure to use a descriptive name,
since it will be added to the list of available strategies in the agent's
Configuration.
APL Extension The code that you see in the APL Extension coding pad is a default 'skeleton' set of
procedures that are already defined in the Base Collection Strategy. By adding APL
code within these procedures, you customize how the workflow handles the collection.
For further information about the different APL procedures, refer to the APL Reference Guide.
1. During run-time, when each of the procedures is invoked, the workflow
first runs the procedure's Base part and then executes your APL Extension
code.
• udrRoute
• mimSet
• mimPublish
• cancelBatch
• hintEndBatch
3. In the following APL functions you cannot assign a value to a persistent
variable. For information about persistent values, see the MediationZone®
APL Reference Guide.
• initialize
• deinitialize
• commit
• rollback
The FileInfo UDR type can be viewed in the UDR Internal Format Browser. To open the browser,
right-click in the editing area of an APL Editor and select UDR Assistance.... The browser opens.
15.1.5.1. Format
The following fields are included in the FileInfo UDR:
Field Description
isDirectory(boolean) Set to True if FileInfo represents a directory.
isFile(boolean) Set to True if FileInfo represents a file.
name(string) The name of the file or directory.
size(long) The size of the file or directory.
timestamp(long) The timestamp for when the file or directory was last modified.
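The FileInfo fields correspond to ordinary file-system metadata. As an illustration only, the sketch below derives the same information with Python; in the product the FileInfo UDR is created internally by the collection strategy, not by user code, and the timestamp unit (milliseconds here) is an assumption.

```python
import os

def file_info(path):
    """Build a dict mirroring the documented FileInfo UDR fields.

    Illustrative sketch only: MediationZone populates FileInfo itself.
    """
    st = os.stat(path)
    return {
        "isDirectory": os.path.isdir(path),    # boolean
        "isFile": os.path.isfile(path),        # boolean
        "name": os.path.basename(path),        # string
        "size": st.st_size,                    # long
        "timestamp": int(st.st_mtime * 1000),  # long; millisecond unit assumed
    }
```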
• initialize
• deinitialize
• prepareBaseDirList
• accept
• filterFiles
• preFileCollection
• postFileCollection
• begin
• commit
• rollback
For further information about APL functions, see the APL Reference Guide.
15.2.1. Overview
The collection strategy makes it possible to collect files for which a corresponding control file exists.
If the control file does not exist, the file is ignored.
The Control File Collection Strategy controls which further configuration options are available
in the Source tab. If no strategy is selected, the default strategy is used.
The Collection Strategy drop-down list will only be visible if there are other collection strategies
available in the system, apart from the default collection strategy.
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
Example 146.
Compression Select the compression type of the source files. This selection determines whether the agent
will decompress the files before passing them on in the workflow.
Position The control filename consists of an extension added either before or after the shared
filename part. Select one of the choices: Prefix or Suffix.
Prefix means that the text entered in the Control File Extension field will be
searched for before the shared filename part, and Suffix means that the text entered
in the Control File Extension field will be searched for after the shared filename
part.
Control File Extension The Control File Extension is used to define when the data file should be collected.
A data file with filename FILE will only be collected if the corresponding control
file exists. A possible control filename can be FILE.ok.
The text entered in this field is the expected extension of the shared filename. The
Control File Extension will be attached to the beginning or the end of the shared
filename, depending on the selection made in the Position list, above.
Data File Extension The Data File Extension will only be applicable if Position is set to Suffix.
There can be cases where a stricter definition of which files should be collected
is needed. This is defined in the Data File Extension field.
Consider a data file called FILE.dat. If .dat is entered in the Data File Extension
field, the corresponding control file will be called FILE.ok if .ok is entered
in the Control File Extension field.
• FILE1.dat
• FILE2.dat
• FILE1.ok
• ok.FILE1
• FILE1
1. The Position field is set to Prefix and the Control File Extension field
is set to .ok.
The control file is ok.FILE1 and FILE1 will be the file collected.
2. The Position field is set to Suffix and the Control File Extension field
is set to .ok.
The control file is FILE1.ok and FILE1 will be the file collected.
3. The Position field is set to Suffix and the Control File Extension field
is set to .ok and the Data File Extension field is set to .dat.
The control file is FILE1.ok and FILE1.dat will be the file collected.
After collection, the control file is handled in the same way as the collected
file is configured to be handled, that is, the system will delete/rename/move/ignore it.
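The three numbered scenarios above can be sketched as a small matching routine. This is an illustration only, not product code; in particular, the way the dot is placed for Prefix (extension .ok yielding control file ok.FILE1) is inferred from the example above.

```python
def control_filename(shared, extension, position):
    """Return the control filename expected for a shared filename part."""
    if position == "Suffix":
        return shared + extension                   # FILE1 + .ok -> FILE1.ok
    if position == "Prefix":
        # Inferred from the example: '.ok' with Prefix gives 'ok.FILE1'.
        return extension.lstrip(".") + "." + shared
    raise ValueError("position must be 'Prefix' or 'Suffix'")

def collectible(available, extension, position, data_extension=""):
    """List the data files whose control file is also present.

    With a Data File Extension (Suffix only), the shared part is the
    data filename with that extension stripped.
    """
    names = set(available)
    collected = []
    for name in available:
        if data_extension:
            if not name.endswith(data_extension):
                continue                             # not a data file
            shared = name[:-len(data_extension)]
        else:
            shared = name
        if control_filename(shared, extension, position) in names:
            collected.append(name)
    return collected
```

Running this against the file list in the example (FILE1.dat, FILE2.dat, FILE1.ok, ok.FILE1, FILE1) reproduces the three documented outcomes.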
Move to Temporary Directory If this option is selected, the source files will be moved to the automatically created
subdirectory DR_TMP_DIR in the source directory, before collection. This option
supports safe collection when source files repeatedly use the same name.
Inactive Source Warning (h) If this option is selected, a warning message (event) will appear in the System Log
and Event Area when the configured number of hours has passed without any file
being available for collection:
The source has been idle for more than <n> hours,
the last inserted file is <file>.
Move to If this option is selected, the source files will be moved from the source directory
(or from the directory DR_TMP_DIR if using Move to Temporary Directory), to
the directory specified in the Destination field, after collection.
The Destination must be located in the same file system as the collected files
at the remote host. Additionally, absolute path names must be defined (relative
path names cannot be used).
Rename If this option is selected, the source files will be renamed after the collection, and
will remain in the source directory from which they were collected (or be moved back
from the directory DR_TMP_DIR, if using Move Before Collecting).
Remove If this option is selected, the source files will be removed from the source directory
(or from the directory DR_TMP_DIR, if using Move Before Collecting), after the
collection.
Ignore If this option is selected, the source files will remain in the source directory after
the collection. This field is not available if Move Before Collecting is enabled.
Destination If the Move to option has been selected, enter the full path name of the directory
on the remote host into which the source files will be moved after the collection in
this field. If any of the other After Collection options have been selected, this option
will not be available.
Prefix and Suffix If any of the Move to or Rename options have been selected, enter the prefix and/or
suffix that will be appended to the beginning and/or end of the name of the source
files, respectively, after the collection, in these fields. If any of the other After
Collection options have been selected, this option will not be available.
If Rename is enabled, the source files will be renamed in the current (source
or DR_TMP_DIR) directory. Be sure not to assign a Prefix or Suffix that gives
files new names that still match the Filename regular expression; that would
cause the files to be collected over and over again.
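The re-collection pitfall above is easy to check for in advance. The helper below is a hypothetical illustration (Python, not product code): it tells you whether a renamed file would still match the collection regex. Note that the product uses Java regex syntax; Python's re is close enough for these simple patterns.

```python
import re

def renamed_still_matches(filename, filename_regex, prefix="", suffix=""):
    """True if the renamed file would still match the collection regex,
    and would therefore be collected over and over again.

    Illustrative check only, not part of the product.
    """
    renamed = prefix + filename + suffix
    return re.fullmatch(filename_regex, renamed) is not None
```

For example, with a Filename regex of `.*\.log`, appending the suffix `.done` is safe, while `.log` is not.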
Keep (days) If any of the Move to or Rename options have been selected, enter the number of
days to keep moved or renamed source files on the remote host after the collection
in this field. In order to delete the source files, the workflow has to be executed
(scheduled or manually) again, after the configured number of days. If any of the
other After Collection options have been selected, this option will not be available.
A date tag is added to the filename, determining when the file may be removed.
15.3.1. Overview
The Duplicate Filter Collection Strategy enables you to configure a collection agent to collect files
from a directory without collecting the same files again.
15.3.2. Configuration
You configure the Duplicate Filter Collection Strategy from the Source tab in the agent configuration
view.
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
Example 147.
Compression Compression type of the source files. Determines if the agent will decompress
the files before passing them on in the workflow.
Duplicate Criteria - Filename Select this option to have only the filename compared for the duplicate check.
If the filename is in the list of files which have already been collected once,
the file is ignored by the agent.
Duplicate Criteria - Filename and Timestamp Select this option to have both the filename and the time stamp of the last
modification compared when checking for duplicates. If the file has already
been collected once, it is collected again only if the duplicate check reveals that
the file has been updated since the previous collection.
Files that have the same name and are older than the last collected file
by the same name are ignored. Only files whose time stamp is more recent
are collected.
File List Size Enter a value to specify the maximum size of the list of already collected files.
This list of files is compared to the input files in order to detect duplicates and
prevent them from being collected by the agent.
When this collection strategy is used with a multiple server connection strategy,
each host has its own duplicate list. If a server is removed from the multiple
server configuration, the collection strategy will automatically drop the list of
duplicates for that host in the next successful collection.
If the number of files to be collected is greater than the file list size, files
older than the oldest file in the list are not collected.
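The two duplicate criteria and the bounded file list can be modelled as follows. This is a hypothetical sketch (Python, not product code); class and method names are invented for illustration.

```python
from collections import OrderedDict

class DuplicateFilter:
    """Sketch of the Duplicate Filter Collection Strategy rules.

    criteria is 'filename' or 'filename_timestamp'; max_size models the
    File List Size setting (oldest entries are dropped first).
    """
    def __init__(self, criteria="filename", max_size=1000):
        self.criteria = criteria
        self.max_size = max_size
        self.seen = OrderedDict()  # filename -> timestamp at last collection

    def should_collect(self, name, timestamp):
        if name in self.seen:
            if self.criteria == "filename":
                return False                   # same name: always a duplicate
            if timestamp <= self.seen[name]:
                return False                   # not updated since last collection
        return True

    def mark_collected(self, name, timestamp):
        self.seen[name] = timestamp
        self.seen.move_to_end(name)
        while len(self.seen) > self.max_size:  # bound the list of collected files
            self.seen.popitem(last=False)
```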
15.4.1. Overview
The Multi Directory Collection Strategy enables you to configure a collection agent to collect data
from a series of directories that are listed in a control file. The collection agent reads the control file
and collects from the specified directories.
15.4.2. Configuration
You configure the Multi Directory Collection Strategy from the first tab in the agent configuration
view.
If the control file is missing, empty, or not readable, the workflow aborts.
controlfile.txt:
directory1
directory1/subdir1
directory1/subdir2
directory2
/home/user/directory3
...
controlfile_vms.txt:
DISK$USERS:[USERS.USER1.TESTDIR1]
DISK$USERS:[USERS.USER1.TESTDIR2]
DISK$USERS:[USERS.USER1.TESTDIR2.SUBDIR1]
DISK$USERS:[USERS.USER1.TESTDIR3]
DISK$USERS:[USERS.USER1.TESTDIR4]
...
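Reading such a control file amounts to collecting one source directory per non-empty line, with the documented abort behavior for a missing, empty, or unreadable file. The function below is an illustrative sketch (Python, not product code); an exception stands in for the workflow abort.

```python
def read_control_file(path):
    """Read a Multi Directory control file: one source directory per line.

    Mirrors the documented behavior: a missing, empty, or unreadable
    control file makes the workflow abort (modelled here as an exception).
    """
    try:
        with open(path) as f:
            dirs = [line.strip() for line in f if line.strip()]
    except OSError as e:
        raise RuntimeError("workflow aborts: cannot read control file: %s" % e)
    if not dirs:
        raise RuntimeError("workflow aborts: control file is empty")
    return dirs
```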
Filename The regular expression of the names of the source files on the local file system.
Regular expressions according to Java syntax apply. For further information, see:
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
Example 150.
If you leave Filename empty, or if you specify .*, the agent collects all the
files.
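The Filename rule above ("empty or .* collects everything, otherwise the name must match") can be sketched as a small predicate. The product uses Java regex syntax (java.util.regex.Pattern, linked above); Python's re behaves the same for these simple patterns, so this is an illustration only.

```python
import re

def matches_filename(name, pattern):
    """Apply the Filename setting: an empty pattern or '.*' collects every
    file; otherwise the whole filename must match the regular expression.
    """
    if pattern in ("", ".*"):
        return True
    return re.fullmatch(pattern, name) is not None
```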
Abort on Missing Directory Check to abort the workflow if a directory that is specified in the control file list
is missing on the server. Otherwise, the workflow continues to execute (default).
Enable Duplicate Filter Check to prevent collection of the same file more than once.
Files are considered to be duplicates if the absolute filename is the same.
The workflow holds an internal data structure with information about which files
the collector has collected in previous executions. The data structure is purged by
the collection strategy based on the contents of the collection directories. If files
collected in the past are no longer found in the collection directory they are removed
from the data structure.
The internal data structure is stored in the workflow state. Since the workflow
state is only updated when files are collected, the purged internal data structure
will be stored the next time a successful file collection is performed.
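The purge described above keeps only entries for files that still exist in the collection directories. A minimal sketch of that rule (Python, illustrative only; the real data structure lives in the workflow state):

```python
def purge_seen(seen, current_files):
    """Drop entries for files no longer present in the collection
    directories, mirroring the documented purge behavior.

    Returns a new dict; 'seen' maps absolute filename -> timestamp.
    """
    current = set(current_files)
    return {name: ts for name, ts in seen.items() if name in current}
```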
16.1.1.1. Prerequisites
The reader of this information should be familiar with:
The ECS Forwarding agent is applicable for UDRs only. Batches are handled through the cancelBatch
functionality.
Note! From the ECS Forwarding agent, it is possible to pass on MIM values to be associated
with the UDRs in the ECS Inspector.
16.1.2.1. Configuration
The ECS Forwarding agent configuration window is displayed when right-clicking on an ECS Forwarding
agent and selecting Configuration... or when double-clicking on the agent.
Logged MIMs MIM values to be associated with a UDR when sent to ECS.
16.1.2.3. Introspection
The introspection is the type of data an agent expects and delivers.
16.1.2.4.1. Publishes
16.1.2.4.2. Accesses
If batches are collected, the ECS Collection agent produces bytearray data. Which UDRs/batches
to collect is determined by selecting a reprocessing group that has been defined in the ECS Inspector.
It is only possible to have one active ECS collection workflow per reprocessing group at a time.
Note! Collecting UDRs from the ECS does not mean that they are physically removed from
ECS, only that their state is changed. Automatic UDR removal can be managed by the predefined
task ECS_Maintenance. For further information, see Section 16.1.4, “ECS_Maintenance System
Task”. Manual deletion directly from the ECS Inspector is also possible.
16.1.3.1. Configuration
The ECS Collection agent configuration window is displayed either when right-clicking on an ECS
Collection agent in a workflow and selecting Configuration... or when double-clicking on the agent. You
can select to either collect data from a Reprocessing Group that has been defined in the ECS Inspector
or from a filter that has been saved in the ECS Inspector. The available settings in the configuration
dialog depend on which option you choose.
Note! The default directory, used for storage of UDRs and batches routed to the ECS, is
$MZ_HOME/ecs.
Reprocessing Group Select this option if you want to collect data from a Reprocessing Group. In order
for data to be collectible, it must belong to a predefined reprocessing group. The
groups in the Reprocessing Group list are suffixed with their type - batch or
UDR.
SQL Bulk Size To improve performance, data records are retrieved in bulk. The SQL Bulk Size
value specifies how many records will be included in each bulk. The valid range
is 1 to 1000, with a default value of 20.
Routed Types A group of UDRs can consist of several format types. Selecting Add... will display
the UDR Internal Format Browser. Select the type/types to collect. Any UDRs
in the reprocessing group not matching the selected types will be ignored.
16.1.3.2.1. Emits
The agent emits commands that change the state of the file currently processed.
Command Description
Begin Batch • UDRs - Emitted prior to the routing of the first UDR in the batch created by the
UDRs matching the collection definitions.
End Batch • UDRs - Emitted when all UDRs have been collected or when a Hint End Batch re-
quest is received. The UDRs are then marked as Reprocessed in ECS.
• Batches - Emitted after each batch has been processed. The batch is then marked as
Reprocessed.
16.1.3.2.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.
Command Description
Cancel Batch No Cancel Batches are retrieved.
Note! If any agent in the workflow emits a cancelBatch, the workflow will
abort immediately (regardless of the workflow configuration).
16.1.3.3. Introspection
The introspection is the type of data an agent expects and delivers.
The agent produces data depending on the data type of the reprocessing group.
UDRs The selected types under Routed Types. If no types are selected, the generic drudr type will
be produced.
Batch Produces bytearray types.
16.1.3.4.3. Accesses
For further information about the agent message event type, see Section 5.5.14, “Agent Event”.
This event is logged for batch collection only, and is reported at end batch stating the name of the
file currently processed.
You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.
• Start collecting
This event is logged for UDR collection only, and is reported when the collection from ECS starts.
• Commit started
This event is logged for UDR collection only, and is reported at end batch when starting to commit
changes to the database.
This event is logged for UDR collection only, and is reported at end batch upon a successful commit.
The number of days to keep data is set in the ECS_Maintenance configuration dialog. It is also possible
to fully turn off the cleanup of UDRs, Batches, Statistics or all of them. The Statistics can be reported
by email, if so configured in the Report tab of the task.
When the ECS_Maintenance System Task is executed, a number of things will happen:
• UDRs, batches and ECS statistics will be removed from the ECS according to the configurations
in the Cleanup tab. See Section 16.1.4.1.1, “Cleanup Tab” for further information.
If the number of days after which data should be removed has been configured to 0 (zero) days, data
will be removed every time the ECS_Maintenance System Task is executed, with a minimum time
interval of one hour.
• An ECS Statistics Event will be generated containing information about the number of UDRs asso-
ciated with every error code.
This will happen every time the ECS_Maintenance System Task is executed, according to the exact
time interval with which the ECS_Maintenance task is configured.
See Section 5.5.21, “ECS Statistics Event” for further information about how to configure notifications
for the ECS Statistics Event.
• Statistical information will be sent to the ECS Statistics, according to your configurations in the
Report tab in the configuration dialog for the ECS_Maintenance system task. See Section 16.1.4.1.2,
“Report Tab” for further information.
The statistical information will be sent every time the ECS_Maintenance system task is executed,
with a minimum time interval of one hour.
• An email containing statistical information will be sent to the mail recipient stated in the Report
tab in the configuration dialog for the ECS_Maintenance system task. See Section 16.1.4.1.2, “Report
Tab” for further information.
The email will be sent every time the ECS_Maintenance system task is executed, with a minimum
time interval of one hour.
Note! The ECS is designed to store a fairly limited amount of erroneous UDRs and batches. It
is therefore important that the data is extracted, reprocessed or deleted from ECS on a regular
basis.
16.1.4.1. Configuration
To open the ECS_Maintenance system task configuration:
1. Click the Show Configuration Browser button in the upper left part of MediationZone® Desktop
to show the Configuration Browser pane.
A workflow containing the ECS_Maintenance agent is opened. Double-click on the agent to open the
configuration.
UDRs If this check box is selected, UDRs will be deleted from ECS when they are older than
the number of days stated (maximum 999 days). If disabled, the UDRs will remain until
manually cleaned out via the ECS Inspector. If 0 (zero) is entered, all UDRs with state
Reprocessed will be removed whenever the cleanup task is performed, with a minimum
time interval of one hour.
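The cleanup rule above (a configured age in days, 0 meaning "remove everything eligible on each run", with a one-hour minimum interval between cleanups) can be sketched as two small predicates. Illustrative only; function names are invented and the real task also checks record state (for example Reprocessed).

```python
def should_remove(keep_days, age_days):
    """Age rule for a single record: with a setting of 0 every eligible
    record is removed on each run; otherwise only records older than the
    configured number of days (maximum 999) are removed.
    """
    if keep_days == 0:
        return True
    return age_days > keep_days

def cleanup_due(last_cleanup_epoch, now_epoch, min_interval_s=3600):
    """The task only performs a cleanup if at least the minimum interval
    (one hour) has passed since the previous cleanup."""
    return now_epoch - last_cleanup_epoch >= min_interval_s
```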
Typically, a UDR may be sent to ECS as a result of a failing table lookup evaluation. To make sure
that the error was not temporary and that the tables simply needed to be updated first, these UDRs are
recycled. A new workflow is created, collecting them from ECS and reevaluating the same table
lookup. If the problem still exists, the UDR is sent back to ECS.
UDRs may be sent to ECS without any Error Code or MIM values associated with them. However, this
will make browsing in the ECS Inspector more difficult, and no auto-assignment to reprocessing groups
using the Error Code is possible.
Error Codes can be associated with reprocessing groups via the ECS Inspector window (accessed
from the Edit menu, selecting Reprocessing Groups...). Then all UDRs with an Error Code will be
automatically assigned to the respective reprocessing group. Otherwise, the UDRs will have to be
assigned manually in order to be available for collection.
Figure 536. Add ECS Error Code window - where a reprocessing group can be selected.
The Analysis agent is used for validation and routing of the UDRs, and association to a valid (existing)
Error Code. The following example appends an Error Code and an Error Case to the UDR prior to
sending it on to the ECS Forwarding agent.
Example 151.
In the ECS Forwarding agent, the MIM values you want to associate with the UDR are appended. This
is optional; however, it makes it easier to search for data and get additional information about the UDR
from the ECS Inspector.
The prerequisites for being able to collect ECS data are that the UDRs or batches must each belong to
an existing reprocessing group, and have the reprocessing state set to New.
Since we want to redo the processing made in the forwarding workflow, we keep the configurations
of the ECS Inspector and ECS Forwarding agents the same as in the previous workflow.
The Error tab in the Workflow Properties must not be configured to handle cancelBatch behavior,
since it will never be valid for ECS collection workflows. No calls to cancelBatch are allowed
from any agent since it will cause the workflow to immediately abort.
All UDRs conforming to the collection criteria will be selected and processed as a batch.
The Analysis agent only needs to validate and route the UDRs. The Error Code and Error Case are
already associated with the UDR.
Example 152.
Suppose there is a workflow collecting and validating UDRs from ECS. If the validation fails,
the UDRs will be sent back to ECS with an associated Error Code. UDRs assigned to a new or
a different Error Code will be directed to a new reprocessing group. If you want to associate these
UDRs with a different reprocessing group, udrClearErrors must be called prior to
udrAddError.
The exception is if the new Error Code is associated with the same reprocessing group.
• Using udrClearErrors will result in a new Error Code and reprocessing group being
associated with the UDR in ECS. It will also avoid several Error Codes pointing at different
reprocessing groups which makes automatic group assignment impossible.
• Leaving out udrClearErrors will result in old as well as new Error Codes (including the
reprocessing group) being associated with the UDR in ECS.
• Using udrClearErrors will result in a new Error Code and reprocessing group being
associated with the UDR in ECS.
• Leaving out udrClearErrors will not result in any association to a reprocessing group,
however both Error Codes are associated with the UDR in ECS.
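The difference between the two bullet cases can be modelled with a plain list standing in for the UDR's error list. This is an illustrative sketch only: udrClearErrors and udrAddError are the APL functions named in the text, but the data model below is an assumption, not the product's implementation.

```python
def add_error(errors, code, clear_first):
    """Model of the documented behavior.

    clear_first=True  -> udrClearErrors then udrAddError: only the new
                         Error Code (and its reprocessing group) remains.
    clear_first=False -> udrAddError alone: old and new Error Codes are
                         both associated with the UDR in ECS.
    """
    if clear_first:
        errors = []          # udrClearErrors
    return errors + [code]   # udrAddError
```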
Note! All UDRs collected at one activation of the workflow will be processed as one batch.
Figure 539. The Analysis agent can call the cancelBatch function.
MIM values to be associated with the batch are mapped in the Workflow Properties window. Also,
the number of allowed cancelled batches is set here. Note that when Abort Immediately is enabled, no
batch will be sent to ECS if the workflow aborts.
The error UDR is handled from the Analysis agent. For further information, see Section 16.1.7.2.3,
“Analysis Agent”. APL code always overrides any Desktop settings. Hence, the set Error Code will
have no effect on this.
Automatic assignment to reprocessing groups is done in exactly the same way as for UDRs (see
Section 16.1.7.1.1, “ECS Inspector”), via the ECS Inspector window (accessed from the Edit menu, selecting
Reprocessing Groups...). Make sure to select the appropriate Error UDR Type. Then the UDR fields
will be included as MIMs in the collection workflow.
Figure 541. ECS Error Codes - where a reprocessing group can be selected.
The Error UDR may be mapped from the Workflow Properties window as well; however, in this case
APL code must be used, since values other than MIM values are to be inserted in the error UDR
fields. Also, an Error Case will be assigned, and this is not possible from the Workflow Properties
window. For further information, see Section 6.7.7, “Error Codes”, and Section 6.7.8, “Reprocessing
Groups”.
Example 154.
Note! Sending error UDRs with the batch is optional. However, it is necessary if access to any
application-specific information is wanted when reprocessing the batch. Error UDR fields will
appear as MIM values in the reprocessing workflow. Also, the only possibility to associate an
Error Code with the batch is by appending an Error UDR.
The Error tab may be configured to handle cancelBatch behavior, however, it will never be valid
for ECS batch collection workflows. Any call to cancelBatch will cause the workflow to abort
immediately.
All batches conforming to the collection criteria will be selected. If a batch contains historic UDRs,
that is, UDRs belonging to old, unused format definitions, they will by default be converted automatically
to the latest format. If this behavior is not desired, the automatic conversion may be disabled
from the Ultra Format Converter. In this case the workflow will abort, logging an informative message
in the System Log.
Calls to cancelBatch shall not be made in APL because they will cause the workflow to abort
immediately and nothing will be sent to ECS.