
Desktop 7.1

Copyright © 2015 Digital Route AB



The contents of this document are subject to revision without further notice due to continued progress in methodology, design,
and manufacturing.

Digital Route AB shall have no liability for any errors or damage of any kind resulting from the use of this document.

DigitalRoute® and MediationZone® are registered trademarks of Digital Route AB. All other trade names and marks mentioned
herein are the property of their respective holders.


Table of Contents
1. Introduction ............................................................................................................. 12
1.1. Prerequisites .................................................................................................. 12
1.2. Execution Context .......................................................................................... 12
1.3. Commands .................................................................................................... 13
1.3.1. Agent Command .................................................................................. 13
2. Desktop Overview ..................................................................................................... 14
2.1. Security ........................................................................................................ 14
2.1.1. Login/Logout ...................................................................................... 14
2.1.2. Locks ................................................................................................ 16
2.1.3. Connection Failure ............................................................................... 16
2.1.4. Encryption .......................................................................................... 16
2.2. Administration and Management ....................................................................... 17
2.2.1. Desktop Background Color .................................................................... 17
2.2.2. Dynamic Update .................................................................................. 17
2.2.3. Folders ............................................................................................... 18
2.2.4. Configuration Naming .......................................................................... 18
2.2.5. Date and Time Format Codes ................................................................. 19
2.2.6. Properties used for Desktop ................................................................... 20
2.2.7. Text Editor .......................................................................................... 27
2.2.8. Configuration List Editor ....................................................................... 30
2.2.9. UDR Browser ...................................................................................... 31
2.2.10. Meta Information Model ...................................................................... 32
2.3. Desktop User Interface .................................................................................... 36
2.3.1. Tabs .................................................................................................. 37
2.3.2. Menus and Buttons ............................................................................... 37
2.3.3. Configuration Navigator ........................................................................ 39
2.3.4. Status Bar ........................................................................................... 45
3. Configuration ........................................................................................................... 46
3.1. Menus and Buttons ......................................................................................... 46
3.1.1. Configuration Menus ............................................................................ 46
3.1.2. Configuration Buttons ........................................................................... 48
3.2. Alarm Detection ............................................................................................. 49
3.2.1. Alarm Detection Menus ......................................................................... 49
3.2.2. Alarm Detection Buttons ....................................................................... 50
3.2.3. Defining an Alarm Detection ................................................................. 50
4. Working with workflows ............................................................................................ 67
4.1. Workflow ...................................................................................................... 67
4.1.1. Workflow Types ................................................................................... 67
4.1.2. Multithreading ..................................................................................... 73
4.1.3. Workflow Menus ................................................................................. 74
4.1.4. Workflow Buttons ................................................................................ 75
4.1.5. Agent Pane ......................................................................................... 76
4.1.6. Workflow Template .............................................................................. 78
4.1.7. Workflow Table ................................................................................... 90
4.1.8. Workflow Properties ............................................................................. 93
4.1.9. Validation ......................................................................................... 113
4.1.10. Version Management ......................................................................... 114
4.1.11. Workflow Monitor ............................................................................ 114
4.1.12. Deactivation Issues ........................................................................... 122
4.2. Workflow Group ........................................................................................... 123
4.2.1. Creating a Workflow Group Configuration .............................................. 123
4.2.2. Managing a Workflow Group ................................................................ 125
4.2.3. Workflow Group States ........................................................................ 135
4.2.4. Suspend Execution ............................................................................. 136
4.2.5. Suspend Execution Editor .................................................................... 136

4.2.6. Execution Suspension ......................................... 139
5. Event Notifications .................................................................................................. 141
5.1. Event Notification Menus ............................................................................... 142
5.1.1. The Edit Menu ................................................................................... 142
5.2. Event Notification Buttons .............................................................................. 142
5.3. Configuration ............................................................................................... 143
5.3.1. The Notifier Setup Tab ........................................................................ 143
5.3.2. The Event Setup Tab ........................................................................... 150
5.4. Event Fields ................................................................................................. 152
5.5. Event Types ................................................................................................. 153
5.5.1. Base Event ........................................................................................ 154
5.5.2. Alarm Event ...................................................................................... 154
5.5.3. Code Manager Event ........................................................................... 155
5.5.4. Couchbase Monitor Event .................................................................... 155
5.5.5. Diameter Dynamic Event ..................................................................... 161
5.5.6. Group State Event .............................................................................. 164
5.5.7. Suppressed Event ............................................................................... 168
5.5.8. Suspend Execution Event ..................................................................... 169
5.5.9. System Event ..................................................................................... 169
5.5.10. System External Reference Event ........................................................ 174
5.5.11. User Event ...................................................................................... 177
5.5.12. SharedTables Event ........................................................................... 178
5.5.13. Workflow Event ............................................................................... 182
5.5.14. Agent Event ..................................................................................... 183
5.5.15. Agent Failure Event ........................................................................... 183
5.5.16. Agent Message Event ........................................................................ 184
5.5.17. User Agent Message Event ................................................................. 185
5.5.18. Agent State Event ............................................................................. 185
5.5.19. Diameter Peer State Changed Event ..................................................... 186
5.5.20. ECS Insert Event .............................................................................. 189
5.5.21. ECS Statistics Event .......................................................................... 192
5.5.22. Debug Event .................................................................................... 195
5.5.23. Dynamic Update Event ...................................................................... 196
5.5.24. Workflow State Event ........................................................................ 196
5.5.25. Workflow External Reference Event ..................................................... 197
5.5.26. Supervision Event ............................................................................. 198
5.5.27. Redis HA Event ............................................................................... 201
5.5.28. Space Action Event ........................................................................... 202
5.5.29. <User Defined> Event ....................................................................... 204
5.6. Event Category ............................................................................................. 205
5.7. Enabling External Referencing ........................................................................ 206
6. Inspection .............................................................................................................. 207
6.1. Menus and Buttons ....................................................................................... 207
6.2. Aggregation Session Inspector ........................................................................ 207
6.3. Alarm Inspector ........................................................................................... 207
6.4. Archive Inspector ......................................................................................... 207
6.5. Duplicate Batch Inspector .............................................................................. 207
6.6. Duplicate UDR Inspector ............................................................................... 207
6.7. ECS Inspection ............................................................................................. 208
6.7.1. ECS Inspector .................................................................................... 208
6.7.2. Configuring Searchable Fields in the ECS ............................................... 214
6.7.3. Restricted Fields Configuration ............................................................. 216
6.7.4. Configuring Restricted Fields in the ECS ................................................ 216
6.7.5. Searching the ECS .............................................................................. 217
6.7.6. ECS Inspector Table ........................................................................... 221
6.7.7. Error Codes ....................................................................................... 225
6.7.8. Reprocessing Groups .......................................................................... 227
6.7.9. Changing State .................................................................................. 227

6.8. ECS Statistics .............................................................. 228
6.8.1. Searching the ECS Statistics ................................................................. 228
7. Tools ..................................................................................................................... 232
7.1. Menus and Buttons ....................................................................................... 232
7.2. Access Controller .......................................................................................... 232
7.2.1. Users Tab .......................................................................................... 232
7.2.2. Access Groups Tab ............................................................................. 233
7.2.3. Authentication Method Tab .................................................................. 234
7.2.4. Enhanced User Security ....................................................................... 237
7.2.5. Enhanced Security Password Rules ........................................................ 237
7.3. Configuration Browser ................................................................................... 238
7.3.1. Menus .............................................................................................. 239
7.3.2. The Folder Pane ................................................................................. 240
7.3.3. Configuration Browser Table ................................................................ 241
7.3.4. Configuration Tracer ........................................................................... 241
7.3.5. Properties ......................................................................................... 242
7.4. Configuration Monitor ................................................................................... 245
7.4.1. Menus and Buttons ............................................................................. 246
7.4.2. Configuration Monitor Table ................................................................ 246
7.4.3. Details ............................................................................................. 246
7.5. Documentation Generator ............................................................................... 247
7.5.1. To Generate Automated Documentation .................................................. 247
7.5.2. Content of Automated Documentation .................................................... 248
7.6. Execution Manager ....................................................................................... 249
7.6.1. The Overview Tab .............................................................................. 249
7.6.2. The Running Workflows Tab ................................................................ 250
7.6.3. The Detail Views Tab .......................................................................... 251
7.6.4. Managing the Detail Views Tabs ............................................................ 251
7.7. Pico Manager ............................................................................................... 252
7.7.1. The Pico tab ...................................................................................... 253
7.7.2. The Groups Tab ................................................................................. 254
7.8. Pico Viewer ................................................................................................. 255
7.8.1. Tool-Tip Information ........................................................................... 256
7.9. System Exporter ........................................................................................... 256
7.9.1. Exporting .......................................................................................... 257
7.9.2. Export File Structure ........................................................................... 260
7.10. System Importer ......................................................................................... 261
7.10.1. Importing ........................................................................................ 261
7.11. System Log ................................................................................................ 264
7.11.1. Searching the System Log .................................................................. 266
7.11.2. Printing the System Log ..................................................................... 268
7.12. System Statistics ......................................................................................... 268
7.12.1. Host Statistics .................................................................................. 268
7.12.2. Pico Instance ................................................................................... 269
7.12.3. Workflow Statistics ........................................................................... 270
7.12.4. Viewing the System Statistics .............................................................. 270
7.12.5. Exporting Statistics ........................................................................... 273
7.12.6. Importing Statistics ........................................................................... 273
7.13. UDR File Editor .......................................................................................... 274
7.14. Ultra Format Converter ................................................................................. 274
8. Monitoring ............................................................................................................. 275
8.1. Starting the Jconsole Client ............................................................................. 275
8.2. Event Monitoring .......................................................................................... 276
8.2.1. Monitoring the EventServerQueue ......................................................... 276
8.2.2. Monitoring the EventListenerQueue ....................................................... 277
8.2.3. Monitoring the ECEventSenderQueue .................................................... 278
8.3. Workflow Monitoring .................................................................................... 279
8.4. RCP Latency Monitoring ................................................................................ 281

8.4.1. Attributes .......................................................................... 282
8.4.2. Operations ........................................................................................ 283
8.5. Aggregation Monitoring ................................................................................. 283
8.5.1. File Storage ....................................................................................... 283
8.5.2. Couchbase Storage ............................................................................. 285
8.6. Couchbase Monitoring ................................................................................... 288
8.6.1. Monitoring the ConfigCordinator .......................................................... 289
8.6.2. Monitoring the MonitorCoordinator ....................................................... 290
8.6.3. Monitoring the Monitor_<cluster id> ..................................................... 291
9. Appendix I - Profiles ................................................................................................ 293
9.1. Audit Profile ................................................................................................ 293
9.1.1. Audit Profile Menus ............................................................................ 294
9.1.2. Audit Profile Buttons .......................................................................... 294
9.1.3. Adding and Editing a Table Mapping ...................................................... 294
9.1.4. An Example ...................................................................................... 296
9.2. Couchbase Profile ......................................................................................... 299
9.2.1. Connectivity settings ........................................................................... 299
9.2.2. Management settings ........................................................................... 300
9.2.3. Advanced Configurations ..................................................................... 301
9.2.4. Couchbase Profile Menus .................................................................... 302
9.2.5. Couchbase Profile Buttons ................................................................... 302
9.3. Database Profile ........................................................................................... 302
9.3.1. Database Profile Menus ....................................................................... 303
9.3.2. Database Profile Buttons ...................................................................... 303
9.3.3. Database Connection Setup .................................................................. 303
9.3.4. Enabling External Referencing .............................................................. 305
9.3.5. Database Types .................................................................................. 305
9.4. Distributed Storage Profile .............................................................................. 314
9.4.1. Overview .......................................................................................... 314
9.4.2. Configuration .................................................................................... 315
9.5. External Reference Profile .............................................................................. 315
9.5.1. Profile Management ............................................................................ 315
9.5.2. Configuration .................................................................................... 317
9.5.3. Enabling External References in an Agent Profile Field ............................. 318
9.5.4. Using passwords in External References ................................................. 319
9.6. Redis Profile ................................................................................................ 320
9.6.1. General Configurations ........................................................................ 321
9.6.2. Advanced Configurations ..................................................................... 322
9.6.3. Redis Profile Menus ............................................................................ 323
9.6.4. Redis Profile Buttons .......................................................................... 323
9.7. Shared Table Profile ...................................................................................... 323
9.7.1. Memory Allocation ............................................................................. 323
9.7.2. The Shared Table Profile Configuration ................................................. 324
9.7.3. APL ................................................................................................. 326
10. Appendix II - Collection agents ................................................................................ 328
10.1. AFT/TCP Agent .......................................................................................... 328
10.1.1. Introduction ..................................................................................... 328
10.1.2. AFT/TCP Agent ............................................................................... 328
10.2. FTAM/5ESS Agent ...................................................................................... 331
10.2.1. Introduction ..................................................................................... 331
10.2.2. FTAM/5ESS Agent ........................................................................... 331
10.2.3. FTAM Interface Service ..................................................................... 335
10.3. FTAM/EWSD Agent .................................................................................... 336
10.3.1. Introduction ..................................................................................... 336
10.3.2. FTAM/EWSD Agent ......................................................................... 336
10.3.3. FTAM Interface Service ..................................................................... 340
10.4. FTAM/IOG Agent ....................................................................................... 340
10.4.1. Introduction ..................................................................................... 340

10.4.2. FTAM/IOG Agent ............................................................. 341
10.4.3. FTAM Interface Service ..................................................................... 344
10.5. FTAM/Nokia Agent ..................................................................................... 345
10.5.1. Introduction ..................................................................................... 345
10.5.2. FTAM/Nokia Agent .......................................................................... 346
10.5.3. FTAM Interface Service ..................................................................... 349
10.6. FTAM/S12 Agent ........................................................................................ 350
10.6.1. Introduction ..................................................................................... 350
10.6.2. FTAM/S12 Agent ............................................................................. 351
10.6.3. FTAM Interface Service ..................................................................... 354
10.7. FTP/DX200 Collection Agent ........................................................................ 355
10.7.1. Introduction ..................................................................................... 355
10.7.2. Overview ........................................................................................ 355
10.7.3. Preparations ..................................................................................... 355
10.7.4. Configuration ................................................................................... 358
10.8. FTP/EWSD Agent ....................................................................................... 363
10.8.1. Introduction ..................................................................................... 363
10.8.2. Overview ........................................................................................ 363
10.8.3. Configuration ................................................................................... 364
10.8.4. Transaction Behavior ......................................................................... 366
10.8.5. Introspection .................................................................................... 367
10.8.6. Meta Information Model .................................................................... 367
10.8.7. Agent Event Messages ....................................................................... 367
10.8.8. Debug Events ................................................................................... 368
10.9. FTP/NMSC Collection Agent ........................................................................ 368
10.9.1. Introduction ..................................................................................... 368
10.9.2. FTP/NMSC Collection Agents ............................................................ 368
10.10. GTP' Agent .............................................................................................. 372
10.10.1. Introduction ................................................................................... 372
10.10.2. Overview ....................................................................................... 372
10.10.3. Configuration ................................................................................. 374
10.10.4. Introspection .................................................................................. 378
10.10.5. Meta Information Model .................................................................. 378
10.10.6. MZSH Commands .......................................................................... 378
10.10.7. Agent Message Events ..................................................................... 378
10.10.8. Debug Events ................................................................................. 378
10.10.9. Limitations - GTP' Transported Over TCP ........................................... 379
10.11. HiCAP Agent ........................................................................................... 379
10.11.1. Introduction ................................................................................... 379
10.11.2. Overview ....................................................................................... 379
10.11.3. Configuration ................................................................................. 380
10.11.4. Introspection .................................................................................. 383
10.11.5. Meta Information Model .................................................................. 383
10.11.6. Agent Message Events ..................................................................... 384
10.11.7. Debug Events ................................................................................. 384
10.12. HTTPD Agent ........................................................................................... 384
10.12.1. Introduction ................................................................................... 384
10.12.2. HTTPD Agent ................................................................................ 384
10.12.3. APL Functions ............................................................................... 387
10.13. HTTP Batch Agent .................................................................................... 388
10.13.1. Introduction ................................................................................... 388
10.13.2. HTTP Batch Collection Agent ........................................................... 388
10.13.3. Appendix ....................................................................................... 393
10.14. IBM MQ Agent ......................................................................................... 393
10.14.1. Introduction ................................................................................... 393
10.14.2. IBM MQ Collection agent ................................................................ 394
10.14.3. IBM MQ UDRs .............................................................................. 396
10.14.4. Meta Information Model .................................................................. 399

10.14.5. Agent Message Events ..................................................... 399
10.14.6. Agent Debug Events ........................................................................ 399
10.14.7. IBM MQ APL functions ................................................................... 400
10.14.8. Examples ...................................................................................... 402
10.15. Netflow Agent .......................................................................................... 404
10.15.1. Introduction ................................................................................... 404
10.15.2. NetFlow Agent ............................................................................... 404
10.15.3. Netflow V9 considerations ................................................................ 407
10.16. Nokia IACC Agent .................................................................................... 408
10.16.1. Introduction ................................................................................... 408
10.16.2. Nokia IACC Agent .......................................................................... 408
10.16.3. An Example ................................................................................... 411
10.17. Merge Files Agent ..................................................................................... 413
10.17.1. Introduction ................................................................................... 413
10.17.2. Merge Files Collection Agent ............................................................ 413
10.18. Latency Statistics ....................................................................................... 417
10.18.1. Introduction ................................................................................... 417
10.18.2. Latency Statistics Agent ................................................................... 418
10.18.3. Latency Related APL Functions ......................................................... 419
10.18.4. Related UDR Types ......................................................................... 421
11. Appendix III - Processing agents ............................................................................... 423
11.1. Aggregation Agent ....................................................................................... 423
11.1.1. Introduction ..................................................................................... 423
11.1.2. Overview ........................................................................................ 423
11.1.3. Configuration ................................................................................... 424
11.1.4. Aggregation Session Inspection ........................................................... 446
11.1.5. Example - Association of IP Data ......................................................... 448
11.2. Analysis Agent ........................................................................................... 453
11.2.1. Introduction ..................................................................................... 453
11.2.2. Analysis Agent ................................................................................. 453
11.3. Categorized Grouping Agent ......................................................................... 467
11.3.1. Introduction ..................................................................................... 467
11.3.2. Categorized Grouping Agent ............................................................... 467
11.3.3. Example with Categorized Grouping Agent ........................................... 471
11.4. Compression Agents .................................................................................... 474
11.4.1. Introduction ..................................................................................... 474
11.4.2. Overview ........................................................................................ 475
11.4.3. Decompressor Agent ......................................................................... 475
11.4.4. Compressor Agent ............................................................................ 477
11.5. Decoder Agent ............................................................................................ 478
11.5.1. Configuration ................................................................................... 478
11.5.2. Transaction Behavior ......................................................................... 479
11.5.3. Introspection .................................................................................... 480
11.5.4. Meta Information Model .................................................................... 480
11.5.5. Agent Message Events ....................................................................... 480
11.5.6. Debug Events ................................................................................... 480
11.6. Duplicate Batch Agent ................................................................................. 480
11.6.1. Introduction ..................................................................................... 480
11.6.2. Duplicate Batch Detection Agent ......................................................... 481
11.6.3. Duplicate Batch Inspector .................................................................. 484
11.7. Duplicate UDR Detection Agent .................................................................... 485
11.7.1. Introduction ..................................................................................... 485
11.7.2. Duplicate UDR Detection Agent .......................................................... 485
11.7.3. Duplicate UDR Inspector ................................................................... 491
11.8. Encoder Agent ............................................................................................ 493
11.8.1. Configuration - Batch Workflow .......................................................... 493
11.8.2. Configuration - Real-time Workflow ..................................................... 493
11.8.3. Transaction Behavior - Batch workflow ................................................ 494

11.8.4. Introspection .................................................................... 494
11.8.5. Meta Information Model .................................................................... 494
11.8.6. Agent Message Events ....................................................................... 494
11.8.7. Debug Events ................................................................................... 494
11.8.8. Agent Services - Batch Workflow ......................................................... 495
11.9. SQL Loader Agent ...................................................................................... 496
11.9.1. Introduction ..................................................................................... 496
11.9.2. Overview ........................................................................................ 496
11.9.3. Configuration ................................................................................... 497
11.9.4. Transaction Behavior ......................................................................... 498
11.9.5. Introspection .................................................................................... 498
11.9.6. Meta Information Model .................................................................... 498
11.9.7. Debug Events ................................................................................... 498
11.9.8. SQL Statements ............................................................................... 498
11.10. PSI Agent ................................................................................................ 500
11.10.1. Introduction ................................................................................... 500
11.10.2. PSI Agent ...................................................................................... 500
11.10.3. Exception Messages ........................................................................ 506
11.10.4. Example ........................................................................................ 507
11.11. RTBS Agent ............................................................................................. 508
11.11.1. Introduction ................................................................................... 508
11.11.2. RTBS Agent ................................................................................... 508
11.11.3. Event and Exception Messages .......................................................... 512
11.11.4. WFCommands ................................................................................ 513
11.11.5. APL ............................................................................................. 513
11.11.6. Example ........................................................................................ 513
12. Appendix IV - Forwarding agents ............................................................................. 516
12.1. Archiving ................................................................................................... 516
12.1.1. Introduction ..................................................................................... 516
12.1.2. Overview ........................................................................................ 516
12.1.3. Configuration ................................................................................... 516
12.1.4. Archive Inspector .............................................................................. 524
12.1.5. Maintaining Archives ........................................................................ 526
12.1.6. Transaction Behavior ......................................................................... 526
12.1.7. Introspection .................................................................................... 527
12.1.8. Meta Information Model .................................................................... 527
12.1.9. Agent Message Events ....................................................................... 527
12.1.10. Debug Events ................................................................................. 527
13. Appendix V - Collection and Processing Agents ........................................................... 528
13.1. Radius Agents ............................................................................................ 528
13.1.1. Introduction ..................................................................................... 528
13.1.2. Radius Server Agent .......................................................................... 528
13.1.3. Radius Client Agent .......................................................................... 531
13.1.4. Radius Related UDR Types ................................................................ 535
13.1.5. The Radius Format ............................................................................ 535
13.1.6. An Example ..................................................................................... 536
13.2. Diameter Agents ......................................................................................... 539
13.2.1. Introduction ..................................................................................... 539
13.2.2. Overview ........................................................................................ 540
13.2.3. Diameter Profiles .............................................................................. 547
13.2.4. Diameter_Stack Agent ....................................................................... 565
13.2.5. Diameter_Request Agent .................................................................... 574
13.2.6. A Diameter Example ......................................................................... 574
13.2.7. Syntax Description ............................................................................ 581
13.2.8. Configuration and Design Considerations .............................................. 589
13.3. Kafka Agents .............................................................................................. 590
13.3.1. Introduction ..................................................................................... 590
13.3.2. Preparations ..................................................................................... 590

13.3.3. Overview ........................................................................ 591
13.3.4. Kafka Profile ................................................................................... 592
13.3.5. Kafka Forwarding Agent .................................................................... 594
13.3.6. Kafka Collection Agent ...................................................................... 596
13.3.7. Kafka UDR Types ............................................................................. 598
13.4. SMPP Agents ............................................................................................. 598
13.4.1. Introduction ..................................................................................... 598
13.4.2. Overview ........................................................................................ 599
13.4.3. Agents ............................................................................................ 599
13.4.4. Introspection .................................................................................... 601
13.4.5. Meta Information Model .................................................................... 602
13.4.6. Agent Message Events ....................................................................... 602
13.4.7. Debug Events ................................................................................... 602
13.4.8. SMPP UDRs .................................................................................... 603
13.4.9. Examples ........................................................................................ 606
13.5. Web Service Agents ..................................................................................... 607
13.5.1. Introduction ..................................................................................... 607
13.5.2. Overview ........................................................................................ 608
13.5.3. WS Profile Configuration ................................................................... 610
13.5.4. UDR Type Structure .......................................................................... 615
13.5.5. Web Service Provider Agent ............................................................... 619
13.5.6. Web Service Request Agent ................................................................ 621
13.5.7. Example ......................................................................................... 622
13.6. Workflow Bridge Agents ............................................................................... 629
13.6.1. Introduction ..................................................................................... 629
13.6.2. Overview ........................................................................................ 629
13.6.3. Workflow Bridge Profile .................................................................... 631
13.6.4. Workflow Bridge Forwarding Agents .................................................... 633
13.6.5. Workflow Bridge Collection Agent ....................................................... 638
13.6.6. Workflow Bridge UDR Types .............................................................. 640
13.6.7. Examples ........................................................................................ 643
14. Appendix VI - Collection and Forwarding Agents ........................................................ 659
14.1. Database Agents ......................................................................................... 659
14.1.1. Introduction ..................................................................................... 659
14.1.2. Database Collection Agent .................................................................. 659
14.1.3. Database Forwarding Agent ................................................................ 666
14.1.4. General ........................................................................................... 671
14.2. Disk Agents ............................................................................................... 678
14.2.1. Introduction ..................................................................................... 678
14.2.2. Disk Collection Agent ....................................................................... 679
14.2.3. Disk Forwarding Agent ...................................................................... 685
14.3. FTP Agents ................................................................................................ 690
14.3.1. Introduction ..................................................................................... 690
14.3.2. FTP Collection Agent ........................................................................ 690
14.3.3. FTP Forwarding Agent ...................................................................... 700
14.4. Hadoop File System Agents .......................................................................... 708
14.4.1. Introduction ..................................................................................... 708
14.4.2. Preparations ..................................................................................... 709
14.4.3. Hadoop FS Collection Agent ............................................................... 710
14.4.4. Hadoop FS Forwarding Agent ............................................................. 717
14.5. Inter Workflow Agents ................................................................................. 723
14.5.1. Introduction ..................................................................................... 723
14.5.2. Overview ........................................................................................ 723
14.5.3. Inter Workflow Profile ....................................................................... 724
14.5.4. Inter Workflow Collection Agent ......................................................... 727
14.5.5. Inter Workflow Forwarding Agents ...................................................... 732
14.6. SCP Agents ................................................................................................ 735
14.6.1. Introduction ..................................................................................... 735

14.6.2. Overview ........................................................................ 735
14.6.3. Preparations ..................................................................................... 735
14.6.4. SCP Collection Agent ........................................................................ 738
14.6.5. SCP Forwarding Agent ...................................................................... 746
14.7. SFTP Agents .............................................................................................. 755
14.7.1. Introduction ..................................................................................... 755
14.7.2. Preparations ..................................................................................... 755
14.7.3. SFTP Collection Agent ...................................................................... 758
14.7.4. SFTP Forwarding Agent ..................................................................... 767
14.8. SQL Agents ............................................................................................... 776
14.8.1. Introduction ..................................................................................... 776
14.8.2. SQL Collection Agent ....................................................................... 776
14.8.3. SQL Forwarding Agent ...................................................................... 779
14.9. TCP/IP Agents ............................................................................................ 783
14.9.1. Introduction ..................................................................................... 783
14.9.2. TCP/IP Forwarding Agent .................................................................. 783
14.9.3. TCP/IP Collection Agent .................................................................... 787
14.9.4. An Example ..................................................................................... 791
15. Appendix VII - Collection Strategies ......................................................................... 795
15.1. APL Collection Strategy ............................................................................... 795
15.1.1. Prerequisites .................................................................................... 795
15.1.2. Overview ........................................................................................ 795
15.1.3. APL Collection Strategy Editor ........................................................... 795
15.1.4. Configuration ................................................................................... 797
15.1.5. The FileInfo UDR Type ..................................................................... 798
15.1.6. APL Functions ................................................................................. 799
15.2. Control File Collection Strategy ..................................................................... 799
15.2.1. Overview ........................................................................................ 799
15.3. Duplicate Filter Collection Strategy ................................................................ 803
15.3.1. Overview ........................................................................................ 803
15.3.2. Configuration ................................................................................... 803
15.4. Multi Directory Collection Strategy ................................................................ 804
15.4.1. Overview ........................................................................................ 804
15.4.2. Configuration ................................................................................... 804
16. Appendix IX - Error Correction System ..................................................................... 807
16.1. Error Correction System ............................................................................... 807
16.1.1. Introduction ..................................................................................... 807
16.1.2. ECS Forwarding Agent ...................................................................... 807
16.1.3. ECS Collection Agent ........................................................................ 808
16.1.4. ECS_Maintenance System Task ........................................................... 812
16.1.5. ECS Inspection ................................................................................ 813
16.1.6. ECS Statistics .................................................................................. 813
16.1.7. Example - ECS handling of UDRs ....................................................... 814
16.1.8. Example - ECS handling of batches ...................................................... 816


1. Introduction
MediationZone® is a data mediation foundation, based on a distributed real-time architecture on which
any type of mediation functionality can be deployed.

The system is based on workflow technology, where mediation processes can be modeled in a graphical user interface. Workflow activities are performed by software agents that are linked into flows providing the required mediation functionality.

Figure 1. System Architecture Diagram

1.1. Prerequisites
The reader of this document should be familiar with:

• Databases

• Distributed systems

For information about Terms and Abbreviations used in this document, see the Terminology document.

1.2. Execution Context


A workflow is loaded and started on an Execution Context according to configured distribution criteria.
See Section 4.1.8.4, “Execution Tab”. For example, the workflow can be distributed based on machine
load, or by explicitly specifying the Execution Contexts where the workflow should run.

There are two kinds of Execution Contexts: one that can execute any type of workflow, and one that
can run stand-alone. The stand-alone version only works with real-time workflows that are configured
not to depend on external entities. The purpose of a stand-alone workflow is to allow it to run without
relying on the platform, for example in an environment where the network is unreliable, or where the
workflow must guarantee uptime even if the platform has terminated for some reason. If the platform
is down, a stand-alone Execution Context keeps track of all events that occurred, and once the platform
is up and running again, these events are propagated to the platform. The Debug Event and internal
events for statistics are not remembered.


An Execution Context features a Web Interface showing the running workflows. The Web Interface
should only be used if the platform process is unavailable, or if the user is unable to stop a workflow
due to communication failure between the platform and the Execution Context.

1.3. Commands
A workflow agent may support execution of commands while it is executing. Such commands are
agent specific and can be invoked from either the command-line tool mzsh or the workflow monitor.
A command will in most cases affect the data in the workflow in some way, for instance by flushing
an internal cache of data to downstream agents.

1.3.1. Agent Command


Agents that can be controlled while active may provide a user interface for interaction with the agent,
or just rely on the wfcommand command in mzsh. If the agent provides a command user interface, it is
displayed when double-clicking the agent. The Command tab holds the configuration for the command,
and Execute runs the command. The result is shown once the command is complete.


2. Desktop Overview
Desktop is the user interface application that enables you to manage, navigate, and monitor Medi-
ationZone® . With Desktop you create workflows. A workflow is a set of agents that are connected
to each other and represent a flow of data processing. All the agents in a workflow operate on a specific
data type, and most agents need to know the structure of the data, known as the introspection, in order
to operate properly.

This chapter describes applications, features and settings used within the MediationZone® Desktop.

2.1. Security
2.1.1. Login/Logout
When the MediationZone® Desktop application is started a login window is presented. In order to
gain access to the Desktop, the user has to be authenticated by supplying a Username and a Password.

Figure 2. The Login Dialog Box

Once the Username and Password have been successfully entered, the Desktop will be available.
Depending on the logged-in user, access is granted to different parts of the system. Note that all parts
of Desktop are visible to all users, regardless of permission restrictions. Configuration and operation
options may, however, be disabled.

Note! By setting the mz.security.user.restricted.login property in the
platform.xml file to value="true", you restrict user logins to one login for each of the
interface types:

• a single Desktop

• a single MediationZone® Web Interface, and

• a single MZSH shell

A second consecutive attempt to log in to any of these is not authorized.

To restrict logins in this way, add the following line to the platform.xml file:

<property name="mz.security.user.restricted.login" value="true"/>

The name of the logged in user and the name of the MediationZone® system that Desktop has connected
to are available in the status area at the bottom of Desktop.

A login banner can be added to the login window. The purpose of the login banner is to provide
information to the user before logging in. To enable the login banner, the property
mz.security.login.banner is added to the platform.xml file. The value of the property is the
name of a file with the text that should be displayed in the banner. The text in the login banner can be
formatted using HTML tags.
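
For example, assuming the banner text is kept in a file called login_banner.txt (the file name is an
illustrative choice, and where the file must be placed may depend on your installation), the property
could be added to platform.xml like this:

<property name="mz.security.login.banner" value="login_banner.txt"/>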


2.1.1.1. Logging Into A Configuration Space


If you have more than one configuration space, when you log in to the Desktop, there is a dropdown
list from which you can select the space that you want to work in.

Figure 3. List of available spaces

For further information on configuration spaces, see the Configuration Spaces document.

2.1.1.2. Accessing Multiple Systems


The same Desktop application may be configured to run against several MediationZone® systems.
In that case, a list will appear from which a specific instance is selected.

To be able to do this, the desktop.xml file must be updated to describe which systems are available.
First open the desktop.xml file in a text editor and duplicate the config element, placing the copy
below the original, so that there are two config elements.

The properties that may be added are defined in the desktop.xml file and are modified in order to
comply with the relevant MediationZone® instance.

Example 1.

<configlist prompt="true">
<config name="Desktop1">
.
.
.
<property name="pico.rcp.platform.host" value="10.0.1.33"/>
<property name="pico.rcp.platform.port" value="4111"/>
</config>
<config name="Desktop2">
.
.
.
<property name="pico.rcp.platform.host" value="10.0.1.33"/>
<property name="pico.rcp.platform.port" value="4192"/>
</config>
</configlist>


Figure 4. Several MediationZone® Instances May Be Accessed

2.1.1.3. Changing Desktop User


To change the Desktop user, you have to select Exit from the File menu and then start the Desktop
again.

2.1.1.4. Closing Desktop


By selecting Exit from the File menu the user is logged out and Desktop is closed.

2.1.2. Locks
Configurations that you edit are locked for manipulation by other users. When you open a locked
Configuration, a message box appears with information about the user that has access and can edit it.
To edit a locked, or read-only, Configuration you can save a copy of it by a different name.

Figure 5. A Locked Configuration

Locks are not persistent. If the system is restarted, all locks will be forgotten.

2.1.3. Connection Failure


When MediationZone® detects a connection failure between the platform and the execution context,
it will block user input for as long as it takes to re-establish connection. Once reconnected, save any
open Configurations and close all dialog boxes.

Figure 6. A Disconnected Desktop

2.1.4. Encryption
A Configuration in MediationZone® is persisted using XML and therefore more or less available in
a readable form to any user (See Section 7.3, “Configuration Browser”). Some Configurations may
be sensitive and possibly contain descriptions that are proprietary and must be protected. To protect
such Configurations, MediationZone® features the ability to encrypt Configurations using a pass-
phrase. A Configuration will thereby only be readable given that the pass-phrase is known by the user.
In case the pass-phrase is lost the Configuration should be considered lost as well.

There are Configurations that generate information to the system, for example the Ultra format that
renders UDRs. A user can have access to the UDRs without knowing the pass-phrase for the Configuration
source, by setting the user group execute permission. A user can also import a format or analysis
package for which the execute permission is configured.


Encrypted Configurations retain their encryption and pass-phrase across export and import. This means
that in order to open a Configuration that is imported from another system, you need its pass-phrase.

The Database profile and some of the agents can use passwords from External References. These can
be encrypted, either by using the default key, or by using a crypto service keystore file. See Section 9.5.4,
“Using passwords in External References” for further information.

2.2. Administration and Management


2.2.1. Desktop Background Color
You can start several Desktop applications on a specific computer. To tell the difference between the
applications and their respective views you can vary the background color of every Desktop.

To change the Desktop application background color, add the following property to
$MZ_HOME/etc/desktop.xml:

<property name="mz.gui.color.space.active" value=""/>

value can be any of the following colors: blue, green, yellow, orange, red, darkblue, darkgreen, magenta,
or darkred.
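
For example, to give this Desktop a blue background (any of the colors listed above could be used
instead), the property would look like this:

<property name="mz.gui.color.space.active" value="blue"/>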

In addition, to tell the difference between different spaces, you can vary the background color of each
space. For further information on configuration spaces, see the Configuration Spaces document.

2.2.2. Dynamic Update


While a real-time workflow is being executed, you can change the value of the following parameters:

• The Host- and Port parameters of the TCP/IP agent

• The NAS list of the Radius agent

To be able to dynamically update TCP/IP Host- and Port parameters you need to set them to either
Default or Per Workflow in the Workflow Properties dialog box. See Figure 83, “The Workflow
Table Tab”.

To update, select Dynamic Update from the Edit menu. On the title bar of the monitor dialog-box the
text Dynamic Update followed by a number appears. It represents the number of times that you have
updated the workflow configuration while running, that is since the last time you started it.


Figure 7. Dynamic Update

2.2.3. Folders
Folders enable the user to categorize Configurations, and simplify their maintenance and operation.
Folders could, for instance, be created based on traffic type, decoding for a specific network element,
or geographic location.

MediationZone® includes a system folder named Default. This folder cannot be renamed or removed.

2.2.4. Configuration Naming


Configuration names within a folder must be unique.

Some named items in the MediationZone® environment are used when constructing file names. To
avoid potential conflicts in the file systems, MediationZone® will convert the illegal characters when
constructing the file names.

The following characters are considered to be legal. Any other character will result in a validation error.

• a-z

• A-Z

• 0-9

• - (dash)

• _ (underscore)

MediationZone® has an internal key for every Configuration. This key is used to identify the Config-
uration. Renaming a Configuration will not change this key. The key is constructed by using the system
name and the date when the Configuration was created. The generated key can be viewed by selecting
the Show Properties option in the right click menu in the Configuration Navigator, as well as in the
Configuration Browser and Configuration Tracer.

2.2.5. Date and Time Format Codes


In various places in MediationZone® date formats are entered. The following list shows valid date
and time format codes that may be combined with any characters that are not in the ranges of 'a'-'z'
and 'A'-'Z'. For instance, characters such as ':', '.', ' ', '#' and '@' will appear in the resulting time even
if they are not specified within single quotation marks.

The date syntax conforms to the Java class SimpleDateFormat. This section contains a summary
only. For a full description see:

http://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html

Letter Date or time component Example


G Era designator AD
y Year 1996; 96
M Month in year July; Jul; 07
w Week in year 27
W Week in month 2
D Day in year 189
d Day in month 10
F Day of week in month 2
E Day in week Tuesday; Tue
a am/pm marker PM
H Hour in day(0-23) 0
k Hour in day(1-24) 24
K Hour in am/pm (0-11) 0
h Hour in am/pm (1-12) 12
m Minute in hour 30
s Second in minute 55
S Millisecond 978
z Time zone PST; GMT-08:00
Z Time zone -08:00


Example 2.

The following examples show how date and time patterns are interpreted in the U.S. locale. The
given date and time are 2001-07-04 12:08:56 local time in the U.S. Pacific Time zone.

Format Example
"yyyy.MM.dd G 'at' HH:mm:ss z" 2001.07.04 AD at 12:08:56 PDT
"EEE, MMM d, ''yy" Wed, Jul 4, '01
"h:mm a" 12:08 PM
"hh 'o''clock' a, zzzz" 12 o'clock PM, Pacific Daylight Time
"K:mm a, z" 0:08 PM, PDT
"yyyyy.MMMMM.dd GGG hh:mm aaa" 02001.July.04 AD 12:08 PM
"EEE, d MMM yyyy HH:mm:ss Z" Wed, 4 Jul 2001 12:08:56 -0700
"yyMMddHHmmssZ" 010704120856-0700

2.2.6. Properties used for Desktop


When you have installed the Desktop client, you will have two different property files:
desktop.xml and common.xml. Both files are located in the <your local mz
directory>\etc directory.

2.2.6.1. Desktop Properties


This section describes the different properties that can be used in the desktop.xml file. Apart from
these properties, JDK arguments can also be added in the following format:

<jdkarg value="<the value of your JDK argument>" />
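
For example, to raise the Desktop JVM's maximum heap size, a JDK argument could be added as shown
below; the 1024 MB figure is just an illustrative value, not a recommended setting:

<jdkarg value="-Xmx1024m" />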

2.2.6.1.1. Default Properties in Desktop.xml

mz.gui.apl.syntaxhighlighting
Default value: yes

This property specifies whether you want the text in your APL code to be
color coded according to code definitions or not.

The text is color coded according to the following definitions: brown = strings,
dark blue = functions, light blue = constants, green = types, orange = user
defined types, purple = key words, red = comments.

mz.gui.editor.command
Default value: notepad.exe

This property specifies the command used for starting the editor you want to
use for editing APL code or Ultra Formats. If you, for example, want to use
Emacs and are running on Windows, the command should be emacs.exe,
while on Linux/Unix it should be emacs.

mz.gui.editor.menufontsizes
Default value: 8,10,12,14,18,20,24,36

This property specifies the font sizes you want to be able to choose from when
editing APL code or Ultra Formats in the APL Code Editor and the Ultra
Format Editor. The current value is displayed between the "-" and "+" magnifying
glasses to the left in the button list in the editors and can be changed by
clicking on the magnifying glasses, or by using the key combinations CTRL+
and CTRL-. The current value can also be changed by opening the right-click
popup menu and selecting Font Size.

Figure 8.

mz.gui.restart.tabs
Default value: false

This property determines whether the tabs that are open when exiting Desktop
should be remembered or not. The default behaviour is that the tabs will not
be remembered, but setting this property to true will restore the open tabs
the next time Desktop is opened.

Note! Setting this property to true may cause the startup of Desktop
to be a bit slower.

mz.gui.ufl.syntaxhighlighting
Default value: yes

This property specifies whether you want the text in your Ultra Formats to be
color coded according to code definitions or not.

The text is color coded according to the following definitions: brown = strings,
dark blue = functions, light blue = constants, green = types, orange = user
defined types, purple = key words, red = comments.

pico.bootstrapclass
Default value: com.digitalroute.ui.MZDesktopMain

This property specifies the bootstrap class used for desktop.xml.

pico.swing
Default value: yes

This property specifies how you want notifications to be made for this pico.
For Desktops you usually want notifications to be made in the GUI, and in
that case this property should be set to yes, meaning that Swing will be used.
For other picos, such as the Platform and the Execution Context, this property
will usually be excluded, which will result in notifications being sent to the
console instead.

pico.type
Default value: desktop

This property specifies the type of pico instance used for the Desktop. See the
Terminology document for further information.

swing.aatext
Default value: true

This property specifies that Java anti-aliasing should be used, which will improve
the display of graphical elements in the GUI.
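
As an illustration, a config element in desktop.xml that spells out a few of these defaults explicitly
could look like the sketch below. The config name and the selection of properties are just examples,
and the values shown are the defaults listed above.

<config name="Desktop1">
    <property name="pico.type" value="desktop"/>
    <property name="pico.swing" value="yes"/>
    <property name="pico.bootstrapclass" value="com.digitalroute.ui.MZDesktopMain"/>
    <property name="mz.gui.apl.syntaxhighlighting" value="yes"/>
    <property name="mz.gui.editor.command" value="notepad.exe"/>
    <property name="swing.aatext" value="true"/>
</config>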

2.2.6.1.2. Additional Properties in Desktop.xml

mz.gui.apl.color.comments
Default value: ""

With this property you can specify the color you want to use for comments
in the APL code. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value red will be used.

mz.gui.apl.color.constants
Default value: ""

With this property you can specify the color you want to use for constants
in the APL code. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value light blue will be used.

mz.gui.apl.color.error
Default value: ""

With this property you can specify the color you want to use for errors in
the APL code. Colors are entered in hex format, e.g. "#666666".

mz.gui.apl.color.functions
Default value: ""

With this property you can specify the color you want to use for functions
in the APL code. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value dark blue will be used.

mz.gui.apl.color.keywords
Default value: ""

With this property you can specify the color you want to use for keywords
in the APL code. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value purple will be used.

mz.gui.apl.color.normal
Default value: ""

With this property you can specify the color you want to use on your regular
APL code. Colors are entered in hex format, e.g. "#666666".

mz.gui.apl.color.owntypes
Default value: ""

With this property you can specify the color you want to use for user
defined types in the APL code. Colors are entered in hex format, e.g.
"#666666". If this property is not included in desktop.xml, the default
value orange will be used.

mz.gui.apl.color.strings
Default value: ""

With this property you can specify the color you want to use for strings in
the APL code. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value brown will be used.

mz.gui.apl.color.types
Default value: ""

With this property you can specify the color you want to use for types in
the APL code. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value green will be used.

mz.gui.apl.syntaxfile
Default value: ""

With this property you can specify a file that contains further components
within the APL code that you want to be able to highlight. The value should
contain the path and file name of the syntax file.


mz.gui.color.space.active
Default value: ""

This property can be added in order to change the background color of the
Desktop. Possible values are: blue, green, yellow, orange, red,
darkblue, darkgreen, magenta, or darkred.

mz.gui.systemexport.default.dir
Default value: ""

This property can be added in order to configure the default directory you
want to use when clicking on the Browse... button when doing a system
export. The value must be the full path to an existing directory, e.g. /home/mz.

mz.gui.systemimport.default.dir
Default value: ""

This property can be added in order to configure the default directory you
want to use when clicking on the Browse... button when doing a system
import. The value must be the full path to an existing directory, e.g. /home/mz.

mz.gui.udreditor.limit
Default value: yes

This property can be used to support decoding of files that are larger than
3MB. When set to yes, the UDR File Editor will only read up to 3MB and
then stop; when set to no, the UDR File Editor will continue to read until
the end of the file is reached.

mz.gui.ufl.color.comments
Default value: ""

With this property you can specify the color you want to use for comments
in the Ultra formats. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value red will be used.

mz.gui.ufl.color.constants
Default value: ""

With this property you can specify the color you want to use for constants
in the Ultra formats. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value light blue will be used.

mz.gui.ufl.color.error
Default value: ""

With this property you can specify the color you want to use for errors in
the Ultra formats. Colors are entered in hex format, e.g. "#666666".

mz.gui.ufl.color.functions
Default value: ""

With this property you can specify the color you want to use for functions
in the Ultra formats. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value dark blue will be used.

mz.gui.ufl.color.keywords
Default value: ""

With this property you can specify the color you want to use for keywords
in the Ultra formats. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value purple will be used.


mz.gui.ufl.color.normal
Default value: ""

With this property you can specify the color you want to use on your regular
Ultra format text. Colors are entered in hex format, e.g. "#666666".

mz.gui.ufl.color.owntypes
Default value: ""

With this property you can specify the color you want to use for user
defined types in the Ultra formats. Colors are entered in hex format, e.g.
"#666666". If this property is not included in desktop.xml, the default
value orange will be used.

mz.gui.ufl.color.strings
Default value: ""

With this property you can specify the color you want to use for strings in
the Ultra formats. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value brown will be used.

mz.gui.ufl.color.types
Default value: ""

With this property you can specify the color you want to use for types in
the Ultra formats. Colors are entered in hex format, e.g. "#666666". If this
property is not included in desktop.xml, the default value green will be used.

mz.gui.ufl.syntaxfile
Default value: ""

With this property you can specify a file that contains further components
within the Ultra formats that you want to be able to highlight. The value
should contain the path and file name of the syntax file.

mz.gui.wfeditor.maxrows
Default value: 500

This property can be added to change the maximum number of allowed
rows in the workflow table. If this property has not been added, the default
value of 500 rows will apply.

pico.inhibit.startmessage
Default value: "false"

This property determines if the start message, generated when starting the
Desktop client, should be logged or not. The default value is false, which
means that the start message will be logged. Excluding the property entirely
will have the same effect. Setting the property to true will result in no
logging of the start message.

pico.logdateformat
Default value: "YYYY-MM-DD"

This property specifies the date format to be used in the log files. See
http://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html
for further information.

pico.name
Default value: "<pico instance type>"

This property specifies the name of the pico instance used for the Desktop.
If this property is not included, the name of the config element will be
used, see Section 2.2.6.1.4, “Configuration Properties for Multiple Desktops
in Desktop.xml” for further information.

pico.pid
Default value: $MZ_HOME/core/log

This property specifies the directory you want the Desktop to write its process
ID (PID) file to. If this property is not included in desktop.xml, the default
directory, $MZ_HOME/core/log, will be used.

pico.stderr
Default value: $MZ_HOME/core/log

This property specifies the directory you want the Desktop to write standard
errors to. If this property is not included in desktop.xml, the default
directory, $MZ_HOME/core/log, will be used.

pico.stdout
Default value: $MZ_HOME/core/log

This property specifies the directory you want the Desktop to write standard
output to. If this property is not included in desktop.xml, the default
directory, $MZ_HOME/core/log, will be used.

pico.tmpdir
Default value: ""

This property specifies the pico temp directory you want the Desktop to
use. If this property is not included in desktop.xml, the default directory,
$MZ_HOME/core/log, will be used.
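
As a sketch of how a few of these optional properties could be added to desktop.xml, the snippet
below sets a custom comment color for APL code, raises the workflow table row limit, and points the
system export dialog to a default directory. The specific values are illustrative choices only.

<property name="mz.gui.apl.color.comments" value="#666666"/>
<property name="mz.gui.wfeditor.maxrows" value="1000"/>
<property name="mz.gui.systemexport.default.dir" value="/home/mz"/>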

2.2.6.1.3. Properties for Enabling Logging

To enable logging for the Desktop, add the following lines to the desktop.xml file:

<property name="java.util.logging.config.class" value="com.digitalroute.picostart
<property name="pico.log.level" value="INFO"/>
<property name="pico.log.filter" value="com.digitalroute.wf"/>

where the pico.log.level specifies the log level. Available levels are:

• Finest

• Fine

• Info (most common)

• Warning

• Severe

• Off (default, which is also the same as having no logging properties included)

2.2.6.1.4. Configuration Properties for Multiple Desktops in Desktop.xml

If you want to configure several Desktops that connect to different Platforms, add <config> sections
for each Desktop in the desktop.xml file according to the following example.


Example 3.

<configlist prompt="true">
<config name="Desktop1">
.
.
.
<property name="pico.rcp.platform.host" value="10.0.1.33"/>
<property name="pico.rcp.platform.port" value="4111"/>
</config>
<config name="Desktop2">
.
.
.
<property name="pico.rcp.platform.host" value="10.0.1.33"/>
<property name="pico.rcp.platform.port" value="4192"/>
</config>
</configlist>

The following properties, that are usually not included in desktop.xml, have to be added for each
Desktop:

pico.rcp.platform.host
Default value: ""

This property is left blank by default for the Desktop client installation.
It may be used for specifying the IP address or the name of the MediationZone®
Platform host.

pico.rcp.platform.port
Default value: ""

This property is left blank by default for the Desktop client installation. It may
be used for specifying the port to be used for communicating with the
MediationZone® Platform host.

2.2.6.2. Properties in Common.xml


The following properties are available in the common.xml file in your Desktop client installation.

java.nio.preferSelect
Default value: true

This property specifies that computers using OS X should use select
instead of kqueue to avoid problems with the kernels.

pico.cache.basedir
Default value: ${mz.home}/pico-cache

This property specifies the directory that should be used for the pico-cache,
which is caching information about all running picos and is used by all
servers and clients.

pico.tmpdir
Default value: ${mz.home}/tmp

This property specifies the temp directory you want to use for your picos.

pico.rcp.server.host
Default value: ""

This property is left blank by default. If a High Availability environment
is configured, this value may be set. Consult your system administrator
for further information.

pico.rcp.platform.host
Default value: ""

This property is left blank by default for the Desktop client installation.
It may be used for specifying the IP address or the name of the Platform
host.

pico.rcp.platform.port
Default value: ""

This property specifies the port that is used for communicating with
the Platform host.

pico.synchronizer.port
Default value: ""

This property specifies the port that is used for synchronizing files from
the Platform to external ECs and ECSAs.
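
A minimal sketch of how a couple of these properties could appear in common.xml, using the default
values listed above together with the Platform address from the earlier desktop.xml examples (the
host and port are placeholders for the values used in your installation):

<property name="pico.cache.basedir" value="${mz.home}/pico-cache"/>
<property name="pico.tmpdir" value="${mz.home}/tmp"/>
<property name="pico.rcp.platform.host" value="10.0.1.33"/>
<property name="pico.rcp.platform.port" value="4111"/>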

2.2.7. Text Editor


At various places in the MediationZone® environment, editable text areas are available in order to
enter APL code. To assist in writing the code, these areas host functions available in a pop-up menu
that is displayed when you right-click anywhere in the text area.

Figure 9. Text Editor Right-Click Menu

The menu has the following options:

Font Size Sets the font size.


Cut Moves the selected text to the clipboard.
Copy Copies the selected text to the clipboard.
Paste Pastes the contents of the clipboard into the place where the insertion point has
been set.
Select All Selects all the text.
Undo Undoes your last action.
Redo Redoes the last action that you undid with Undo.
Find/Replace... Displays a dialog where chosen text may be searched for and, optionally, re-
placed.


You can also press the CTRL+H keys to perform this action.

Figure 10. Find/Replace Dialog

Quick Find Searches the code for the highlighted text.

You can also press the CTRL+F keys to perform this action.
Find Again Repeats the search for last entered text in the Find/Replace dialog.

You can also press the CTRL+G keys to perform this action.
Go to Line... Opens the Go to Line dialog where you can enter which line in the code you
want to go to. Click OK and you will be redirected to the entered line.

You can also press the CTRL+L keys to perform this action.
Show Definition If you right click on a function in the code that has been defined somewhere
else and select this option, you will be redirected to where the function has
been defined.

If the function has been defined within the same Configuration, you will simply
jump to the line where the function is defined. If the function has been defined
in another Configuration, the Configuration will be opened and you will jump
directly to the line where the function has been defined.

You can also click on a function and press the CTRL+F3 keys to perform this
action.

Note! If you have references to an external function with the same name
as a function within the current code, some problems may occur. The
Show Definition option will point to the function within the current
code, while the external function is the one that will be called during
workflow execution.

Show Usages If you right click on a function where it is defined in the code and select this
option, a dialog called Usage Viewer will open and display a list of the Con-
figurations that are using the function.

You can also select a function and press the CTRL+F4 keys to perform this
action.
UDR Assistance... Opens the UDR Internal Format Browser from which the UDR Fields may be
inserted into the code area.

You can also press the CTRL+U keys to perform this action.
MIM Assistance... Opens the MIM Browser from which the available MIM Resources may be
inserted into the code area.

You can also press the CTRL+M keys to perform this action.
Import... Imports the contents from an external text file into the editor. Note that the file
has to reside on the host where the client is running.


Export... Exports the current contents into a new file to, for instance, allow editing in
another text editor or usage in another MediationZone® system.
Use External Editor Opens the editor specified by the property mz.gui.editor.command in
the $MZ_HOME/etc/desktop.xml file.

Example 4.

Example:

mz.gui.editor.command = notepad.exe

APL Help... Opens the APL Reference Guide.


APL Code Completion Performs code completion on the current line. For more information about
Code Completion, see Section 2.2.7.1, “APL Code Completion”.

You can also press the CTRL+SPACE keys to perform this action.
Indent Adjusts the indentation of the code to make it more readable.

You can also press the CTRL+I keys to perform this action.
Jump to Pair Moves the cursor to the matching parenthesis or bracket.

You can also press the CTRL+SHIFT+P keys to perform this action.
Toggle Comments Adds or removes comment characters at the beginning of the current line or
selection.

You can also press the CTRL+7 keys to perform this action.
Surround With Adds a code template that surrounds the current line or selection:

• for Loop (CTRL+ALT+F)

• while Loop (CTRL+ALT+W)

• Debug Expression (CTRL+ALT+D)

• if Condition (CTRL+ALT+I)

• Block Comment (CTRL+ALT+B)

2.2.7.1. APL Code Completion


In order to make APL coding easier, the APL Code Completion feature will help you find and add
APL functions and UDR formats.

To access APL Code Completion, place the cursor where you want to add an APL function, press
CTRL+SPACE and select the correct function or UDR format. In order to reduce the number of hits,
type the initial characters of the APL function. The characters to the left of the cursor will be used as
a filter.

APL Code Completion covers:

• Installed APL functions.

• APL functions defined in APL Code configurations.

• APL functions created with MediationZone® Development Toolkit.


• Function blocks such as beginBatch and consume.

• Flow control statements such as while and if.

• Installed UDR formats.

• UDR formats created with MediationZone® Development Toolkit.

• User defined UDR formats.

Figure 11. APL Code Completion

2.2.8. Configuration List Editor


The Configuration List Editor appears in many Agent Configuration dialog boxes in MediationZone®
and usually looks like the example on Figure 12, “Configuration List Editor”. This table enables you
to select and list several entries that you want to include in a certain Configuration definition.

Figure 12. Configuration List Editor

Add Click to open a dialog box where you can add an item to the Configuration list.

Edit Select a row and click Edit; an Update dialog box opens and enables you to modify the
data entry.

Remove Select a row and click Remove.

Up / Down Select an entry from the Configuration list and click Up or Down to move it to an upper
or lower position.


Note! To change the order of any of the appended rows from ascending to descending, or vice
versa, click a column's heading. The new order will not be saved for the next time you open
this view.

2.2.9. UDR Browser


In places where one or more UDR types or UDR fields are to be selected, the UDR Browser is used.
See Figure 13, “UDR Internal Format Browser ”. The UDR Internal Format Browser is accessed
from the Analysis or Aggregation agents, by right-clicking in the APL Code area and selecting UDR
Assistance....

The browser contains all UDR types available in the system:

1. UDR types created in the Ultra Format Editor. These include events and sessions for the Aggreg-
ation sub-system.

2. UDR types installed with the system.

For further information about UDR types and fields, see the MediationZone® Ultra Format Management
user's guide.

Figure 13. UDR Internal Format Browser

UDR Types List of available UDR types ordered in a tree structure.

Formats created in Ultra Format Editor usually have the following structure:

folder name - configuration name - internal type name

Note! There are a number of agents, for example Diameter and Inter
Workflow, that have predefined UDR types with corresponding folder
names.

UDR Fields Displays the fields of the selected UDR Type, in a tree structure.


To ease identification, the fields are color coded:

• Optional - Italic black

• Read-only - Red

• Default - Blue

• Nested UDRs - Gray

Note! A nested UDR that is Optional appears in Italic Black.

Show Optional If enabled, fields declared as optional are displayed in black italic text.
Show Readonly Check to display read-only fields; the text appears in red.

Note! Clearing this check-box also affects the blue text entries. These are
reserved fields that you cannot modify.

Datatype If enabled, only fields that match the selected data type are displayed

2.2.9.1. Selection Modes


The browser operates in five modes depending on where it is utilized.

1. Single selection of UDR Type. The UDR Type may be chosen either by double-clicking the UDR
Type or by selecting it followed by OK or Apply. OK and double-click dismisses the dialog.

2. Multiple selections of UDR Type. Many UDR Types may be chosen at once by selecting them
followed by OK or Apply. OK or double-click dismisses the dialog.

3. Single selection of UDR fields. Same as for UDR Type, but fields are selected instead.

4. Multiple selections of UDR field. Same as for UDR Type but fields are selected instead.

5. Field input assistance. Fields may be inserted in the target text field by double-clicking them or
selecting them followed by Apply. The OK button is not available.

2.2.10. Meta Information Model


Some agents in a workflow need information from the workflow or other agents in order to operate.
For instance, an agent that produces a file might need the source file name and the number of processed
UDRs to be used in the outgoing file name. In order to satisfy these requirements, MediationZone®
uses a model called Meta Information Model (MIM).

MIM is based on the fact that individual agents may supply information during run-time that other
agents may need to use. The MIM information is used in various parts of MediationZone® , for instance,
when selecting which MIM resources to use in a file name, or when selecting what data to identify
UDRs delivered to outlast.

MediationZone® uses Java Management Extensions (JMX) to monitor MIM tree attributes in running
workflows. For more information, refer to Section 8.3, “Workflow Monitoring”.


Figure 14. A MIM Tree Example

MIM resources for each agent will have their values assigned at different points in time, depending on type. As an
example, the Disk collection agent publishes the MIM resource Source Filename which is set at Begin
Batch. The agent will put the name of the collected file in this resource before it starts collecting the
file.

There are in total four types of MIMs:

Batch Batch MIMs are dynamic values, populated during batch processing. An example of such a
value is outbound UDRs.
Global A global MIM value can be accessed at any time during the execution phase of a workflow,
for instance static values such as the agent Name.
Header Header MIM values are populated when a batch is received for processing (an agent emits
Begin Batch). For example Source Filename (published by the Disk collection agent).
Trailer Trailer MIM values are populated after a batch is processed (an agent emits End Batch). An
example of such a value is Target Filename (published by the Disk forwarding agent).

By default, all agents may publish the following MIM resources depending on their introspection types.

Agent Name This MIM parameter contains the name of the agent as defined in the
Workflow Editor. All agents publish this resource. The value is set when
the workflow starts executing.

Agent Name is defined as a global MIM context type.


<route name> UDRs (or Bytes) This MIM parameter contains the number of UDRs (or Bytes) routed
on the link by the agent. The value is updated continuously during processing.
This MIM is not valid for forwarding agents.

<route name> UDRs (or Bytes) is defined as a batch MIM context type.
<route name> Queue Size This MIM parameter contains the number of objects that are currently in
the route's queue.

<route name> Queue Full This MIM parameter contains the number of queue state changes and the
Count value is updated each time a route's queue enters "full" state.


Inbound UDRs (or Bytes) This MIM parameter contains the number of incoming UDRs (or Bytes)
since last Begin Batch. The value is updated continuously during batch
processing. This MIM is not valid for collection agents.

Inbound UDRs (or Bytes) is defined as a batch MIM context type.


Outbound UDRs (or Bytes) This MIM parameter contains the number of outgoing UDRs (or Bytes)
since last Begin Batch. The value is updated continuously during batch
processing. This MIM is not valid for forwarding agents.

Outbound UDRs (or Bytes) is defined as a batch MIM context type.

Note! Some MIM resources will not be available until the agent to which they belong has been
configured.

There are also pico specific MIM resources representing information about the picos' JVMs. These
are:

Available CPUs This MIM parameter states the number of processors that are available for
the JVM.

Available CPUs is defined as a global MIM context type.


System Load Average This MIM parameter states the average system load during the last minute.
The system load is the sum of the number of runnable entities that are queued
or running on the available processors, and this value will show the average
of this sum for the last minute. This information is useful for indicating the
current system load and may be queried frequently.

System Load Average is defined as a global MIM context type.


Heap Memory Used Percentage This MIM parameter states the JVM's heap usage, i.e. the amount of
memory that is currently used by the JVM, in percent.

Heap Memory Bytes Used is defined as a global MIM context type.

Non Heap Memory Used Percentage This MIM parameter states the amount of memory outside of the
heap (non-heap memory) that is currently used by the JVM, in percent.

Non Heap Memory Bytes Used is defined as a global MIM context type.

There are workflow specific MIM resources representing information of a running workflow as well.
These are:

Batch Cancelled This MIM parameter states if the current batch has been cancelled.

Batch Cancelled is defined as a header MIM context type.


Batch Count This MIM parameter contains a unique sequence number that will be associated
with each batch processed by the workflow. This value is increased by 1 up to
2^63 and is saved between workflow invocations.

Batch Count is defined as a header MIM context type.


Batch Duration This MIM parameter contains the time it took to process a batch. This value is
updated during processing.

Batch Duration is defined as a batch MIM context type.


Batch End Time This MIM parameter contains the end time for the processing of a batch.


Batch End Time is defined as a trailer MIM context type.


Batch Start Time This MIM parameter contains the start time for the processing of a batch.

Batch Start Time is defined as a header MIM context type.


Execution Context This MIM parameter contains the name of the Execution Context hosting the
current workflow.

Execution Context is defined as a global MIM context type.


Start Time This MIM parameter contains the date and time when the workflow was started.
The value is set when the workflow is activated.

Start Time is defined as a global MIM context type.


Throughput By default, Throughput is the volume-per-time processing rate of a particular
workflow or agent.

Transaction ID Each batch closed by a MediationZone® workflow will receive a unique
transaction ID; this also applies to cancelled batches. This MIM parameter contains the unique
transaction ID.

Transaction ID is defined as a header MIM context type.

Workflow ID This MIM parameter contains the unique identification name of every workflow
in MediationZone® .

Workflow Name This MIM parameter contains the name of the current workflow. The value is set
when the workflow is activated.

Workflow Name is defined as a global MIM context type.

2.2.10.1. MIM Browser


At places where access to MIM resources is needed, the MIM Browser is utilized. The MIM Browser
is accessed from the Analysis or Aggregation agent, by right-clicking in the APL Code area and se-
lecting MIM Assistance....

The browser contains all MIM resources available in the workflow:

Figure 15. The MIM Browser


The available MIM resources are displayed, ordered in a tree structure. The MIM resource may be
chosen either by double-clicking the MIM resource or by selecting it followed by Apply. Cancel dis-
misses the dialog.

2.3. Desktop User Interface


MediationZone® Desktop provides a tabbed user interface, which allows you to have more than one
Configuration open in the same window and to easily switch between different Configurations.

Figure 16. Desktop Window

The MediationZone® Desktop window is organized into two main sections:

• The left part of the Desktop window includes the Configuration Navigator pane. The Configuration
Navigator holds all Configurations in MediationZone® and enables easy navigation between the
different Configurations. For further information, see Section 2.3.3, “Configuration Navigator”.

• The right part of the Desktop window holds all Configurations, Inspectors and Tools that have
been opened, each of them shown in a separate tab.

• Configurations - A Configuration is a configurable MediationZone® item, like, for example, a
workflow or a Database profile. For further information about MediationZone® configurations,
see Section 3, “Configuration”.

• Inspection - When Workflows are executed, the agents may generate various kinds of data, such
as logging errors into the System Log, or sending erroneous data to the Error Correction System
(ECS). The inspectors allow the user to view such information and are further described in Sec-
tion 6, “Inspection”.

• Tools - MediationZone® provides different tools to, for example, view logs, statistics, and pico
instance information, and to import and export Configurations. The tools are described in Section 7,
“Tools”.

For information on how to create a new configuration, and how to open an inspector or a tool, refer
to Section 2.3.2.2, “Desktop Standard Buttons”.


2.3.1. Tabs
Configurations and tools are opened in separate tabs, in the right part of the MediationZone® Desktop
window. However, dialogs that are opened from a tool or configuration, e g an Agent configuration
dialog or the MIM Browser, will be opened in a dialog box and not in another tab.

To the right of the list with tabs, you have a button for viewing open tabs, which may be useful
in case you have many configurations open at the same time.

Click on this button and a menu will open, containing all the currently opened tabs.

Figure 17.

When exiting from Desktop, all tabs will be closed, and unless you have set the property
mz.gui.restart.tabs to true, they will not be remembered and restored the next time the
Desktop is started. See section Section 2.2.6.1.1, “Default Properties in Desktop.xml” for further in-
formation.

To reorder the tabs, click a tab and drag it to a different position along the top of the window.

To move a tab to a separate Desktop window, click it and then drag it outside the current window.
This can be useful when running and analyzing several workflows in the workflow monitor, to be able
to view the monitors side by side. If there is only one tab open in the Desktop and the tab is moved to
a separate window, the original Desktop window will be closed. A Tool, Inspector or Configuration
can only be open in one tab at a time.

It is possible to move tabs between several Desktop windows.

2.3.2. Menus and Buttons


The MediationZone® Desktop user interface includes the following standard features:

Desktop Main Menus
The Desktop main menus are found at the top of the Desktop window. The menus are dynamic and
change according to the type of Configuration, Inspector or Tool that has been opened in the currently
displayed tab. Refer to Section 3, “Configuration”, Section 6, “Inspection” and Section 7, “Tools”
for more details about the specific menus and menu items. For a description of the Desktop standard
menus, see Section 2.3.2.1, “Desktop Standard Menus”.

The following figure shows the main menus that are visible for a workflow configuration.

Figure 18. The Desktop Main Menus for a Workflow Configuration

Desktop Buttons
The Desktop buttons are located in the upper left part of the MediationZone® Desktop window.
Refer to Section 2.3.2.2, “Desktop Standard Buttons” for a description of the buttons.

Figure 19. The Desktop Buttons

Tab Right-Click Menu
There are different closing options available for a tab and these are selected from a right-click menu.
See Section 2.3.2.3, “Tab Right-Click Menu” for more information.

Figure 20. The Tab Right-Click Menu

Tab Button Panel
The button panel is visible at the top of a tab. It is dynamic and changes according to the type of
Configuration, Inspector or Tool that has been opened in the currently displayed tab. For a description
of the specific buttons, refer to Section 3, “Configuration”, Section 6, “Inspection”, and Section 7,
“Tools”.

The following figure shows the button panel visible for a workflow configuration.

Figure 21. The Tab Button Panel for a Workflow Configuration

2.3.2.1. Desktop Standard Menus


The following figure shows the Desktop main menus and menu items that are shown in MediationZone®
Desktop when starting it for the first time and when there are no tabs open yet:

Figure 22. The Desktop Standard Main Menus

2.3.2.1.1. The File Menu

Item Description
Change Password... From the File menu, select Change Password and the Change Password dialog
box opens.

Figure 23. The Change Password Dialog Box

Exit Select to exit from MediationZone® Desktop. To log in as another user, you have
to exit and then start the MediationZone® Desktop again.

2.3.2.1.2. The Help Menu

Shows all help topics for your MediationZone® installation.

Note! If you press F1, you will open the relevant topic for the dialog or window you currently
have active. However, for the various configurations in the Configuration menu, you may have
to scroll to the right section.


2.3.2.2. Desktop Standard Buttons


These are the Desktop buttons that are located in the upper left part of the MediationZone® Desktop
window.

Show/Hide Configuration Navigator To show or hide the Configuration Navigator pane in the left
area of the Desktop. The Configuration Navigator is described in more detail in Section 2.3.3,
“Configuration Navigator”. You can also toggle this option by pressing CTRL+1.

New Configuration To create a new MediationZone® configuration. The configuration is opened in a
tab in the right part of the Desktop window. The different configuration types that can be created are
described in more detail in Section 3, “Configuration”. You can press CTRL+F1 to open this menu.

Inspection To open a MediationZone® Inspector. The Inspector is opened in a tab in the right part of
the Desktop window. The MediationZone® Inspectors are described in more detail in Section 6,
“Inspection”. You can press CTRL+F2 to open this menu.

Tools To open a MediationZone® Tool. The Tool is opened in a tab in the right part of the Desktop
window. The MediationZone® Tools are described in more detail in Section 7, “Tools”. You can press
CTRL+F3 to open this menu.

If you have developed your own DTK plugins, they are available in the Extensions menu, which you
can open by pressing CTRL+F4.

Hint! You can also activate the button panel by pressing the CTRL key twice. You can then use
the arrow keys to move between the different buttons in the panel.

2.3.2.3. Tab Right-Click Menu


Right-click at the top of a tab and select one of the following options:

Close Select this option to close the selected tab.


Close Other Select this option to close all other tabs in the MediationZone® Desktop window.
Close All Select this option to close all tabs in the MediationZone® Desktop window.

2.3.3. Configuration Navigator


The Configuration Navigator gives a view of all configurations in MediationZone® and makes it
possible to easily navigate between different configurations.

In the Configuration Navigator you can also filter which Configurations are shown, by selecting
Configurations of a specific type. The Configuration Navigator can be hidden or visible. By default,
it is visible and all Configurations are displayed.

The Configuration Navigator supports a set of operations that can be performed for the Configurations
by using the right-click menu. For each Configuration you can also open a Properties dialog where
permissions can be set and where you can view history, references and basic information. Refer to
Section 7.3.5, “Properties” for more information.

To show or hide the Configuration Navigator pane, click the Show/Hide Configuration
Navigator button in the upper left part of the Desktop.


Figure 24. The Configuration Navigator

In the Configuration Navigator pane there are two standard folders:

• Default - where all configurations are stored if no other folder is specified when saving the config-
uration.

• SystemTask - includes workflows for performing different background routines. For further inform-
ation, refer to Section 4.1.1.4, “System Task Workflows”.

The Default and SystemTask folders cannot be renamed or deleted.

Each folder listed in the Configuration Navigator pane has a number attached to its name. This number
indicates how many Configurations are stored in the folder.

2.3.3.1. Right-Click Menu


In this section, the different options available in the right-click menu of the Configuration Navigator
are described.

Right-click a Configuration and select one of the following options:

New Folder... Select this option to create a new folder.

Open Configuration(s)... Available when at least one Configuration is selected.

Select this option to open the selected Configuration(s).

Export Configuration(s) Available when at least one Configuration is selected.

Select this option to export the selected Configurations. The System Exporter
window will open with the Configurations pre-selected.

Note! When exporting from the Configuration Navigator, Configuration
dependencies are not automatically selected. This can be achieved by
selecting the "Select Dependencies" check box in the System Exporter
window. For further information see Section 7.9, “System Exporter”.

Cut Select this option to put one or more Configurations on the clipboard for
moving the Configuration to another location. Select the menu option Paste
in the folder where the Configurations should be stored.

This option is not applicable if the Configuration is locked. For further in-
formation see Section 2.1.2, “Locks”.
Copy Select this option to put one or more Configurations on the clipboard for
copying the Configurations to another location. Select the menu option Paste
in the folder where the copied Configurations should be stored.
Paste Select this option to store Configurations that have been cut or copied to
the clipboard into a folder.

Delete... Select this option to delete the selected Configuration(s). If the Configuration
is referenced by another Configuration, a warning message will be displayed,
informing you that you cannot remove the Configuration. For further inform-
ation see Section 7.3.5.3, “The References Tab”.
Rename... Select this option to change the name of the selected Configuration. Take
special precaution when renaming a Configuration. If, for example, an APL
script is renamed, workflows that are using this script will become invalid.
This is especially important to know when renaming folders containing
many ultra format Configurations or APL. Renaming a folder with ultra
formats or APL Configurations will make all referring Configurations inval-
id.
Encrypt... Select this option to encrypt the selected Configurations.
Decrypt... Select this option to decrypt the selected Configurations.
Validate... Select this option to validate the Configuration. A validation message will
be shown to the user.

Show Properties Select this option to launch the Properties dialog for the selected Configur-
ation. For further information, see Section 2.3.3.2, “Properties”.

Documentation Select this option to launch the Documentation dialog for the selected
Configuration. For further information, see Section 2.3.3.3, “Documenta-
tion”.

2.3.3.2. Properties
To open the Properties dialog, right-click on a Configuration and then select Show Properties.


Figure 25. The Properties Dialog Box

This dialog contains four different tabs: Basic, which contains basic information about the Configuration;
Permissions, where you set permissions for different users; References, where you can see which
other Configurations are referenced by the selected Configuration, or refer to the selected Configuration;
and History, which displays the revision history for the Configuration. The Basic tab is displayed by
default.

2.3.3.2.1. The Basic Tab

The Basic tab is the default tab in the Properties dialog and contains the following information:

Name Displays the name of the Configuration.


Type Displays the type of Configuration.
Key Displays the internal key used by MediationZone® to identify the Configuration.
Folder Displays the name of the folder in which the Configuration is located.
Version Displays the version number of the Configuration. See the History tab for further information about the different versions.
Permissions Displays the permissions granted to the current user of the Configuration. Permissions
are shown as R (Read), W (Write) and X (eXecute). If the Configuration is encrypted,
an E will also be added. For further information about permissions, see Section 7.3.5.2,
“The Permissions Tab”.
Owner Displays the username of the user that created the Configuration. The owner can:

• Read, modify (write), and execute the Configuration

• Modify the permissions of user groups to read, modify, and execute the Configura-
tion.

Modified by Displays the user name of the user that made the last modifications to the Configuration.
Modified Displays the date when the Configuration was last modified.

If you want to use the information somewhere else, you can highlight the information and press CTRL-C to copy it to the clipboard.

2.3.3.2.2. The Permissions Tab

The Permissions tab contains settings for what different user groups are allowed to do with the Con-
figuration:


Figure 26. The Permissions Tab

As access permissions are assigned to user groups, and not individual users, it is important to make
sure that the users are included in the correct user groups to allow access to different Configurations.

R W X E Permission Description
R - - - Allowed only to view the Configuration, given that the
user is granted access to the application.
- W - - Allowed to edit and delete the Configuration.
- - X - Allowed only to execute the Configuration.
R W - - Allowed to view, edit and delete the Configuration, given
that the user is granted access to the application.
- W X - Allowed to edit, delete and execute the Configuration.
R - X - Allowed to view and execute the Configuration, given
that the user is granted access to the application.
R W X - Full access.
- - - E Encrypted.

2.3.3.2.3. The References Tab

The References tab contains information about which other Configurations the current Configuration refers to, and which other Configurations refer to the current Configuration:

Figure 27. The References Tab


The References tab contains two sub-tabs: Used By, which displays all the Configurations that use the current Configuration, and Uses, which displays all the Configurations that the current Configuration uses.

If you want to edit any of the Configurations, you can double click on the Configuration to open it for
editing.

2.3.3.2.4. The History Tab

The History tab contains version information for the Configuration:

Figure 28. The History Tab

In the version table, the following columns are included:

Version Displays the version number.


Modified Date Displays the date and time when the version was saved.
Modified By Displays the user name of the user that saved the version.
Comment Displays any comments for the version.

If you want to clear the history for the Configuration, click on the Clear Configuration History button.
The version number will not be affected by this.

2.3.3.3. Documentation
To open the Documentation dialog, right-click on a configuration and then select Documentation.

Figure 29. The Documentation editor

In this dialog, you can provide information on the selected configuration, for example, a description
and the purpose of the configuration. You can use markdown syntax if preferred. The text entered is
then included in the automated documentation that you can generate using the Documentation Generator tool. When you have completed the text you want to include, click OK to save. For further information on the Documentation Generator tool, see Section 7.5, “Documentation Generator”.

2.3.4. Status Bar


At the bottom of the main window a status bar is shown. It is divided into four sections, which contain
information about your system.

Figure 30. The Status Bar

Actions The first section shows desktop actions. It could either be a text message with
user information such as "Saved myWorkflow" or a progress bar when data is
being loaded from the platform to the desktop.
Operations Information An icon for displaying the status of the Configuration Monitor. While operations are being performed, for example when workflows are in building state, the icon will indicate that the operations are in progress. If any warnings have been detected during the operations, a warning sign is shown on top of the Configuration Monitor icon. When pressing the icon, the Configuration Monitor will be displayed. For more information regarding the Configuration Monitor, see Section 7.4, “Configuration Monitor”.
User Specifies the user that is logged in to the desktop.
System Information Specifies the system name as well as the host and port that the desktop is connected to.


3. Configuration
This section includes a detailed description of the following Configuration types:

• Alarm Detection

• Audit Profile

• Database Profile

• Redis Profile

• External Reference Profile

• Workflow

• Workflow Group

Other Configuration types are described separately in their respective user's guide.

To create a new MediationZone® configuration, click the New Configuration button. To open an existing MediationZone® Configuration, double-click a Configuration in the Configuration Navigator, or right-click a Configuration and then select Open Configuration(s).... The Configuration will be visible in a tab in the right part of the Desktop window.

3.1. Menus and Buttons


3.1.1. Configuration Menus
The contents of the menus in the menu bar may change depending on which Configuration type has been opened in the currently displayed tab. The following sections describe the standard menus for Configurations. Some Configuration types may have additional menus or menu items and those will be described for each Configuration, in this document or in the respective user guide.

The Desktop standard menus are described in Section 2.3.2.1, “Desktop Standard Menus”.

3.1.1.1. The File Menu

Item Description
New Creates a new Configuration that will be visible in a new tab. You can only create
a Configuration of the same kind as the one in the tab you are working in. To create
another type of Configuration, click the New Configuration button in the upper
left part of the MediationZone® Desktop window.
Open... Opens a saved Configuration that will be visible in a new tab. You can only open
a Configuration of the same kind as the one in the tab you are working in. To open
another type of Configuration, double-click a Configuration in the Configuration
Navigator, or right-click a Configuration and then select Open Configuration(s)....
Save Saves the Configuration.

After clicking Save, a dialog box opens. In the Version Comment text box, type
a description of the changes that you have made, then click OK. This information
will be visible in the Historical Configurations panel. For further information, see Section 3.1.1.3, “The View Menu”.


Note! Every time you save a Configuration, MediationZone® performs a validation of the Configuration integrity. If the Configuration is not complete, it is saved as "invalid". An invalid Configuration is either one that has not been fully configured, or one with an external dependency that is either invalid or missing. You identify invalid Configurations by the red "X" on top of their application icon.

Save As... Select to save the Configuration with a new name.

Note! Use only a-z, A-Z, 0-9, "-" and "_" to name a Configuration.

Close Select to close the tab that includes the Configuration. If you have not saved the
current Configuration before clicking Close, a pop-up message will remind you
to do so.
Change Password... After clicking Change Password..., the Change Password dialog box opens.

Figure 31. The Change Password Dialog Box

Exit Select to exit from MediationZone® Desktop. To log in as another user, you have to exit and then start the MediationZone® Desktop again.

3.1.1.2. The Edit Menu

Item Description
Set Permissions... Select to set the owner of the Configuration as well as Read, Write and Execute permissions for the groups accessing the Configuration. For further information, see Section 7.2, “Access Controller”.

Figure 32. Setting Permissions for a Configuration

Validate Select to check that the Configuration is valid.

3.1.1.3. The View Menu

Item Description


History Each time a Configuration is saved, a new version is created. Many versions of a Config-
uration may exist but only the last version can be modified and executed. The old versions
are kept for log and rollback reasons. Select History to examine old Configurations in
the Historical Configurations panel. The panel will appear at the bottom of the Config-
uration tab and holds a list of all versions. Arrow buttons are used to step back and for-
ward between the different versions. Rollback to an old version of a Configuration is
handled by opening and saving the old version. A comment is automatically added,
stating that the current version was created from a historic one.

Figure 33. A Configuration History

References Click to see the Reference Viewer listing references to and from the active Configuration.
The Reference Viewer includes the following tabs:

• Used By: Displays a list of other Configurations that refer to the Configuration. For
example: a workflow group that refers to a workflow.

• Uses: Displays a list of other configurations that the Configuration refers to. For ex-
ample: a workflow configuration that refers to a specific profile.

• Access: Displays the group of users that may access the configuration, and the user
that created (owns) the configuration.

3.1.2. Configuration Buttons


The contents of the button panel may change depending on which configuration type has been opened in the currently displayed tab. The following sections describe the standard buttons for Configurations. Some Configuration types may have additional buttons and those will be described for each Configuration, in this document or in the respective user guide.

Button Description
New Click to create a new configuration that will be visible in a new tab. You can
only create a configuration of the same kind as the one in the tab you are working
in. To create another type of configuration, click the New Configuration button
in the upper left part of the MediationZone® Desktop window.
Open... Click to open a saved Configuration that will be visible in a new tab. You can
only open a Configuration of the same kind as the one in the tab you are working
in. To open another type of Configuration, double-click a Configuration in the
Configuration Navigator, or right-click a Configuration and then select Open
Configuration(s)....
Save Click to save the Configuration.

Save As Click to save the Configuration with a new name.


Set Permissions... Click to set the owner of the Configuration as well as Read, Write and Execute permissions for the groups accessing the Configuration. For further information, see Section 7.2, “Access Controller”.

Figure 34. Setting Permissions for a Configuration

Validate Click to check that the Configuration is valid.

References Click to see the Reference Viewer listing references to and from the active
Configuration. The Reference Viewer includes the following tabs:

• Used By: Displays a list of other Configurations that refer to the Configuration.
For example: a workflow group that refers to a workflow.

• Uses: Displays a list of other configurations that the configuration refers to.
For example: a workflow configuration that refers to a specific profile.

• Access: Displays the group of users that may access the Configuration, and
the user that created (owns) the configuration.

3.2. Alarm Detection


An Alarm Detection configuration enables you to define criteria for generation of alarm messages. You select a condition, or combine a set of conditions, that within specific limits generates an alarm message. To monitor the system alarms you use the MediationZone® Web Interface. Note that MediationZone® also enables you to deliver alarm messages to SNMP monitoring systems.

An alarm can be in either one of two states: new (open) or closed. An open alarm is an indication of a certain occurrence or situation that has not yet been resolved. A closed alarm is a resolved indication.

To create a new Alarm Detection configuration, click the New Configuration button in the upper left
part of the MediationZone® Desktop window, and then select Alarm Detection from the menu.

To open an existing Alarm Detection Configuration, double-click the Configuration in the Configur-
ation Navigator, or right-click a Configuration and then select Open Configuration(s)....

3.2.1. Alarm Detection Menus


The contents of the menus in the menu bar may change depending on which Configuration type has been opened in the currently displayed tab. Alarm Detection uses the standard menu items that are visible for all Configurations, and these are described in Section 3.1.1, “Configuration Menus”.

There is one menu item that is specific for Alarm Detection, and it is described in the following section.


3.2.1.1. The Edit Menu

Item Description
Workflow Alarm Value Names... Select to define a variable to use in the APL code. See the APL Reference Guide and Section 3.2.3.1.5, “Workflow Alarm Value” for further information.

3.2.2. Alarm Detection Buttons


The contents of the button panel may change depending on which Configuration type has been opened in the currently displayed tab. Alarm Detection uses the standard buttons that are visible for all Configurations, and these are described in Section 3.1.2, “Configuration Buttons”.

3.2.3. Defining an Alarm Detection


An Alarm Detection definition is made up of:

• A condition, or a set of conditions, see Section 3.2.3.1, “Alarm Conditions”

• An object such as host, pico instance, or workflow, that the alarm should supervise

• The parameter that you want the alarm to supervise. For example: Statistics value.

• Time- and value limits of supervision

To create a valid alarm detection configuration, make sure that:

• The Alarm Detection includes at least one condition.

• Any two conditions within an alarm guard the same object: WF, Host, or Pico Instance.

• Any two conditions are set to the same time interval criteria.

To define an alarm:

1. Create an Alarm Detection configuration by clicking the New Configuration button in the upper
left part of the MediationZone® Desktop window, and then selecting Alarm Detection from the
menu.

Figure 35. The Alarm Detection


2. Click on the Edit menu and select the Validate option to check if your Configuration is valid.

3. Click on the Edit menu and select the Workflow Alarm Value Names option to define a variable
you can use in the APL code, see the APL Reference Guide, and in the Workflow Alarm condition,
see Section 3.2.3.1.5, “Workflow Alarm Value”.

4. Enter a statement that describes the Alarm Detection that you are defining in the Description field.

5. Select the importance priority the alarm should have in the Severity drop-down list.

6. Use the Alarm Detection Enabled check box to turn alarm detection on or off.

7. At the bottom of the Alarm Detection Configuration click on the Add button.

The Add Alarm Condition dialog box opens.

Figure 36. The Add Alarm Condition

8. Select a condition in the Alarm Condition drop-down list.

3.2.3.1. Alarm Conditions


The Alarm conditions enable you to define specific situations or events over which you want the system
to produce an alarm. You configure a condition to produce an alarm whenever a certain behaviour
occurs, within specific limits.

Note!

1. An alarm is generated only if ALL conditions in the Alarm Detection are met.

2. The Alarm Condition limits are reset:

• Every time you restart the MediationZone® platform

• Every time you save the alarm Configuration

• When you resolve the alarm

The Alarm Conditions that you can choose from are:


• Host Statistic Value

• System Event

• Pico Instance Statistic Value

• Unreachable Execution Context

• Workflow Alarm Value

• Workflow Execution Time

• Workflow Group Execution Time

• Workflow Throughput

3.2.3.1.1. Host Statistic Value

The Host Statistic Value condition enables you to configure an alarm detection for the Host Statistic
parameters. For further information see Section 7.12.1, “Host Statistics”.

Figure 37. The Host Statistic Value Condition

Host Select a host server from the drop-down list


Statistic Value Select from the drop-down list the parameter that you want the alarm to watch over.
For detailed description of every Statistic Value see Section 7.12, “System Statistics”
Limits Select a limit, either Exceeds or Falls below, upon which the alarm should be triggered. Check During Last to specify the time frame during which the Limits value should be compared. If a match is detected, an alarm is invoked.


Example 5.

Note! The parameters in the following example do not apply to any specific system and are
presented here only to enhance understanding of the alarm condition.

You want the system to generate a warning if the primary host is being overworked.

1. Configure an Alarm Detection with the Host Statistic Value condition.

2. Select the Statistic Value Swapped in from Disk.

3. Enter a limit of 1200 swaps-a-second, during the last 3 hours.

Figure 38. Configure an Alarm Detection

Figure 39. Configure the Alarm Condition

The Alarm will be triggered only if the Statistic Value has been higher than 1200 throughout
the last 3 hours. Note that if a momentary drop in value has occurred during the last 3 hours,
the alarm will not be triggered.

3.2.3.1.2. System Event

The System Event condition enables you to set up an Alarm Detection for the various MediationZone® Event types.


Figure 40. The System Event Condition

Type Select an event-related reason for an alarm to be invoked. For detailed description of every
event type see Section 5.5, “Event Types”.
Filter Use this table to define a filter of criteria for the alarm messages that you are interested in.

To define an entry, double-click on the row.

The Edit Match Value dialog box opens. Click the Add button to add a value.
Limits See Section 3.2.3.1.1, “Host Statistic Value”.


Example 6.

Note: The parameters in the following example do not apply to any specific system and are
presented here only to enhance understanding of the alarm condition.

A Telecom provider wants the MediationZone® system to generate an alarm if a certain workflow fails to write to ECS more than 3 times during the last 24 hours.

1. Configure an Alarm Detection that applies the System Event condition.

Figure 41. Configure an Alarm Detection

2. On the Edit Alarm Condition dialog box, from the Event Type drop-down list, select Workflow
State Event.

3. On the Filter table double-click Workflow Name; the Edit Match Value dialog box opens.

4. Click Add to browse and look for the specific workflow.

5. Enter a limit of Occurred more than 3 times during the last 24 hours.

Figure 42. Select an Alarm Condition

The Alarm will be triggered by every 4th occurrence of a "Workflow State Event" during the
last 24 hours.


3.2.3.1.3. Pico Instance Statistic Value

The Pico Instance Statistic Value condition enables you to configure an Alarm Detection that guards
the Pico Instance statistic value of a specific EC. For further information about the Pico Instance see
Section 7.8, “Pico Viewer”.

Figure 43. The Pico Instance Statistic Value Condition

Pico Instance From the drop-down list select the Pico Instance of which you want to collect
statistical data.
Statistic Value See Section 7.12.2, “Pico Instance”


Example 7.

Note! The parameters in the following example do not apply to any specific system and
are presented here only to enhance understanding of the alarm condition.

A Telecom provider wants the system to generate an alarm if the following two events occur
simultaneously:

• The relevant Pico Instance (EC) memory is overloaded.

• Too many files are open on that same particular Pico Instance.

Figure 44. Configure an Alarm Detection

1. Configure an Alarm Detection that supervises EC1 with the Pico Instance Statistic Value
condition. Use this condition twice:

• With the Used Memory statistic value

• With the Open Files Count statistic value

2. Select the Alarm Condition Pico Instance Statistic Value.

Figure 45. Select an Alarm Condition

3. From the Statistic Value drop-down list select Used Memory.


4. Enter a limit of 900000 KB with, notably, no time limit. This means that whenever this limit is exceeded, AND the other conditions are met, an alarm is generated.

5. From the Alarm Detection dialog select the alarm condition Pico Instance Statistic Value
once again.

Figure 46. Select Another Alarm Condition

6. This time use the statistic value Open Files Count.

7. Enter a limit of 10000 files, without any time limit.

An Alarm is triggered by every simultaneous occurrence of overloaded memory on EC1 AND too many open files, at any time.

3.2.3.1.4. Unreachable Execution Context

The Unreachable Execution Context condition enables you to configure an Alarm Detection that will alert you if the connection between the platform and the EC that the alarm supervises fails.

Figure 47. The Unreachable Execution Context Condition

Pico Instance See Section 3.2.3.1.3, “Pico Instance Statistic Value”.

Note: Selecting Any from the drop-down list applies the condition to
all the clients.
Unreachable due to normal shutdown Check to invoke an alarm whenever the connection between the platform and the client fails due to a normal shutdown of the client.


Example 8.

Note! The parameters in the following example do not apply to any specific system and
are presented here only to enhance understanding of the alarm condition.

A telecom provider wants the system to generate an alarm if connection to any EC cannot be
re-established within 10 minutes.

1. Configure an Alarm Detection that uses the Unreachable Execution Context condition.

Figure 48. Configure an Alarm Detection

2. From the Pico Instance drop-down list select Any.

Figure 49. Define the Alarm Condition

3. Enter the time limit of During the last 10 minutes.

The Alarm will be triggered whenever the system detects a loss of connection between the
platform and one of its ECs, for a period that is longer than 10 minutes.

3.2.3.1.5. Workflow Alarm Value

The Workflow Alarm Value condition is a customizable alarm condition. It enables you to have the Alarm Detection watch over a variable that you create and assign through the APL code. To apply the Workflow Alarm Value condition, use the following guidelines:


• Create a variable

• Assign the variable with a value

• Setup the Workflow Alarm Value condition

To Create a Variable name:

1. From the Alarm Detection Editor Edit menu, select Workflow Alarm Value Names; The Workflow
Alarm Value dialog box opens.

2. Click the Add button and enter a variable name. For example: CountBillingFiles.

3. Click OK and then close the Workflow Alarm Value dialog box.

To Assign a Value to the Value Name:

In the APL code, include the dispatchAlarmValue command. For example:

consume {
    dispatchAlarmValue("CountBillingFiles",1);
    udrRoute(input);
}

To Configure the Workflow Alarm Value Condition:

1. At the bottom of the Alarm Detection Configuration, click Add; the Add Alarm Condition dialog
box opens.

2. From the Alarm Condition drop-down list select Workflow Alarm Value.

3. From the Value drop-down list select the name of the variable that you created.

4. Click Browse to select the Workflow that the Alarm Detection should guard.

5. Configure the Limits according to the description of Figure 50, “The Workflow Alarm Value” and
click OK.

Figure 50. The Workflow Alarm Value

Value Select an alarm value from the drop-down list.


Workflow Click Browse to enter the workflow instance(s) that you want to apply the alarm to
Limits Summation: Check to add up the dispatchAlarmValue variable (CountBillingFiles in Figure 50, “The Workflow Alarm Value”) whenever it is invoked. Alarm Detector compares this total value with the alarm limit (exceeds or falls below), and generates an alarm message accordingly.

Note: Checking Summation means that the During last entry refers to the time period
during which a sum is added up. Once the set period has ended, that sum is compared
with the limit value.

For All workflows: Check to add up the values (see Summation above) of all the
workflows that the alarm supervises. Alarm Detector compares this total value with
the alarm limit (exceeds or falls below), and generates an alarm message accordingly.
Note: Can be checked only when workflow is set to Any.

For further information about Limits see Section 3.2.3.1.1, “Host Statistic Value”.

3.2.3.1.6. Workflow Execution Time

The Workflow Execution Time condition enables you to generate an alarm whenever the execution time of a particular workflow, or of all workflows, exceeds or falls below the time limit that you specify.

Figure 51. The Workflow Execution Time

Workflow The default workflow value is Any. Use this value when you want to apply the condition
to all the workflows. Otherwise, click Browse to select a workflow that you apply the
condition to.


Example 9.

Note: The parameters in the following example do not apply to any specific system and are
presented here only to enhance understanding of the alarm condition.

A telecom provider wants the system to identify a workflow that has recently run out of input,
and to generate an alarm that warns about a too-short processing time.

1. Configure an Alarm Detection to use the Workflow Execution Time condition.

Figure 52. Configure an Alarm Detection

2. Click Browse; the Workflow Instance Selection dialog box opens.

3. At the bottom of the dialog box click Any.

4. Set a limit of Falls below 2 seconds.

Figure 53. Configure the Alarm Condition

An alarm is generated whenever an active workflow seems to process data too fast (in less than
2 seconds).

3.2.3.1.7. Workflow Group Execution Time

The Workflow Group Execution Time alarm condition enables you to generate an alarm whenever the
execution time of a workflow group exceeds or falls below the time limit that you specify.


Figure 54. The Workflow Group Execution Time

Workflow Group Click Browse to enter the address of the workflow group to which you want to
apply the alarm


Example 10.

Note! The parameters in the following example do not apply to any specific system and
are presented here only to enhance understanding of the alarm condition.

You want the system to generate an alarm if a billing workflow group has been active longer
than 3 hours.

1. Configure an Alarm Detection that uses the Workflow Group Execution Time condition.

Figure 55. Configure an Alarm Detection

2. On the Edit Alarm Condition dialog box click Browse to enter the workflow group you want
the alarm detection to supervise.

Figure 56. Configure the Alarm Condition

3. Enter a limit of Exceeds 3 hours.

The Alarm will be triggered if the workflow group has been active longer than 3 hours.

3.2.3.1.8. Workflow Throughput

The Workflow Throughput alarm condition enables you to create an alarm if the volume-per-time
processing rate of a particular workflow exceeds, or falls below, the throughput limit that you specify.


Figure 57. The Workflow Throughput

Workflow Select a workflow whose throughput value (the processing speed) is to be supervised. For further information about the Throughput value calculation, see Throughput Calculation. An alarm is generated if the Throughput value is not within the condition limits.
Limits For information about Limits see Section 3.2.3.1.1, “Host Statistic Value”.


Example 11.

Note! The parameters in the following example do not apply to any specific system and
are presented here only to enhance understanding of the alarm condition.

You want the system to warn you on detection of decreased processing rate.

1. Configure an Alarm Detection to use the Workflow throughput condition.

Figure 58. Configure an Alarm Detection

2. On the Edit Alarm Condition dialog box, click Browse to select the workflow whose processing rate is to be supervised.

3. Enter a limit of Falls Below 50000 (batches, UDRs, Bytearray).

Figure 59. Configure an Alarm Condition

The Alarm will be triggered by every occurrence of a workflow slowing down its processing
rate to a throughput that is lower than 50000 units per second.


4. Working with Workflows


This chapter describes how you work with workflows and workflow groups.

4.1. Workflow
A workflow configuration enables you to create a workflow consisting of:

• A Workflow Template: the schema of the agents and routes that you draw on the Workflow template.

• A list of workflows that share the same settings and are included in the configuration.

• The Workflow properties:

1. Workflow Table: The appearance of the table that displays the list of workflows

2. Error handling

3. Audit settings

4. Execution options

Note! Every workflow configuration in MediationZone® is a single entity, regardless of the number of workflows it contains.

To create a new workflow configuration, click the New Configuration button in the upper left part
of the MediationZone® Desktop window, and then select Workflow from the menu. Select Workflow
Type and then click Create.

To open an existing Workflow Configuration, double-click the Configuration in the Configuration Navigator, or right-click a Configuration and then select Open Configuration(s)....

4.1.1. Workflow Types


Workflow configurations can be of four types:

• Batch

• Real-Time

• Task

• System Task

4.1.1.1. Batch Workflow


A batch workflow processes input that originates in a specific source, often a file. The workflow creates
batches from the data and processes them one by one.

In a batch workflow data is collected by a single collecting agent in a transaction-safe manner; A single
batch is collected (only once) and is fully processed by the workflow before the next batch is collected.

Batch workflows are mainly used in post-paid billing systems, for example, for handling batches of UDRs (files).

A batch workflow:


• Processes batches, one at a time

• Is started either manually or by a scheduled trigger

• Stops either when it finishes processing the input, or when being aborted.

Note! If the workflow aborts, the current batch can be reprocessed.

4.1.1.2. Real-Time Workflow


Real-Time workflows are applicable in systems such as prepaid Billing, where instant processing re-
quests need to be addressed as they occur.

In Real-Time workflows most of the collecting agents communicate in a two-way manner; they receive
requests and provide replies.

A real-time workflow:

• Can have more than one collecting agent

• Can process several UDRs simultaneously. See Section 4.1.2, “Multithreading”

Note! Due to multithreading in real-time workflows, unlike batch workflows:

1. The order of UDR processing cannot be guaranteed

2. Only global MIMs are used

• Once started, is always active. A real-time workflow is started either manually or by a scheduled
trigger, and stops either manually or due to an error.

• Processes in memory. Transaction safety must be handled prior to collection and after distribution.

• Real-time workflow error handling rarely leads to aborting the workflow. Errors are registered in
system log and the workflow continues to run. Note that you cannot embed an exception within the
processing agents. For further information about real-time agent error handling, see the relevant
agent's user guide.

Note! Real-time workflows use the Inter Workflow agent to forward data to a batch Workflow.

4.1.1.3. Task Workflows


Task workflows are single agent workflows that execute user defined SQL or shell scripts.

4.1.1.4. System Task Workflows


MediationZone® is delivered to you with system task workflows included. System task workflows
and workflow groups that include system task workflows enable you to perform background routines
such as log- and run-time data clean up from the platform.

System task workflows include:

• Alarm Cleaner


• Archive Cleaner

• Configuration Cleaner

• ECS Maintenance

• Statistics Cleaner

• System Backup

• System Log Cleaner

This section includes information about:

• Opening a System Task Workflow

• Modifying a System Task Workflow Configuration

4.1.1.4.1. Opening a System Task Workflow

You open a System Task Workflow from the Workflow Editor.

To Open a System Task:

1. Double-click a SystemTask workflow or workflow group in the Configuration Browser pane.

4.1.1.4.2. Modifying a System Task Workflow Configuration

You modify all the System Task workflow configurations at template level. The workflow properties
are all set to Final and cannot be modified.

• You can modify a System Task configuration, including its scheduling criteria, but you cannot
create or remove a System Task Configuration.

• The Archive Cleaner Workflow lets you modify only its scheduling criteria. For further in-
formation see Section 4.2.2.5.3, “Scheduling”.

To Modify a System Task Workflow:

1. Open the System Task Workflow that you want to modify.

2. Double-click the agent icon; the agent's configuration view opens.

3. Perform the changes and click OK.

4. Save the workflow.

4.1.1.4.3. Alarm Cleaner

The Alarm Cleaner Workflow enables you to periodically delete old Alarm messages from the database.

Figure 60. The Alarm Cleaner System Task


To configure the Alarm Cleaner System Task workflow, enter the number of days that define a period
during which an alarm message should remain in the database.

4.1.1.4.4. Archive Cleaner

The Archive Cleaner System Task enables you to remove old archived files from the file system.
Archive Cleaner operates according to data that it receives from the Archive profiles.

Note! You can modify only the scheduling criteria of the Archive Cleaner. See the Archive
profile manual. Since scheduling can only be applied to workflow groups, you modify the
Archive Cleaner scheduling from the workflow group configuration.

4.1.1.4.5. Configuration Cleaner

Enables you to specify the maximal age of an old Configuration before it is removed. See Section 3,
“Configuration”.

When the Configuration Cleaner is applied, every space is included. For further information on config-
uration spaces, see the Configuration Spaces documentation.

Note! You cannot remove the most recent Configuration with the Configuration Cleaner, only
historical ones.

To Configure the Configuration Cleaner:

1. Select a Configuration type from the table and then click the entry in the Keep column; a drop-down
list appears.

2. Select one of the following: Always, Days, or Versions.

Figure 61. The Configuration Cleaner System Task

Type The icon representation of the Configuration type.


Name The name of the Configuration type.
Keep Select the time unit with which you specify how long old configurations should remain in
the system.

• Always: Always keep configurations.


• Days: Keep configurations for a specified number of days.

• Versions: Keep only a certain amount of versions of the configurations. For example, the
last 10 versions.

Value Specifies the number of days or versions that represent the period during which configurations are kept.

4.1.1.4.6. ECS Maintenance

Enables you to remove old ECS data from the file system. For information about the ECS Maintenance
System Task workflow see the Error Correction System manual.

4.1.1.4.7. Statistics Cleaner

Enables you to remove old statistics data that has been collected by the Statistics server and stored in
the database.

Figure 62. The Statistics Cleaner System Task

Minute Level Records Specifies the number of days during which a minute-level record should
be kept in the database.
Hour Level Records Specifies the number of days during which an hour-level record should be
kept in the database.
Day Level Records Specifies the number of days during which a day-level record should be
kept in the database.

4.1.1.4.8. System Backup

Enables you to create a backup of all the Configurations in MediationZone® . A backup file is saved
on the host machine where the platform application is installed.

The System Backup files are stored under $MZ_HOME/backup/yyyy_MM, where yyyy_MM is
translated to the current year and month. The system saves a backup file and names it according to the
following format: backup_<date>.zip.

System Backup also enables you to specify the maximal age of backup files before they are removed
from the host disk.

The system backup Configuration dialog box is made up of two tabs:

• System Backup: Lets you enable the backup option

• Cleanup: Lets you configure the time during which a backup should be kept on disk before it is deleted


Figure 63. The System Backup System Task Workflow

Enable System Backup Check to enable the system backup. The default value is On.
Use Encryption Check to enable encryption of the backup
Password Enter a password

Figure 64. The System Backup Cleaner System Task - The Cleanup Tab

Imported Files Every time the System Importer imports a Configuration to the system, Me-
diationZone® saves it as a backup on the platform. Enter the period, in days,
during which the imported files should remain on disk.
System Backup Files Defines the maximal age of system backup files before they are removed
from the host disk.

4.1.1.4.9. System Log Cleaner

The System Log Cleaner deletes the System Log periodically. You set the frequency values for deleting
different message types, on the System Log Cleaner dialog box.

Figure 65. The System Log Cleaner System Task

Error/Disaster Enter the maximal age of Error and Disaster messages before they are removed
from the database.

Max value: 99 days

Default value: 30 days


Warning Enter the number of days during which Warning messages should be kept in data-
base.

Max value: 99 days

Default value: 20 days


Information Enter the number of days during which Information messages should be kept in the
database.

Max value: 99 days

Default value: 10 days

4.1.2. Multithreading
Multithreading enables a workflow to operate on more than one UDR at a time.

By default, while a batch workflow handles one active thread at a time, a Real-time workflow always
executes multithreaded.

A workflow that is configured for multithreading can only handle data of the UDR type between agents that are configured with a Thread Buffer. Other data types can, during the same period of time, still be processed anywhere else within the workflow.

By using asynchronous agents in a workflow that is configured with multithreading, you increase the
workflow multithreading capabilities even further.

4.1.2.1. Threads in a Real-Time Workflow


In real-time workflows, the Collecting agent continuously stores UDRs in a buffer at the beginning of
a workflow. UDRs are processed concurrently, and the processing order cannot be guaranteed. This
way, an agent might handle as many UDRs as the number of configured threads, simultaneously.

Figure 66. Realtime Workflow Multithreading

Note! Agents that route bytearray data in a real-time workflow do not use a buffer.

4.1.2.2. Threads in a Batch Workflow


To apply multithreading in a batch workflow, a UDR storage buffer has to be configured ahead of an
Aggregation-, Analysis-, or ECS agent. A delivering thread stores a UDR in the buffer and then fetches
the next UDR in turn. A processing thread pulls a UDR from the buffer, forwards it to the agent, and
then pulls the next UDR in turn. This way, when you add another buffer to the next agent, you also
add another thread to the workflow.


Figure 67. Batch Workflow Multithreading - A Buffer Adds a Thread

To configure a batch workflow agent with multithreading, use the Thread Buffer tab of the agent
Configuration. See Section 4.1.6.2.1, “Thread Buffer Tab”.

4.1.3. Workflow Menus


The contents of the menus in the menu bar may change depending on which configuration type has been opened in the currently displayed tab. A workflow configuration uses the standard menu items that are visible for all configurations, and these are described in Section 3.1.1, “Configuration Menus”.

The menu items that are specific for workflow configurations are described in the following sections:

4.1.3.1. The File Menu

Item Description
Print... Select to print out the workflow configuration.

4.1.3.2. The Edit Menu

Cut Cuts selections to the clipboard buffer.

Copy Copies selections to the clipboard buffer.

Paste Pastes the clipboard contents.

Workflow Properties Select this option to open the Workflow Properties dialog where workflow
related data is configured. For further information, see Section 4.1.8,
“Workflow Properties”.
Preferences Select this option to change the appearance of the workflow template. For
further information, see Section 4.1.6.3, “Visualization”.

4.1.3.3. Agents

Collection Use this option to select a Collection agent to include in the workflow. The menu to
choose from varies depending on the workflow type that you have opened, see Sec-
tion 4.1.1, “Workflow Types”. Note that you can also add an agent from the agent pane,
see Section 4.1.5, “Agent Pane” for more info.
Processing Use this option to select a Processing agent to include in the workflow. The menu to
choose from varies depending on the workflow type that you have opened, see Section 4.1.1, “Workflow Types”. Note that you can also add an agent from the agent pane,
see Section 4.1.5, “Agent Pane” for more info.
Forwarding Use this option to select a Forwarding agent to include in the workflow. The menu to
choose from varies depending on the workflow type that you have opened, see Sec-
tion 4.1.1, “Workflow Types”. Note that you can also add an agent from the agent pane,
see Section 4.1.5, “Agent Pane” for more info.

4.1.3.4. Workflows

Add Workflows... Use this option to select workflows to add to the Workflow table at the bottom
of the Workflow Editor. See Section 4.1.7, “Workflow Table” for further in-
formation.
Delete Workflow Select this option to remove the entire workflow that is associated to the cell
marked in the Workflow table. See Section 4.1.7, “Workflow Table” for further
information.
Filter and Search... Select this option to open a "filter and search bar" below the Workflow table.
See Section 4.1.7, “Workflow Table” for further information.

Open Monitor Select this option to open Workflow Monitor if the selected Workflow is
valid.
Import Table... Select this option to open an import dialog where you import an export file.
For further information, see Section 4.1.7, “Workflow Table”.
Export Table... Select this option to open an export dialog where you select and save workflow
configurations in a file. For further information, see Section 4.1.7, “Workflow
Table”.

4.1.4. Workflow Buttons


The contents of the button panel may change depending on which configuration type has been opened in the currently displayed tab. A workflow configuration uses the standard buttons that are visible for all Configurations, and these are described in Section 3.1.2, “Configuration Buttons”.

The additional buttons that are specific for workflow configurations are described in the following sections:

Button Description
Cut Cuts selections to the clipboard buffer.

Copy Copies selections to the clipboard buffer.

Paste Pastes the clipboard contents.

Workflow Properties Select this option to open the Workflow Properties dialog where workflow
related data is configured. For further information, see Section 4.1.8,
“Workflow Properties”.
Print... Select to print out the workflow configuration.

Zoom Out Zoom out the workflow illustration by modifying the zoom percentage
number that you find on the toolbar. The default value is 100(%). Clicking the button between the Zoom Out and Zoom In buttons will reset the zoom
level to the default value. Changing the view scale does not affect the Con-
figuration.
Zoom In Zoom in the workflow illustration by modifying the zoom percentage number
that you find on the toolbar. The default value is 100(%). Clicking the button
between the Zoom Out and Zoom In buttons will reset the zoom level to
the default value. Changing the view scale does not affect the Configuration.

4.1.5. Agent Pane


The area to the left is referred to as the agent pane. It contains the available agents, represented as icons. Different icons are available depending on which kind of workflow type is selected. The default setting is batch workflow.

Figure 68. Agent pane - Collection Tab

Real-time and batch workflow configurations contain three types of agents, sorted under three different tabs, depending on their introspection. In MediationZone®, introspection refers to the type of data an agent expects and delivers. The tabs are called: Collection, Processing and Forwarding.

Figure 69. The Different Agent Types

4.1.5.1. Collection Agents


A collection agent is responsible for gathering data from external systems or devices, for example by opening a file, reading it byte-by-byte, and then passing it on into the workflow.

A collection agent must have one or several outgoing introspection types and forwarding agents must
have one or several incoming introspection types. The workflow editor validates that the introspection
types between two connected agents are compatible.

Note! Real-time collection agents may also be receivers within the workflow. This is called bi-
directional capability and is used when the collector must respond back to the network element.


4.1.5.2. Processing Agents


A processing agent expects to be fed with data and to deliver data on one or several outgoing routes. Inside a workflow, data propagates between agents as streams, that is, as a flow of bytes or UDRs. A simple processing agent could for example be a counter counting the throughput. There are also more complex types, for instance, agents that, depending on the processing result, deliver data on different routes. The decoding agent decodes (translates) an incoming byte stream into a UDR object and the encoding agent does the opposite.

4.1.5.3. Forwarding Agents


A forwarding agent is responsible for distributing the data from the workflow to other systems or devices. An example of a forwarding agent activity is to create a file from a data stream and transfer it to another system using FTP.

Different agents act differently on the Begin Batch and End Batch messages. The forwarding agents,
for instance, need a set of boundaries in order to close a file or commit a database transaction.

4.1.5.4. Task Workflow Agents


By default there are two types of Task agents, one for script tasks and one for SQL tasks. If custom developed Task agents exist, they will also appear in the agent pane.

4.1.5.4.1. Script

A user may write a script to be executed regularly, usually to clean up directories and other items that need to be attended to periodically.

Warning! It is strongly recommended that you run script task workflows on a separate Execution
Context, especially if you are running Sun Solaris. Running script task workflows on the same
Execution Context as other workflows may cause unpredictable errors and loss of data.

During a short time before the exec() runs the actual script, the fork() functionality allocates the same amount of memory for the script as is used for the Execution Context. If the memory is not available, the Execution Context will abort with an out-of-memory error and must then be restarted.

For information about installation of an Execution Context, see the Installation Instructions.

Figure 70. Edit Task Configuration Window - Script

Script Name The name, including the full path, to the script performing the task.
Parameters Parameters expected by the script specified in Script Name. This field is optional.

4.1.5.4.2. SQL

A user may write SQL statements or SQL scripts to be executed regularly, usually to clean up tables that need to be attended to periodically.


Figure 71. Edit Task Configuration Window - SQL

Database Click Browse to select a Database profile. Note: You create database profiles in
the Database profile configuration.
SQL Statement Enter a PL/SQL script or an SQL statement in this text box.

Note: Group several SQL statements within a block. For a single SQL statement, omit the semicolon (;) at the end.
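
For illustration only, here is a minimal sketch of what could be entered in the SQL Statement field. The table and column names (old_sessions, old_audit_rows, created_at) are hypothetical, and the block form assumes an Oracle-style database with PL/SQL behind the selected Database profile:

-- A single statement: no trailing semicolon
DELETE FROM old_sessions WHERE created_at < SYSDATE - 30

-- Several statements: grouped within a PL/SQL block
BEGIN
    DELETE FROM old_sessions WHERE created_at < SYSDATE - 30;
    DELETE FROM old_audit_rows WHERE created_at < SYSDATE - 90;
    COMMIT;
END;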

4.1.6. Workflow Template


The workflow template is the area where the workflow is designed. The workflow template outlines one or several workflows.

Note! A workflow configuration cannot be activated; the workflows in it, however, can.

Figure 72. Workflow Configuration

You create a workflow configuration by using one of the following methods:

• Clicking the New Configuration button in the upper left part of the MediationZone® Desktop
window, and then selecting Workflow from the menu. Select a Workflow Type; Batch, Realtime,
or Task, and then Create.


• Dragging agents from the agents pane onto the template

• Double-clicking an agent on the agent pane

• Selecting an agent from the Agents menu

When the first agent is placed on the workflow template one row is automatically created in the
Workflow table.

To create a data flow, agents need to be connected to each other. To do that, click the left mouse button on the center of the source agent and, without releasing the left mouse button, move the pointer to the target agent and release it there. This will create a connection (route) between the two agents indicating the data flow.

All editing and triggering from the workflow template generate changes to the workflow configuration. Examples of this are adding and removing agents, altering agent positions, and editing agent settings and preferences. The workflow table will be affected if it includes columns that correspond to an agent removed from the workflow template.

When an agent is deployed into the workflow template, it receives a default name located underneath it. The same applies to routes when they are added. These names may be modified to ease identification in monitoring facilities and logs.

Resting the cursor over an agent in the template displays parts of its Configuration in a tool-tip text.

Figure 73. An Agent Tool-tip of a Workflow Configuration

4.1.6.1. Configuration

Note! Due to the agents' relationships within a workflow configuration, it is preferable that all agents and routes are added before the Configuration is started.

Each agent in the workflow configuration has a specific Configuration window named after the agent type. Each route in the workflow is by default either asynchronous or synchronous, but this can also be configured per route. These Configurations can be accessed by double-clicking the agent or route, or by selecting the agent or route and right-clicking to reveal a pop-up menu. The pop-up menu contains different options depending on whether you have selected an agent or a route.

4.1.6.1.1. Right-Click Menu for Agents

The right-click menu for agents contains the options Configuration, Copy/Paste/Cut, MIM Browser,
and Workflow Properties.

Figure 74. Right-Click Menu for Agents


The Configuration dialog contains configuration information located in tabs. The leftmost tab contains
configuration parameters that are unique for each agent while possible additional tabs are of a more
generic kind and may be recognized in other agent windows. A description of components in the first
tab is available in the user guide of the respective agent, while the remaining tabs are described in
Section 4.1.6.2, “Agent Services”.

Agents and their configurations can be copied using the Copy/Paste functions in the Edit menu. Select the source agent followed by Copy and then Paste. This will deploy a copy of the selected agent and its configuration.

An agent name is modified by selecting the name (clicking on it) and typing a new name. Agent names
can also be edited in the configuration dialog, displayed when the agent is double-clicked. Agent names
must be unique within a Workflow configuration and may only contain the a-z, A-Z, 0-9, "-" and "_"
characters. However, the agent name cannot begin with the characters "-" or "_".

4.1.6.1.2. Right-Click Menu for Routes

The right-click menu for routes contains the options Configuration, Copy/Paste/Cut, MIM Browser,
Workflow Properties, and Route Styles.

Figure 75. Right-Click Menu for Routes

Note! The Configuration option is only available for real-time workflows.

When selecting the Configuration, the Route dialog opens up.

Figure 76. Right-Click Menu for Routes

If you do not want to use the default configuration for the selected route, select the Override Default check box, select either the Asynchronous or Synchronous option, and click OK to save your changes. A small A (for Asynchronous routes) or S (for Synchronous routes) will be visible at the start of the route. If you want to see the type for all routes, and not just for the ones you have stated explicitly, you can select the option Show All Route Types in the Preferences dialog. See Section 4.1.6.3, “Visualization” for further information.

Note! Even though it is possible to configure bytearrays to be routed over an asynchronous route, bytearrays will always be routed in synchronous mode, regardless of how the route is configured, since the workflow UDR queues do not support bytearrays.


If you click on the Route Styles option, you can determine the appearance of the route; Orthogonal,
Bezier, or Straight.

A route name is modified by selecting the name (clicking on it) and typing a new name. Route names
must be unique within a Workflow Configuration and may only contain the a-z, A-Z, 0-9, "-" and "_"
characters. However, the route name cannot begin with the characters "-" or "_".

4.1.6.2. Agent Services


Some agent configuration windows are optionally equipped with additional tabs holding configurations for different common services. This section describes the general services that are available in MediationZone®. The user guide for each agent will in turn describe what services it uses.

4.1.6.2.1. Thread Buffer Tab

By default, a batch workflow utilizes one active thread at a time. By configuring a buffer storage for an agent, it will be possible for yet another thread to be created; this is also called multithreading. One thread will be populating the buffer, and another pulling data from it. Adding yet another buffer for another agent will add yet another thread, and so on.

This is especially useful in complex workflows with many agents. All batch agents that receive UDRs
can utilize this functionality.

Note! A workflow that is configured with multithreading can only handle data of the UDR type.
If bytearrays are routed into an agent utilizing this service, an exception will be thrown.

Open the Configuration window of the agent and select the Thread Buffer tab. The tab is present in
batch processing and batch forwarding agents.

Figure 77. The Thread Buffer Tab

Use Buffer Enables multithreading. For further information, see Section 4.1.2, “Multithreading”.
Print Statistics Statistics to be used when trying out where to use the Thread Buffer in the workflow. After each batch execution, the full and empty percentage of the threads utilizing the buffer is logged in the event area at the bottom of the Workflow Monitor window.

For information on how to interpret the results, see Section 4.1.6.2.1.1, “Analyzing
Thread Buffer Statistics”.

A UDR may be queued up while another thread is busy processing a reference to it. Workflows routing the same UDR on several routes and involving further processing of its data must consequently be reconfigured to avoid this. A simple workaround is to route the UDR to an Analysis agent for cloning before routing it to the other agents (one unique clone per route), as sketched below.
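
The following is a minimal, illustrative APL sketch of such a cloning step in an Analysis agent. The route names route_1 and route_2 are hypothetical, and the sketch assumes that the udrClone function and the two-argument, named-route form of udrRoute are available for the UDR type in use:

consume {
    // Send one unique clone per outgoing route, so no two threads
    // ever share a reference to the same UDR instance.
    udrRoute(udrClone(input), "route_1");
    udrRoute(udrClone(input), "route_2");
}

Each downstream agent then works on its own copy, so a clone queued in one Thread Buffer is never touched by processing on another route.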

4.1.6.2.1.1. Analyzing Thread Buffer Statistics

By using the Print Statistics alternative in the Thread Buffer Tab, buffer statistics will be logged
for the whole batch execution and show the full and empty percentage for the threads utilizing the
thread buffer. For information about multithreading in a batch Workflow, refer to Section 4.1.2.2,
“Threads in a Batch Workflow”.


Example 12. Example of a thread buffer printout

11:03:04: Buffer usage statistics [5] for 2089 turnover(s) of UDRs:


Incoming queue: Available 54%. Full 46%.
Outgoing queue: Available 59%. Empty 41%.

• The number within brackets, which is [5] in the example, is the batch counter id.

• Turnover is the total number of UDRs that have passed through the buffer.

• Available indicates how often (of the total turnover time) the buffer has been available for the
incoming queue to forward a UDR and for the outgoing queue to fetch a UDR.

• Incoming queue:

Full is logged for the incoming thread and indicates how often (of the total turnover time) the
buffer has been full and an incoming UDR had to wait for available buffer space.

In the example, Full indicates that for 46% of the incoming UDRs there was a delay because of
a full buffer.

• Outgoing queue:

Empty is logged for the outgoing thread and indicates how often (of the total turnover time) an
outgoing queue had to wait for data because of an empty buffer.

In the example, Empty indicates that for 41% of the attempts to fetch a UDR, the buffer was empty.

The percentage values for Empty and Full should be as low as possible, and as equal as possible. The
latter may be hard to achieve, since the agents may differ too much in processing complexity. If possible,
add and configure another agent to take over some of the processing steps from the most complex
agent.

See Section 4.1.6.2.1, “Thread Buffer Tab” for how to configure the thread buffer.

4.1.6.2.2. Filename Sequence Tab

For batch Collection agents such as Disk, FTP, and SFTP, there is a service available in the agent's
Configuration dialog, in the Filename Sequence tab. The Filename Sequence service is used when you
want to collect files containing a sequence number in the file name. The sequence number is expected
to be found at a specific position in the file name and has a fixed or dynamic size.

Note! When collecting from several sources, the Filename Sequence service is applied on the
data that arrives from all the sources, as though all the information arrives from a single source.
This means that the Filename Sequence number count is not specific to any of the sources.


Figure 78. Filename Sequence Tab

Enable Filename Sequence   Determines if the service will be used or not.
Start Position             The offset in the file name where the sequence number starts. The first character
                           has offset 0 (zero). In the example file name TTFILE0001 the start position
                           is 6.
Length                     The length of the sequence number, if the sequence number has a static length
                           (padded with leading zeros). If the sequence number length is dynamic, this
                           value is set to 0 (zero).

Example 13.

TTFILE0001-TTFILE9999 Length: 4

TTFILE1-TTFILE9999 Length: 0

Wrap On                    If the Filename Sequence service is enabled, a value must be specified on
                           which the sequence will wrap. This number should be larger than or equal to
                           Next Sequence Number and it must be larger than Wrap To.
Wrap To                    Value that the sequence will wrap to. This value must be less than the Wrap
                           On value and less than or equal to Next Sequence Number.
Warn On Out Of Sequence    If enabled, the agent will log an informative message to the System Log when
                           detecting an out of sequence file, before deactivating. The collection agent
                           will not continue to collect any files upon the next activation, until either the
                           missing file is found, or the Next Sequence Number is set to a valid value.

Note! Next Sequence Number is set for every workflow in the Workflow table in the
workflow configuration.
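
To make the Start Position, Length and wrap settings concrete, the following Java sketch shows how
a sequence number could be extracted from a name such as TTFILE0001 and how the wrap rule could
be applied. It is an illustration only, not the agent's implementation, and all names in it are hypothetical.

// Illustration of the Filename Sequence settings described above.
// Start Position 6 and Length 4 match the example file name TTFILE0001.
public class FilenameSequenceSketch {

    static long extractSequence(String fileName, int startPosition, int length) {
        // Length 0 means a dynamic length: read from the start position to the end of the name.
        String part = (length == 0)
                ? fileName.substring(startPosition)
                : fileName.substring(startPosition, startPosition + length);
        return Long.parseLong(part);
    }

    static long nextAfter(long current, long wrapOn, long wrapTo) {
        // When the sequence reaches Wrap On, it continues from Wrap To.
        return (current >= wrapOn) ? wrapTo : current + 1;
    }

    public static void main(String[] args) {
        System.out.println(extractSequence("TTFILE0001", 6, 4)); // 1
        System.out.println(nextAfter(9999, 9999, 1));            // 1 (wrapped)
        System.out.println(nextAfter(42, 9999, 1));              // 43
    }
}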

4.1.6.2.3. Sort Order Tab

The Sort Order service is available on some batch collection agents and is used to sort matched files
before collection.

The sort pattern is expected to occur on a specific position in the file name or to be located using a
regular expression.


Note! Regular expressions according to Java syntax apply. For further information, see:

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Figure 79. Sort Order Tab

Enable Sort Order Determines if the service will be utilized or not.


Modification Time Select to enable file collection order according to the modification time stamp.
If Sort Direction is set to Ascending, oldest files are collected first. The time
resolution depends on server and protocol.

Most FTP and SCP servers follow the Unix modification date format for
file time stamps. The modification date resolution is one minute for files
that are time stamped during the last six months. After six months a res-
olution of one day is applied.

Value Pattern The method used to locate the item (part of the file name) to be the target for
the sorting. This could be either Position that indicates that the item is located
at a fixed position in the file name or Regular Expression indicating that the
item will be fetched using a regular expression.
Position If Position is enabled, the Start Position value states the offset in the file name
where the sorting item starts. The first character has offset 0 (zero).

The Length value states the length of the sorting item (part of the file name) if
it has a static length (padded with leading zeros). If the length of the sorting item
(part of the file name) is dynamic, the default value zero (0) will be used.
Regular Expression If enabled, the sorting item is extracted from the file name using the regular ex-
pression. If the file name does not end with a digit this option is the proper
choice.


Example 14.

\d+ locates the first digit sequence in a file name.

Sort the following files in alphabetical or numerical order:

FILEA_1354.log
FILEB_23.log
FILEC_1254.log

Use \d+ as the regular expression. Depending on the selected Sort Direction, the files
are sorted in the following order:

FILEC_1254.log
FILEA_1354.log
FILEB_23.log

Sort Type Type of sorting. Could be either Alphanumeric or Numeric.


Ignore Case If enabled, the sorting is not case sensitive.
Sort Direction Indicates if the sort order will be Ascending or Descending.
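
To illustrate the kind of extraction and ordering described in Example 14 above, the following Java
sketch uses the regular expression \d+ as the sorting item. It is a conceptual sketch only, not the
agent's implementation; the numeric comparison shown corresponds to the Numeric sort type.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Conceptual sketch: extract the sorting item with \d+ and order the file names by it.
public class SortOrderSketch {
    private static final Pattern ITEM = Pattern.compile("\\d+");

    static String sortItem(String fileName) {
        Matcher m = ITEM.matcher(fileName);
        return m.find() ? m.group() : "";
    }

    public static void main(String[] args) {
        List<String> files = new ArrayList<>(
                List.of("FILEA_1354.log", "FILEB_23.log", "FILEC_1254.log"));

        // Alphanumeric, ascending: compares the extracted items as strings.
        files.sort(Comparator.comparing(SortOrderSketch::sortItem));
        System.out.println(files); // [FILEC_1254.log, FILEA_1354.log, FILEB_23.log]

        // Numeric, ascending: compares the extracted items as numbers.
        files.sort(Comparator.comparingLong(f -> Long.parseLong(sortItem(f))));
        System.out.println(files); // [FILEB_23.log, FILEC_1254.log, FILEA_1354.log]
    }
}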

4.1.6.2.4. Filename Template Tab

The Filename Template service is available to batch forwarding agents that are responsible for creating
a file. The configuration contains MIM resources for all available agents in the workflow, whose values
may be used when constructing a filename for the outgoing file.

Since this service includes a selection of MIM resources from available agents in the workflow, it is
advised to add all agents to the workflow, and to assign route and agent names, before the filename
template configuration is completed.

Note! Filename Template also provides you with so-called Dynamic Directory support. This
means that you can change the output directory during execution of a workflow whose input
data is bytearray. See Input Data in the Target tab of the configuration of your relevant forwarding
agent.

By creating directories and subdirectories whose names consist of MIM values, and by adding
appropriate APL code, you configure the output directory to sort the outgoing data into directories
that are created during the workflow execution. For further information see Section 4.1.6.2.5,
“Defining a MIM Resource of FNTUDR Type”.


Figure 80. Filename Template Tab

The table containing MIM resources, user defined values, separators and/or directory delimiters
defines the file path or file name. The order of the items in the table defines the order within the
file path or file name.

Create Non-Existing Directories Checkbox   When checked, non-existing directories stated in the path
                                           will be created. If unchecked, the agent will abort if a needed
                                           directory is missing.

Figure 81. Filename Template Tab - Add or Edit Dialog

MIM Defined           Determines if the Value will be selected from a MIM resource.

                      A MIM resource of type FNTUDR is represented in the template table in the
                      same way as other MIMs, but it will have a different appearance when the
                      filename or file path is presented. A MIM FNTUDR value can represent a sub path
                      with delimiters, or a part of a filename or a directory. For further information about
                      how to use the FNTUDR in filename templates, see Section 4.1.6.2.5, “Defining
                      a MIM Resource of FNTUDR Type”.
User Defined          Determines if the Value will be a user defined constant entered in the text field.
Directory Delimiter   Determines if the Value will be a directory delimiter, indicating that the file sub
                      path will have a directory delimiter at that specified position. It is not allowed to
                      have two directory delimiters directly after each other, or to have a delimiter at
                      the beginning or end of a filename template.

                      A MIM resource of the special UDR type FNTUDR can include, begin and/or
                      end with directory delimiters; this must be noted when adding delimiters in the
                      template. For further information about using the FNTUDR in filename templates,
                      see Section 4.1.6.2.5, “Defining a MIM Resource of FNTUDR Type”.
Size Number of allocated characters in the file name for the selected MIM resource
(or user defined constant). If the actual value is smaller than this number the re-
maining space will be padded with the chosen padding character. If left empty the
number of characters allocated in the file name will be equal to the MIM value or
the constant.
Padding Character to pad remaining space with if Size is set. If Size is not set this value is
ignored.
Alignment Left or right alignment within the allocated size. If Size is not set this value is ig-
nored.
Separator Separating character to add to the file name after the MIM value or constant.
Date Format Adds a timestamp to the file name in the selected way. For further information
about the format, see Section 2.2.5, “Date and Time Format Codes”.


Example 15.

Assume a workflow containing a Disk collection agent named DI1 and two Disk forwarding
agents named DO1 and DO2. The desired output file names from both forwarding agents are
as follows:

A-B-C.DAT Where A is the name of the Disk forwarding agent, B is the name of the currently
collected file, C is the number of UDRs in the outgoing file and .DAT is a customer suffix. It
is desired that the number of UDRs in the files takes up six characters and is aligned right.

The following file name template configuration applies to the first agent:

The following file name template configuration applies to the second agent:

If two files FILE1 and FILE2 are processed, where 100 UDRs go to DO1 and 250 go to DO2
from each file, the resulting file names would be:

DO1-FILE1-000100.DAT
DO2-FILE1-000250.DAT
DO1-FILE2-000100.DAT
DO2-FILE2-000250.DAT
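
The padding, alignment and separator behaviour in this example can be reproduced with ordinary
string formatting. The following Java sketch is illustrative only; the agent and file names are taken
from the example above, and the helper itself is not part of the product.

// Illustrative sketch of the file name construction in Example 15:
// <agent name>-<source file name>-<UDR count padded to 6, right aligned>.DAT
public class FilenameTemplateSketch {
    static String buildName(String agentName, String sourceFile, long udrCount) {
        String counter = String.format("%06d", udrCount); // size 6, padded with '0', right aligned
        return agentName + "-" + sourceFile + "-" + counter + ".DAT";
    }

    public static void main(String[] args) {
        System.out.println(buildName("DO1", "FILE1", 100)); // DO1-FILE1-000100.DAT
        System.out.println(buildName("DO2", "FILE1", 250)); // DO2-FILE1-000250.DAT
    }
}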

4.1.6.2.5. Defining a MIM Resource of FNTUDR Type

A MIM resource with a value of the FNTUDR type included in the filename template is treated
somewhat differently from other MIM resources. An FNTUDR value is a text string that can contain
delimiters. The delimiters in the FNTUDR value will be replaced by directory delimiters when
determining the target file path. The FNTUDR is defined in the FNT folder.


Example 16.

The following example shows a set of APL code that creates a FNTUDR value and publishes
it as a MIM resource. To use the FNTUDR value in a filename template, the MIM resource
must be added in the filename template configuration.

import ultra.FNT;

mimPublish( global, "FNTUDR Value", any );

consume {
    FNTUDR fntudr = udrCreate(FNTUDR);

    fntAddString(fntudr, "dir");
    fntAddString(fntudr, "1");
    fntAddDirDelimiter(fntudr);
    fntAddString(fntudr, "dir2");
    fntAddDirDelimiter(fntudr);
    fntAddString(fntudr, "partOfFileName");

    mimSet("FNTUDR Value", fntudr);

    udrRoute(input);
}

The following filename template configuration utilizes the FNTUDR published by the APL
code.

The resulting output in the previous example will be saved in a file with the following sub path
from the root directory:

dir0/dir1/dir2/partOfFileName20070101_1

The part dir1/dir2/partOfFileName derives from the FNTUDR Value.

For further information about how to manipulate FNTUDRs with APL functions and how to publish
MIM resources, see the APL Reference Guide and Section 2.2.10, “Meta Information Model”.

4.1.6.3. Visualization
In the workflow editor, zoom the workflow illustration in or out by modifying the zoom percentage
number on the tool bar. The default value is 100 (%). To change the zoom value, click the
increase or decrease icons. Clicking the button between these icons will reset the zoom level to the
default value. Changing the view scale does not affect the configuration.

It is also possible to change the appearance of the workflow template in other ways. It is done by
opening the Preferences dialog found in the Edit menu.

Figure 82. The Preferences Dialog Box

Route Style Sets the style of the routes in the workflow. Route style can also be changed for
one specific route, not affecting the entire workflow. This is done by right-clicking
the route in the workflow and selecting Route Styles. The default route style is
Bezier.

Note! In a real-time workflow a fourth kind of routing type appears by


default when a response is returned to an agent that sent out a request. The
route is shown as a dot-dashed line.

Grid Style             Determines how the grid should be displayed; Invisible, Dot or Line. Invisible
                       is the default.
Grid Size              With this slider you can change the grid density. A large number will increase
                       the distance between agents.
Show All Route Types   The route type, asynchronous or synchronous, is indicated with a small bold A
                       or S for all routes where the type has been configured explicitly, see
                       Section 4.1.6.1.2, “Right click menu for routes”. However, if you want to display the
                       route type for routes with default configuration as well, you can select this option.

Note! This option is only visible when you are in a real-time workflow.

4.1.7. Workflow Table


The workflow table is located at the bottom of the Workflow Configuration window.

There must always be at least one runnable workflow per workflow configuration; otherwise the
configuration will not be valid.

The three leftmost columns gather the workflow meta data: Valid, ID, and Name. A workflow table
will always contain these columns. The ID is automatically generated, starting at 1, and is unique
within the workflow configuration. The Name is generated based on the ID, for example 'Workflow_1'.
The names can be edited; however, if two workflows have exactly the same name, a validation
error will occur.

The workflow table will be populated depending on settings made in the Workflow Properties dialog
Workflow Table tab. For example adding rows and field type settings is done there and will propagate
changes in the workflow table.


Apart from the first three columns, the columns in the table represent fields of default and per workflow
type. See Section 4.1.8, “Workflow Properties” for further information about the field types. Default
fields have the default value displayed in the instance table as <Val>, where Val represents the actual
default value. If no default value is set in the agent in the template, < > is displayed. If the field is of
a per workflow type and not yet defined, an error message is shown in the cell pointing out that the
cell is not valid.

The following icons can appear in the workflow table:

Workflow is not validated yet


Workflow is valid and runnable
Workflow is not valid
The cell is not valid
Click this button within the cell; the Edit Execution Settings
dialog box opens.
The field contains an External Reference

The table can be sorted by clicking a column heading. Both headings for one single column
and ones that collect a number of headings can be used to change the order, either descending or
ascending.

4.1.7.1. Right-Click Menu

Edit cell   The command is used to put the cell in edit mode, if the content is allowed to be
            altered. If it is not, the Edit Cell command will be grayed out and not possible to
            select. Editing can be enabled either by double-clicking in the cell or by
            typing any key on the keyboard.

            A cell can be locked to a certain input type, and some cells will only accept numbers
            or a string.
Clear cell The command is used to remove the content of the selected cell. If it is a field of
Default type this command will change the cell content to the default value set
in the template. If the cell content must not be cleared, the command will be
grayed out from the menu and will not be possible to select.
Edit from default The command is used to edit the cell by inserting the default value for that field
that you set in the template. The command is only available for cells that render
from fields of Default type.

Note! The default value inserted is considered to be a changed value. If


the reason to set the value is to return to the template default value mode,
Clear Cell is the appropriate choice.

Enable External Select this to mark the field as an External Reference. Then, through Edit Cell,
Reference you enter the Local Key reference. The value is applied during the workflow run-
time. For further information see Section 9.5, “External Reference Profile”.
Disable External Select this to remove the External Reference value as well as mode. For further
Reference information see Section 9.5, “External Reference Profile”.
Add Workflow The command adds a workflow (a row) at the bottom of the table. The added
workflow will instantly get an ID and Name.

If a greater number of workflows is to be added at the same time, use Number


of Workflows to Add in the Workflow Properties dialog.


Add Workflows The command adds the number of workflows (rows) that you specify.
Delete Workflow The command removes the entire workflow that is associated to the cell marked.
If removed the ID number of that workflow will never again return within that
workflow configuration. To remove more than one workflow, select all the relevant
cells.
Duplicate Workflow         The command duplicates the entire workflow that is associated with the marked
                           cell. The new workflow is added at the bottom of the table. More than one cell
                           can be marked to enable duplication of several workflows at a time. Note: New
                           IDs and Names are generated.
Show Validation Message    The command will open an information dialog where a message regarding the
                           validity of the template, workflow and cell is stated. The dialog must be closed
                           to return to the Configuration.
Show Specific References   Select to see the references of the specific workflow.

                           Note: Selecting Show References from the View menu displays references that
                           are relevant to the workflow configuration.
Open Monitor Opens Workflow Monitor if the selected workflow is valid.
Export Table Opens an export dialog-box where you select and save workflow configurations
in a file. With this export file you transfer and update workflow table data either
on your current machine or on a different client. The export file can be created
in any of the following formats:

• .csv

• .ssv

• .tsv

The .csv export file contains a header row, comma (,) delimited fields, and text
values that are delimited by a quotation mark(").

Exported fields that contain profiles are given a unique string identifier. The ID
and Name fields are exported as well.

In the export file, External References are enclosed in braces ({}) and preceded
by a dollar symbol ($). For example: ${mywf_abcd}. For further information
see Section 9.5, “External Reference Profile”.
Import Table   Opens an import dialog box where you import an export file. This file might
               contain, for example, data that has been saved in the workflow table, locally or
               on a different client. This command supports the following file formats:

• .csv

• .ssv

• .tsv

Importing a table may result in any of the following scenarios:

1. If the ID number of the imported workflow (a table row) is identical to the ID
   number of one of the rows that are already in the table, the imported entry is
   written over the existing one.

2. If the ID number of the imported workflow is -1, the imported entry is added
   to the bottom of the table.


Note: MediationZone® keeps track of the number of rows that you add to the
workflow table by using a row counter. If the row counter number is 98, and
the imported workflow's ID is -1, the imported workflow is stored with 99 as
the ID number.

3. If the ID number of the imported workflow does not exist in the table and is
not equal to -1, the imported entry is added to the table. The ID number remains
the same number as it was in the import file.

Filter and Search The command opens a search and filter bar below the workflow table. The search
can be performed for all columns. In the search field the search words or numbers
can be entered. When Find next is selected the search starts. The filter feature
works on workflow name only. The workflow table is updated as you're typing
text in the Filter Name field.

Using all lower case letters in the search and filter text field will result in case
insensitive search and filtering. If upper case letters are used anywhere in the text
field the search will be case sensitive.

The search and filter bar is closed by selecting the x symbol to the left of the bar.
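
As an illustration only of the export format described under Export Table above, a .csv export file
could look like the following. The column names and the External Reference key are hypothetical;
the actual columns depend on the fields shown in the workflow table.

ID,Name,Source Directory,Collection Host
1,"Workflow_1","/data/in/site1","${wf1_host}"
2,"Workflow_2","/data/in/site2","host2.example.com"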

4.1.8. Workflow Properties


Workflow specific data is altered in the Workflow Properties dialog, opened by selecting Workflow
Properties in either the Edit menu or the pop-up menu, found by right-clicking anywhere on the
template.

4.1.8.1. Workflow Table Tab


This tab enables you to select the information that you want displayed on the workflow configuration
table, and set specific table cells to different value modes such as Final, Default, or Per Workflow.

Figure 83. The Workflow Table Tab

Field Selection   Use this drop-down list to display in the table below either the fields of a certain
                  agent or, by selecting Show All, all fields.
Show Final Check to list in the table below only the fields that are set to Final.


Show Unavailable Check to list the fields that are write-protected; The unavailable fields are
listed grayed-out and set to Final.
Name The names of fields of the workflow table are listed on this column and include:

• Execution Settings

• Throughput MIM

• Debug Type

• All the agents and their fields in the Workflow Configuration

Final Check to block this variable from appearance on the workflow editor table.

Note: This variable can still be modified, but only from its configuration. For
example: if the variable belongs to an agent, open the agent configuration
dialog-box to modify the variable.
Default You can check to set the field value to Default only if it is already set to a
certain value in the configuration. You can modify a Default value from the
workflow table; the default value remains in the field and appears grayed-out,
and the new value appears in black text on its left, within the same field.
Per Workflow Check to be able to set the value of the relevant field for each workflow on
the workflow table, separately.

Note: If you cannot set the profile per workflow in the Workflow table, see
the user guide of the agent that the profile is assigned to, for further informa-
tion.
Enable External Reference   Check to enable the use of external reference values from within the workflow
                            table. For example: a properties file. For further information see Section 9.5,
                            “External Reference Profile”.
Profile                     Click Browse to specify the External Reference profile. For further information
                            see Section 9.5.1.3.1, “To create an External Reference profile:”.
Number of Rows to Add       Enter the number of workflows that you want to add to this configuration; the
                            workflow table will grow longer accordingly.

                            Note: The workflow table is limited to a maximum of 500 rows.

To Edit the Execution Context Field:

1. On the Workflow Table tab check either Default or Per Workflow for the Execution Context
field.

2. Click OK.

3. On the workflow table, right-click the Execution Context field that you want to edit, and select Edit
cell; the table cell now includes a ... button.

4. Click the ... button; the Edit Execution Settings dialog-box opens. For a detailed description see
Execution Settings in Section 4.1.8.4, “Execution Tab”.

4.1.8.2. Error Tab


The Error tab contains configurations related to error handling of the workflow. Since the batch concept
does not exist for real-time workflows, the batch related options in this tab are only valid for batch workflows.


Figure 84. The Error Tab

Abort Immediately                        If enabled, the workflow immediately aborts on the first Cancel Batch message
                                         from any agent in the workflow. The erroneous data batch is kept in its original
                                         place and must be moved/deleted manually before the workflow can be
                                         started again.
Abort After X Consecutive Cancel Batch   If enabled, the value of X indicates the number of allowed Cancel Batch calls
                                         from any agent in a workflow before the workflow will be aborted. The
                                         counter is reset between each successfully processed data batch. Thus, if 5
                                         is entered, the workflow will abort on the 6th data batch in a row that is reported
                                         erroneous. All erroneous files, but the last one, are removed from the
                                         stream and placed in ECS.
Never Abort                              The workflow will never abort. However, as with the other error handling
                                         options, the System Log is always updated for each Cancel Batch message,
                                         and files are sent to ECS.

ECS Batch Error UDR

A UDR that contains information on selected MIMs can be associated with the batch. This is useful
when reprocessing a batch from ECS; the fields of the Error UDR will appear as MIMs in the collecting
workflow.

The batch UDR may also be populated from Analysis or Aggregation agents. This is useful when you
want to enter values other than MIMs.

The ECS Batch Error UDR section will be grayed out until one of the Abort after X consecutive cancel
batch or Never abort alternatives is selected.

Error Code Drop-down list where an Error Code as defined in the Error Correction System
Inspector can be selected.
Error UDR Type The error UDR to be associated with the batch. The appropriate format can be
selected from the UDR Internal Format Browser dialog opened by selecting
the browser button.

Depending on the selected UDR type the columns UDR Field and MIM Resource
will be populated.
UDR Field A list of all the fields available for the selected Error UDR Type.


MIM Resource The MIM Resource column will be populated by clicking (Map to MIM...). The
preferred MIM to map to the Error UDR Type fields can then be selected from
the MIM Browser dialog.
Logged MIMs    The Error MIMs column holds information on which MIM resources are to be logged
               in the System Log when the workflow aborts or sends UDRs and batches to ECS.
               These values may also be viewed from ECS (the MIM column).

               The most relevant resources to select are items that identify the data batch,
               such as the Source Filename, if available.

Note that only a short summary of the functionality is given here. For further information,
see the MediationZone® Error Correction System user's guide.

4.1.8.3. Audit Tab


The Audit tab in the Workflow Properties window defines what information to enter in the user
table(s) as defined in the Audit profile configuration. Either the MIM values are entered from this
window, or anything may be sent on with the APL audit functions (from Analysis or Aggregation
agents).

Note! The audit tab is only valid for Batch workflows.

Figure 85. Workflow Properties Dialog - Audit Tab

Enable Audit Turns on audit for the workflow.


Profile Click Browse to select an audit profile from the drop-down list. For further
information see Section 9.1, “Audit Profile”.
Log Audit on Cancel If enabled, audit is logged for canceled batches. Note, counter column values
Batch will be reset for canceled batches, while value columns will keep the value
they had when the batch was canceled.
Column Name The name of the columns as defined in the selected Audit profile.
Type The type of the columns as defined in the Audit profile.
Assign By Indicates if the column will be populated using APL functions or via config-
uration using existing MIM values. Valid options are:

Method - The column will be populated through any of the APL functions
constraining or auditSet. The first is used on Counter columns, and
the latter on Value columns.

MIM - The column will be populated with the MIM value selected in the
MIM Resource column.


System - Reserved for the Transaction Id.


MIM Resource Valid only if MIM is selected in Assign By. Double-click the cell to display
a dialog from which existing MIM values may be selected.

4.1.8.4. Execution Tab


The Execution tab contains settings that are related to where the workflow will be executed.

In Batch Workflow:

Figure 86. The Batch Workflow Execution Tab

Execution Settings   Check Enable to enable setup of the execution parameters.
Distribution         A Workflow executes on an Execution Context (EC). Specific ECs, or groups of ECs, may
                     be specified by the user, or the MediationZone® system can handle it automatically.

Note! If you select to configure the distribution using EC groups, the selected distri-
bution type will also be applied on the ECs within the groups.

Hint You can combine both individual ECs and EC groups in the Execution Con-
texts list. The selected distribution will then be applied for all ECs stated either in-
dividually or in groups.

The following options exist:

• Sequential - Valid only if Execution Contexts are defined. Starts the workflow on the
first EC/EC group in the list. If this EC/EC group is not available, it will proceed with
the next in line.


• Workflow Count - Starts the workflow on the EC running the fewest number of
workflows. If the Execution Contexts list contains at least one entry, only this/these
ECs/EC groups will be considered.

• Machine Load - Starts the workflow on the EC with the lowest machine load. If the
Execution Contexts list contains at least one entry, only this/these ECs/EC groups will
be considered. Which EC to select is based on information from the System Statistics
sub-system.

• Round Robin - Starts the workflow on the available ECs/EC groups in turn, but not
necessarily in a specific order. If EC1, EC2 and EC3 are defined, the workflow may
first attempt to start on EC2. The next time it may start on EC3 and then finally on EC1.
This order is then repeated. If an EC is not available, the workflow will be started on
any other available EC.

Debug Type   Select Event to channel debug results (see Debug in APL coding) as any other event.

             Select File to save debug results in MZ_HOME/tmp/debug. The file name is made up of
             the names of the workflow template and of the workflow itself, for example:
             MZ_HOME/tmp/debug/Default.radius_wf.workflow_2.

If you save debug results in a file, and you restart the workflow, this file gets overwritten
by the debug information that is generated by the second execution. To avoid losing debug
data of earlier executions, set Number of Files to Keep to a number that is higher than 0
(zero).

• Number of Files to Keep: Enter the number of debug output files that you want to save.
When this limit is reached, the oldest file is overwritten. If you set this limit to 0 (zero),
the log file is overwritten every time the workflow starts.


Example 17.

The workflow template Default.radius_wf includes a workflow that is called
workflow_2. Number of Files to Keep is set to 10.

The debug output folder contains the following files :

Default.radius_wf.workflow_2 (current debug file)


Default.radius_wf.workflow_2.1 (newest rotated file)
Default.radius_wf.workflow_2.2
Default.radius_wf.workflow_2.3
Default.radius_wf.workflow_2.4
Default.radius_wf.workflow_2.5
Default.radius_wf.workflow_2.6
Default.radius_wf.workflow_2.7
Default.radius_wf.workflow_2.8
Default.radius_wf.workflow_2.9
Default.radius_wf.workflow_2.10 (oldest rotated file)

According to this example there is a total of 11 files that are being overwritten
one by one, and the rotation order is:

Default.radius_wf.workflow_2
|
V
Default.radius_wf.workflow_2.1
|
V
Default.radius_wf.workflow_2.2
|
V
:
:
|
V
Default.radius_wf.workflow_2.n
|
V
Deleted

• Always Create a New Log File - Use this option to create a new debug output file each
time the workflow executes. In this case a timestamp will be appended to the file name
described above.

Example 18.

If we have a workflow named Default.radius_wf with an instance called workflow_2,
and new debug output files are created every time the workflow executes, the debug
output folder will contain files looking something like this:

Default.radius_wf.workflow_2.1279102896375
Default.radius_wf.workflow_2.1279102902908
Default.radius_wf.workflow_2.1279102907149


Note! MediationZone® will not manage the debug output files when this option
is used. It is up to the user to make sure that the disk does not fill up.

Throughput Calculation   MediationZone® contains an algorithm to calculate the throughput of a running workflow.
                         It locates the first agent in the workflow configuration that delivers UDRs, usually the
                         decoder, and counts the number of passed UDRs per second. In case no UDRs are passing
                         through the workflow, the first agent delivering raw data will be used. The statistics can
                         be viewed in the System Statistics.

                         If a MIM value other than the default is preferred for calculating the throughput, tick the
                         User Defined checkbox. The browser button opens a MIM Browser dialog where the
                         available MIM values for the workflow configuration are shown and a new calculation
                         point can be selected.

                         Since the MIM value shall represent the amount of data entered into the workflow since
                         the start (for batch workflows, from the start of the current transaction), the MIM value
                         must be of a dynamic numeric type, as it will change as the workflow is running.

In Real-time workflow:

Figure 87. The Realtime Execution Tab

Execution Settings   Check Enable to enable setup of the execution parameters.
Distribution         A Workflow executes on an Execution Context (EC). Specific ECs, or groups of ECs,
                     may be defined by the user, or the MediationZone® system can handle it automatically.

Note! If you select to configure the distribution using EC groups, the selected
distribution type will also be applied on the ECs within the groups.


Hint You can combine both individual ECs and EC groups in the Execution
Contexts list. The selected distribution will then be applied for all ECs stated
either individually or in groups.

The following options exist:

• Sequential - Valid only if ECs/EC groups are defined. Starts the workflow on the
first EC in the list. If this EC is not available, it will proceed with the next in line.

• Workflow Count - Starts the workflow on the EC running the fewest number of
workflows. If the Execution Contexts list contains at least one entry, only this/these
ECs/EC groups will be considered.

• Machine Load - Starts the workflow on the EC with the lowest machine load. If the
Execution Contexts list contains any entries, only this/these ECs/EC groups will be
considered. Which EC to select is based on information from the System Statistics
sub-system.

• Round Robin - Starts the workflow on the available ECs/EC groups in turn, but not
necessarily in a specific order. If EC1, EC2 and EC3 are defined, the workflow may
first attempt to start on EC2. The next time it may start on EC3 and then finally on
EC1. This order is then repeated. If an EC is not available, the workflow will be
started on any other available EC.

EC Determines on what Execution Context(s), EC(s), the workflow may execute. If several
are entered, the selected Distribution is considered. If no EC is selected, MediationZone®
will consider all available ECs as possible targets.

An EC is added by selecting Add and then selecting one of the available ECs in the presented list.

A stand-alone workflow must be configured to run on a stand-alone EC. Only one stand-alone
EC can be configured.

Note! Agents in a stand-alone workflow will validate against the configured ECSA. If an agent
depends on another ECSA, the workflow will be invalid, since in that case it relies on another
ECSA and a network failure could cause the workflow to abort.

ECSA          Check to enable execution that is independent of the platform.
Threads       The default value is 8.
Queue Size The number of unprocessed entries (backlog) that the workflow can store in a buffer
before the collector is slowed down. The workflow and its back-end systems might slow
its processing activity when the number of requests rises. To avoid congestion, while
the records or decoding tasks are in the queue, the queue intake is delayed in order to
limit the backlog from growing too fast. Default value is 1000.

The value that you enter here is the size of each route's queue in the workflow.
Queue Worker Strategy   By selecting Queue Worker Strategy, you can determine how the workflow should
                        handle queue selection, which may be useful if you have several different collectors.
You have the following options:

• Default


With the Default strategy, queues are selected in route insertion order. As long as
there are queued UDRs available on the first queue, that queue will be polled. This
means that routes with later insertion order may not receive as many UDRs as they
have capacity for, and get no or little throughput. This type of condition may be de-
tected by looking at the Queue Throughput for workflows in the System Statistics
view.

Use this strategy only if this is not an issue.

This is the preferred choice when you work synchronously with responses and process
small amounts of UDRs at any given time (which is not the same as low throughput).

• RoundRobin

The RoundRobin strategy works in the same way as the Default strategy, except
that each workflow thread will be given its own starting position in the routing queue
list. This means that as long as the number of workflow threads is equal to, or greater
than, the number of routing queues, no queue will suffer from starvation.

Faster routes will get more load than slower ones.

This option provides pretty fair distribution.

Use this strategy if the number of workflow threads is equal to, or greater than, the
number of routing queues, and it is desirable to prioritize faster routes before slower
ones.

Note! The insertion order depends on how close to an "exit", i.e. an agent without
any configured output, the queues are. The queues that are closest to an exit will
be inserted first, and the further a queue is from an exit, the further back in the
insertion list the queue will be.
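
The difference between the two strategies can be illustrated with a small sketch of the queue polling
logic. This Java sketch is conceptual only and is not the actual workflow engine; the queue interface
and thread index are simplified assumptions.

import java.util.List;

// Conceptual sketch of queue selection for a worker thread.
// Default: every thread scans the routing queues in insertion order.
// RoundRobin: each thread starts its scan at its own offset, so no queue starves
// as long as there are at least as many threads as queues.
public class QueueWorkerStrategySketch {

    interface RoutingQueue {
        Object poll();          // returns a queued UDR, or null if the queue is empty
    }

    static Object pollDefault(List<RoutingQueue> queues) {
        for (RoutingQueue q : queues) {          // always starts from the first queue
            Object udr = q.poll();
            if (udr != null) {
                return udr;
            }
        }
        return null;
    }

    static Object pollRoundRobin(List<RoutingQueue> queues, int threadIndex) {
        int n = queues.size();
        int start = threadIndex % n;             // per-thread starting position
        for (int i = 0; i < n; i++) {
            Object udr = queues.get((start + i) % n).poll();
            if (udr != null) {
                return udr;
            }
        }
        return null;
    }
}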

4.1.8.5. Services Tab


In the Services tab you have two predefined services, Supervision Service and Couchbase Monitor
Service. You can also add and configure different workflow services that have been developed using
the MediationZone® Development Toolkit.

Note! The Services tab is only available for real-time workflows.


Figure 88. Workflow Properties Dialog - Services Tab

Add Workflow Service Click the Add... icon to select a service to be used by the workflow. In
the Add Workflow Service dialog, select a service from the list and
click Apply after each selected service. Click OK when finished.
Remove Workflow Service Select a service from the Services list and click the Remove icon to
remove the service from the workflow.

4.1.8.5.1. Couchbase Monitor Service

4.1.8.5.1.1. Overview

This section describes the Couchbase Monitor Service. With this service you can access the current
status of Cluster Nodes that belong to a configured Couchbase profile.

The service publishes MIM values that enable workflows to detect if a cluster is online and the number
of nodes that are available.

Based on this information, the workflows can be configured to mitigate connection problems, e.g. by
attempting to connect to a different Couchbase Cluster.

4.1.8.5.1.2. Configuration of the Couchbase Monitor Service

To open the configuration for the Couchbase Monitor Service, open the Workflow Properties dialog
in a real-time workflow configuration, click on the Services tab, click on the Add button, select the
Couchbase Monitor Service option and click OK.

Figure 89. Configuration for Couchbase Monitor Service

Click Browse... and select the Couchbase profile you want to apply.


Note! The Monitoring setting must be enabled in the selected Couchbase profile in order to
use the Couchbase Monitor Service.

4.1.8.5.1.3. Meta Information Model

For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

4.1.8.5.1.3.1. Publishes

MIM Value Description


Cluster Available Node Count   This MIM parameter contains the number of available Cluster Nodes that are
                               referenced by the selected Couchbase profile.

                               Cluster Available Node Count is of the int type and is defined as a
                               Global MIM context type.
Cluster ID                     This MIM parameter contains a unique identifier for the cluster that is referenced
                               by the selected Couchbase profile.

                               Cluster ID is of the string type and is defined as a Global MIM context type.
Cluster Monitored              This MIM parameter indicates if the cluster is monitored. When the value of
                               this parameter is false, the cluster is not monitored and the other MIM
                               parameters pertaining to the Couchbase Monitor Service may not be updated.

                               Cluster Monitored is of the boolean type and is defined as a Global
                               MIM context type.
Cluster Online                 This MIM parameter contains the status of the cluster that is referenced by the
                               selected Couchbase profile. The value true means that at least one of the
                               Cluster Nodes is available.

                               Cluster Online is of the boolean type and is defined as a Global MIM
                               context type.
Profile Name                   This MIM parameter contains the name of the selected Couchbase Profile.

                               Profile Name is of the string type and is defined as a Global MIM
                               context type.

4.1.8.5.1.3.2. Accesses

The service does not itself access any MIM resources.

4.1.8.5.2. Supervision Service

This section describes the Supervision Service. With this service you can create decision tables for
triggering different actions to be executed based on current MIM values. This may, for example, be
useful for overload protection purposes.

4.1.8.5.2.1. Overview

The Supervision Service uses decision tables, where you can define different actions to be taken
depending on which conditions are met; log in the System Log, use an overload protection configuration,
or generate an event. You can use any MIMs available in the workflow for configuration of conditions,
e.g. throughput, queue size, etc.


Figure 90. Supervision Service Concept

The Supervision Service is available for real-time workflows, and can be configured in the Services
tab in the Workflow Properties dialog. The supervision service with action overload can also be
manually triggered with mzsh commands, in case it is needed for maintenance or other purposes. If
the service is manually triggered, it has to be reverted to automatic mode for the settings in the Services
tab to take effect once again. See the Command Line Tool user's guide for further information.

4.1.8.5.2.2. Configuration of the Supervision Service

To open the configuration for the Supervision Service, open the Workflow Properties dialog in a
real-time workflow configuration, click on the Services tab, click on the Add button, select the Super-
vision option and click OK.

The configuration for the Supervision service will now appear on the right side of the Services tab.


Figure 91. Configuration for Supervision Workflow Service

Execution Interval (ms) Enter the time interval, in milliseconds, with which current MIM values
should be checked against the conditions in the decision tables. This config-
uration will be valid for all decision tables.
Decision Tables All the decision tables you have configured are listed in this section. Click
on the Add button to add a new decision table. It may be a good idea to
have different decision tables for different purposes.

Note! Even though you can change the order of your decision tables, this will not affect the
functionality. All decision tables will be applied.

4.1.8.5.2.2.1. Creating a Decision Table

When you select to add a new decision table, the Add Decision Tables dialog will open.

Figure 92. Creating a Decision Table


In this dialog you configure your decision table. In a decision table you determine which action to take
depending on which conditions are met. These conditions and actions are configured in separate lists
and will then be available for selection in the decision table configuration.

Decision Table Enter a name for your decision table in the Name field.
Table Parameters Click on the buttons Action Lists and Conditions Lists to configure the different
conditions and actions for this table.
Decisions Each configured condition list will be displayed in the Conditions column. Each
of these conditions can be set to either True, False or -. If you want to add more
columns for setting up different combinations of conditions, you can right click
on the Action column heading and select to add more columns. For each column
with a condition combination, you can then select which action to take in the
drop-down list containing all configured actions.

You configure your decision tables by following these steps:

1. Configure conditions. The conditions you configure are based on different MIM parameters having
defined values.

2. Configure actions. When configuring the actions you can select to have a Supervision event gener-
ated, to reject messages, or to log an entry in the System Log. Rejection can be made on all messages
or on a certain percentage; 0, 25, 50, or 100 %.

Hint! If you are using Diameter or Radius agents in your workflow, you can also select from
a range of Diameter and Radius specific overload protection strategies in order to only reject
specific types of messages. See Section 13.2, “Diameter Agents” and Section 13.1, “Radius
Agents” for further information about these strategies.

3. Create the decision table, i.e. set the conditions to either True, False or - (which means ignore) and
select which action to take.

4. Set a name.

5. Click Add and repeat steps 1 to 4 for all the decision tables you want to create.

Note! When a condition is evaluated to true, the corresponding action will be performed only
once, until any other condition is also evaluated to true. Generally, this means that a minimum
of two conditions is required in the decision table.

4.1.8.5.2.2.1.1. Configure conditions

To configure conditions:


Figure 93. Add Condition

Left Operand    Select the MIM parameter you want to use for your condition in this section.
Operator list   This is the drop-down list located between the two operands. Select either > (larger
                than), < (smaller than), == (equals), or != (not equal).
Right Operand   Select what the selected MIM parameter and operator should match; either another
                MIM parameter, or a constant.

1. In the Add Decision Tables dialog, click on the Condition Lists button.

2. Click on the Add button to open the Add Conditions dialog.

3. Click on the Add button to add conditions.

4. In this dialog, select a MIM value for the left operand.

5. Select an operator.

6. Enter a constant, or select a MIM value, for the right operand.

7. Click Add to add the condition to the condition list.

8. Repeat step 4 to 7 until you have added all the conditions you want to have in the condition list and
then click Close when you are finished.

You will return to the Add Conditions dialog.

Figure 94. Add Conditions

List Enter a name for the condition list in the Name field.
Match Select if you want all the conditions in the list to be matched or, if only one condition
is required to match by selecting either of the buttons Any of the Following or All
of the Following.
Conditions This section contains all the different conditions you have added to the list.


9. Select if you want to match all conditions in the list, or if you want to match one of the conditions
in the list.

10. Give the list a name and click on the Add button to add the condition list in the Create Decision
Tables dialog.

11. Repeat steps 3 to 10 until you have created all the condition lists you want to have and then click
Close when you are finished.

You will return to the Create Decision Tables dialog.

4.1.8.5.2.2.1.2. Configure actions

To configure actions:

1. In the Add Decision Tables dialog, click on the Action Lists button.

2. Click on the Add button to open the Add Actions dialog.

3. Click on the Add button to add actions.

Figure 95. Add Action

Action   In this drop-down list you select whether you want an entry to be logged in the System
         Log, an Overload Protection configuration to be applied, or a Supervision Event to be
         generated that can be sent to various targets depending on how you configure your Event
         Notifications.

Note! The Overload Protection option is only available if you have Diameter
or Radius agents in your workflow.

Description Enter a description for this action.


Reject This option is only available when you have selected to configure an Overload
Protection action and determines the percentage of requests that should be rejected;
0, 25, 50, or 100 %.
Strategy This option is only available when you have selected to configure an Overload
Protection action and determines if you want to apply this action on all types of
requests or if you only want to apply them to the requests following a selected
Diameter or Radius overload protection strategy. See Section 13.2, “Diameter
Agents” and Section 13.1, “Radius Agents” for further information about the Dia-
meter and Radius overload protection strategies.
Content This option is only available when you have selected to configure a Supervision
Event action and determines the content of the event. See Section 5.5, “Event Types”
for further information about event notification configuration.
Severity This option is only available when you have selected to configure a System Log
action and determines the severity of the log entry; Information, Warning, Error, or
Disaster.
Message This option is only available when you have selected to configure a System Log
action and allows you to enter a message that will be visible in the System Log.


4. In this dialog, select which type of action you want to use; System Log, Overload Protection or
Supervision Event. Depending on what you choose, the options in the dialog differs.

5. If you have selected an Overload Protection action:

• Enter a description in the Description field.

• Select the percentage of messages you want to reject; 0, 25, 50 or 100 % in the Reject drop-down
list.

• In the Strategy drop-down list, you select if you want the action to be applied for all requests,
or only for requests following any of the Diameter overload protection strategies.

6. If you have selected a Supervision Event action:

• Enter a description in the Description field.

• Enter the event content in the Content field. This content can then be used when configuring
Event Notifications for this event.

7. If you have selected a System Log action:

• Enter a description in the Description field.

• Select a severity in the Severity drop-down-list.

• Enter an optional message in the Message field.

8. Click Add to add the action to the action list.

9. Repeat step 4 to 8 until you have added all the actions you want to have in the action list and then
click Close when you are finished.

You will return to the Add Actions dialog.

Figure 96. Add Actions

List Enter a name for the action list in the Name field.
Action This section contains all the actions you have added in this list.

10. Give the list a name and click on the Add button to add the action list in the Create Decision Tables
dialog.

11. Repeat steps 3 to 10 until you have created all the action lists you want to have and then click Close
when you are finished.

You will return to the Create Decision Tables dialog.

4.1.8.5.2.2.1.3. Configuring a Decision Table

In the Decision Table tab you will now have two columns; Conditions and Actions.


Figure 97. Creating a Decision Table

The Conditions column contains all the condition lists you have created, and in the Actions column
you can set a condition to either True, False, or - (Ignore), and then select which action you
want to trigger when the settings in the decision table match.

Depending on how many conditions you have configured, there may be many different combinations
that you may want to configure different actions for. To add another column, right click on the Action
column heading and select the option Add Column.... A new column will then be added. This can be
repeated for all the different combinations you want to have.

Note! Only one action can be selected for each set of combinations.


4.1.8.5.2.2.1.4. Example

Figure 98. Decision table example

The following actions will be taken during the following conditions:

Case 1

If the Incoming messages exceeds 50, a Supervision Event will be triggered.

Case 2

If the Incoming messages exceeds 100, 25 % of the incoming Diameter Credit-Control Initial requests
will be rejected.

Case 3

If the Incoming messages exceeds 150, 100 % of the incoming Diameter Credit-Control Initial requests
will be rejected.
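
Conceptually, the decision tables above behave like a periodic evaluation of conditions against the
current MIM values, where the matching column decides the action. The following Java sketch
illustrates that idea only; the thresholds are taken from the example above, and everything else
(names, action handling, the if/else ordering) is a simplified assumption.

// Conceptual sketch of the decision-table logic in the example above.
// Evaluated at the configured Execution Interval; not MediationZone code.
public class SupervisionSketch {

    enum Action { SUPERVISION_EVENT, REJECT_25_PERCENT, REJECT_100_PERCENT, NONE }

    static Action evaluate(long incomingMessages) {
        if (incomingMessages > 150) {          // Case 3
            return Action.REJECT_100_PERCENT;
        } else if (incomingMessages > 100) {   // Case 2
            return Action.REJECT_25_PERCENT;
        } else if (incomingMessages > 50) {    // Case 1
            return Action.SUPERVISION_EVENT;
        }
        return Action.NONE;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(60));   // SUPERVISION_EVENT
        System.out.println(evaluate(120));  // REJECT_25_PERCENT
        System.out.println(evaluate(200));  // REJECT_100_PERCENT
    }
}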


4.1.8.5.2.3. MZSH Commands

In case you need to manually trigger or clear the supervision service with action overload, e.g. for
maintenance or other purposes, you can use the mzsh wfcommand. See the Command Line Tool
User's Guide for further information.

4.1.9. Validation
Workflow configurations may be designed, configured, and saved step-by-step, but they are not valid
for activation until fully configured. A valid workflow configuration contains three types of
Configuration data:

• Workflow data: General information related to the workflow configuration for instance, error
handling.

• Workflow structure data: Contains the agents and routes. A route indicates the flow of data depending
on the name of the route and the internal behavior of its source agent.

• Agent specific data: Each agent has a different behavior. Thus, each agent in the workflow config-
uration requires different configuration data in order to operate.

When clicking the Validate button or the Validate menu item the workflow configuration validation
is started. The validation is done in two steps:

1. Validation of the workflow configuration. If the workflow configuration is invalid, a Validation
   Error dialog is opened showing details, such as whether routes are missing, referenced MIM
   resources are no longer available, or configuration data in an agent is missing. The details can be
   changed by modifying the agent configuration after Close is selected. If the workflow configuration
   is not valid, the validation process will end there.

2. If the workflow configuration is valid, the validation of the workflow table starts. The values in the
   table are validated according to each agent's specifications. It is also checked that values have
   been entered in all cells in the per workflow columns. The result is presented in a validation dialog
   and possible workflow errors are indicated in the workflow table. The validation message for a
   specific workflow can be viewed by selecting the corresponding action in the pop-up menu. If none
   of the workflows in the workflow configuration is valid, the following error message is shown:

This configuration does not have any valid workflows.

If the reason why the workflow configuration is erroneous is not evident, the Validate button can be
applied to a row, or rows, and will display a dialog with error message(s).

When data is imported to the workflow table, the content is not validated; only the number of columns
and their types are checked. If validation errors occur during the import, the user is asked whether the
import should be aborted or continued (that is, imported with errors). Aborting an import results in a
rollback to the previous table.

When a workflow is saved it is silently validated, and if some of its configuration is invalid or missing,
a dialog will state this and ask whether to save the workflow anyway. Validity is not necessary in order
to save a workflow configuration; the workflow can be incomplete or the agent configuration can be
faulty. The only exception is that all workflows in the workflow configuration must have unique names.
The workflow symbol on the window border and in the Open dialog is marked with a red cross if the
workflow is not valid.

Note! External References are validated only during run-time.


4.1.10. Version Management


The user may save new versions of a workflow while it is running or scheduled. In that case the
workflow is reloaded to the most recent version upon activation, or when it has finished the current
execution. If the most recent workflow version is invalid, the workflow will abort when trying to load
that version.

A real-time workflow does not usually fall back to scheduled mode and therefore will not automatically
pick up changes made to the workflow. Some real-time agents can be modified while they are running;
however, these changes will not be saved. See Section 2.2.2, “Dynamic Update”.

4.1.11. Workflow Monitor


The Workflow Monitor is an application controlling workflow execution and presenting a detailed
view of the workflow execution status.

Agent states and events can be monitored during workflow execution and the monitor also allows for
dynamic updates of the configuration for certain agents, through sending of commands. A command
can, for example, tell an agent to flush or reset data in memory. For further information about applicable
commands, see the relevant agent user's guide. See also Section 2.2.2, “Dynamic Update”.

Note! The workflow monitor can apply commands only on one workflow at a time, in one
monitor window per workflow. The monitor functionality is not available for groups or the
whole workflow configuration. The workflow monitor window displays the active version of
the workflow.

Monitoring a workflow does not imply exclusive rights to start or stop it; the workflow can be activated
and deactivated by another user, or by scheduling, while it is monitored.

4.1.11.1. Opening the Workflow Monitor


The Workflow Monitor is opened from either a Workflow configuration or the Execution Manager.
To open the monitor from a Workflow Configuration, select the Workflow to execute, right-click on
it, and then select Open Monitor.

To open the monitor from Execution Manager, see Section 7.6.1.3, “ The Right-Click Menu”.


Figure 99. Workflow Monitor Window

4.1.11.1.1. Menus

The Workflow Monitor has three menus: File, Edit, and Event, which are described in further detail
below.

File

Print Select to print the workflow.

Close Select to close the Workflow Monitor.

Edit

Menu option Description


Start Starts the currently loaded workflow.

Profiler Starts the currently loaded workflow using workflow profiling, see
Section 4.1.11.2, “Workflow Profiling”.

Stop Stops the currently executing workflow.


Dynamic Update Updates a running workflow with a new configuration that has been
entered in monitor mode. Agents that support update of the configuration
while running will be updated with the new configuration. Once the
update has been introduced to the workflow, the user is informed whether
anything was affected. Only the currently executing workflow is affected
by the dynamic update. See Section 2.2.2, “Dynamic Update”.
Toggle Debug Mode On/Off Turns debug information for a workflow on or off.

Note that turning on Debug mode might slow down the workflow due
to the extra logging.
View/Edit Workflow Click to open the workflow editor view.

Event

Events for Selected Agents Monitors events for selected Agents.

For further information, see Section 4.1.11.3, “Viewing Agent Events”.


Events for All Agents Monitors events for all agents.

For further information, see Section 4.1.11.3, “Viewing Agent Events”.

4.1.11.2. Workflow Profiling


When a workflow is executed using "Workflow Profiling", MediationZone® continuously samples
the load, that is, the number of processed UDRs, or the amount of raw data, per second. The average
load for a workflow is calculated on a regular basis, and the status for all routes and agents is determined
based on this average.

The workflow monitor visually shows the load status for agents and routes, using symbols to indicate
which agents are the slowest and which routes the data usually takes through the workflow. This way,
it is possible to find bottlenecks in the workflow.

The load status only gives an indication of slow agents and routes; it does not necessarily mean that
there is a critical problem. An Aggregation agent, for example, often has a higher load than many other
agents, since this agent might access disk storage and can be configured with complex business logic.

Note! The workflow profiling is only active when executing a workflow from the workflow
monitor, using the "Profiler" button (see Section 4.1.11.1.1, “Menus”).

When running a Workflow through scheduling (see Section 4.2.2.5.3, “Scheduling”), or using
the "Start" button in workflow monitor, there will be no profiling active.

4.1.11.2.1. Profiling for Agents

The following formula is used to calculate the average load for agents:

100 % / Number of agents in the workflow = Average Load

The load status is calculated based on the average load and is indicated by a colored symbol close to
the agent symbol:


Normal Load:

Load is lower than Average Load x 2

High Load:

Load is equal to Average Load x 2 or between Average Load x 2 and Average Load x 3

Very High Load:

Load is equal to or higher than Average Load x 3

Unknown Load:

There is no known data load. This could be because the agent is a Collection Agent which
cannot publish statistics, or there is not enough data load to be able to calculate an average.
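
For example (an illustration, not output from the application): in a workflow with five agents, the
Average Load is 100 % / 5 = 20 %. An agent accounting for 15 % of the sampled processing is below
2 x 20 % = 40 % and is shown with Normal Load, an agent at 45 % falls between 40 % and 60 % and
is shown with High Load, and an agent at 65 % is at or above 3 x 20 % = 60 % and is shown with
Very High Load.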

4.1.11.2.2. Profiling for Routes

The following formula is used to calculate the average load for routes:

100 % / Number of routes in the workflow = Average Load

The load status is calculated based on the average load and is shown using different thickness on the
routes:

Normal Load:

Load is lower than Average Load x 2


High Load:

Load is equal to Average Load x 2 or between Average Load x 2 and Average Load
x 3
Very High Load:

Load is equal to or higher than Average Load x 3


Unknown Load:

There is no known data load. This could be because there is not enough data load to
be able to calculate an average.

4.1.11.2.3. Debug Considerations

If Debug is turned on, the profiling result might become misleading since the debugging increases the
data load through agents and routes. To get a more correct result, turn off the Debug capability by using
the option Toggle Debug Mode On/Off.

4.1.11.3. Viewing Agent Events


While a workflow is running, agents report progress in terms of events. These events can be monitored
in the event area in the bottom of the window. The following steps have to be performed in order to
view these events.

1. To view events for all agents in the workflow, select Events for All Agents from the Event menu.

2. To only view events for some selected agents, click each agent while holding down the <Ctrl> key
on the keyboard.


a. Select Events for Selected Agents from the Event menu.

b. If desired, rearrange the size and position of each column.

4.1.11.4. Workflow States


All running workflows are monitored by the workflow server on the Platform. A workflow can be in
any one of the following states:

Figure 100. The Workflow State Diagram

State Description
Aborted At least one of the workflow agents has aborted. You track the reason for the error
either by double-clicking the aborted agent, or by examining the System Log.
Building When a workflow, or any of a workflow's referenced configurations, is being rebuilt,
for example when saving or recompiling, the workflow will be in the Building state.

Figure 101.

When a workflow is in the Building state, the Configuration Monitor icon in the status
bar will indicate that operations are in progress, and in the Workflow Monitor, the text
"Workflow is building" will also be displayed. Workflows started by scheduling
configurations will wait until the workflow leaves the Building state before they start.
Executed A workflow becomes Executed after one of the following:

Running: A successful execution has been completed.

Unreachable: After the platform re-establishes connection with the Execution Context,
and the workflow has finished execution during the period of
disconnection.
Manual stop: A user stopped the execution.

Hold A workflow that is in the Idle state and is being imported, either by mzsh systemimport
r | sr | sir | wr or by the System Importer configured to Hold Execution, enters the
Hold state until the import activity is finished. The workflow then resumes its Idle
state.
Idle Until you execute the workflow for the first time it is in the Idle state. After execution,
although the workflow is indeed idle, the state shown on the display might remain any
of the following: Executed, Completed, Aborted, or Not Started.
Invalid The workflow configuration is erroneous. Once you correct the error the workflow
assumes the Idle state. Note: A workflow in the Invalid state cannot be executed.
Loading The platform is uploading the workflow to the Execution Context. When the transfer
is complete, the Execution Context initializes the agents. When the workflow starts
running, the state changes to Running.
Running The Workflow is currently executing
Unreachable If the platform fails to establish connection with the EC where a workflow is executing,
the workflow will enter the unreachable state. When the workflow server successfully
reestablishes the connection, the workflow will be marked as Running, Aborted, or
Executed, depending on the state that the workflow is in. An Unreachable workflow
may require manual intervention if the workflow is not running any more. For further
information see Section 1.2, “Execution Context”.
Waiting The Waiting state applies only to workflows that are included as members in a workflow
group. In the Waiting state the workflow cannot start execution due to two parameters
in the workflow group configuration: The Startup Delay parameter, and the Max
Simultaneous Running quota. A Workflow in the Waiting state will change to Running
when triggered either by a user, the scheduling criteria of its parent workflow group,
or by a more distant ancestor's scheduling criteria.

4.1.11.4.1. Viewing Abort Reasons

In most cases if a workflow has aborted, one of its agents will have the state Aborted displayed above
it. Double-clicking such an agent will display a dialog containing the abort reason. Also, the System
Log holds valid information for these cases.

In a batch workflow a detected error will cause the workflow to abort and the detected error will be
shown as part of the abort reason and inserted into the System Log. A real-time workflow handles errors
by only sending them to the System Log. The only time a real-time workflow will abort is when an
internal error has occurred. It is therefore important to pay attention to the System Log or subscribe
to workflow error events to fully understand the state of a real-time workflow.

Note! Although a workflow has aborted, its scheduling will still be valid. Thus, if it is scheduled
to execute periodically, it will be automatically started again the next time it is due to commence.
This is because the cause of the abort might be a lost connection to a network element, which
could become available again later. Therefore, a periodically scheduled workflow, which has aborted,
is treated as Active until it is manually deactivated.

Figure 102. Status Details for an Aborted Agent

Click Show Trace; the Stack Trace Viewer opens. Use the information that it provides when consulting
DigitalRoute® Global Support.


4.1.11.5. Agent States


An agent's state is displayed above the agent. An agent can have one of the following states:

Created The agent is starting up. No data may be received during this phase. This state only exists
for a short while during workflow startup.
Idle The agent is started, awaiting data to process. This state is not available for a real-time
workflow.
Running The agent has received data and is executing.
Stopped The agent has successfully finished processing data.
Aborted The agent has terminated erroneously.

The error reason can be tracked either by double-clicking the agent or by examining the
System Log.

4.1.11.6. Workflow Execution State


The Workflow Execution State provides granular information about a workflow's current execution
status. In APL, there are function blocks that are called for each workflow execution state. For more
information about which APL functions may be used in each state, see the APL Reference Guide.

Real-time agents only have three different execution states, which occur at every execution of the
workflow.

Figure 103. Execution flow for realtime agents

The batch agents are more complex and contain additional states in order to guarantee transaction
safety.

Figure 104. Execution flow for batch agents

State Description


initialize The initialize state is entered once for each invocation of the workflow.
During this phase the workflow is being instantiated and all agents are set up
according to their configuration.
beginBatch This state is only applicable for batch workflows.

At every start of a new batch, the batch collection agent will emit a beginBatch
call. All agents will then prepare for a new batch. This is normally done every time
a new file is collected, but can differ depending on the collection agent.
consume The agents will handle all incoming UDRs or bytearrays during the consume
state.
drain This state is only applicable for batch workflows.

When all UDRs within a batch have been processed, the agents enter the drain
state. This state can be seen as a final consume state, with the difference that there
is no incoming UDR or bytearray. The agent may, however, send additional
information before the endBatch state.
endBatch This state is only applicable for batch workflows.

The Collection agent will call for the endBatch state when all UDRs or bytearrays
have been transferred into the workflow. This is normally done at the end of the
file, but it depends on the collection agent, or on when a hintEndBatch call is
received from any agent capable of utilizing APL code.
commit This state is only applicable for batch workflows.

Once the batch is successfully processed or sent to ECS, the commit state is
entered. During this phase, all actions that concern transaction safety will be
executed.
deinitialize This is the last execution state for each workflow invocation. The agents will clean
and release resources, such as memory and ports, and stop execution.
cancelBatch This state is only applicable for batch workflows.

If an agent fails to process a batch, it may emit a cancelBatch call, and the
settings in Workflow Properties define how the workflow should act. For
more information regarding the Workflow Properties, see Section 4.1.8.2, “Error
Tab”.

Note! The execution states cancelBatch and endBatch are mutually exclusive - only one
per batch can be executed.

rollback This state is only applicable for batch workflows.

If the last execution of the workflow aborted, the agents will enter the rollback
execution state right after the initialize state. The agents will recover the state
prior to the failed transaction and then enter beginBatch or deinitialize,
depending on whether there are additional batches to process.
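
As an illustration of how these states relate to APL, the sketch below shows the function blocks of a
batch-oriented APL agent, for example an Analysis agent, with one block per execution state listed
above. This is a minimal, hedged sketch: the built-in input variable and the udrRoute function are
assumed to be available in consume, and the APL Reference Guide should be consulted for the exact
set of blocks and functions available in each state.

initialize {
    // Entered once per workflow invocation, before any data is processed.
}
beginBatch {
    // Batch workflows only: prepare for a new batch, typically a new file.
}
consume {
    // Called for every incoming UDR or bytearray.
    udrRoute(input);   // route the record unchanged (illustration only)
}
drain {
    // Batch workflows only: last opportunity to emit data before endBatch.
}
endBatch {
    // Batch workflows only: the whole batch has been transferred into the workflow.
}
commit {
    // Batch workflows only: the batch is secured; finish transaction-related work.
}
rollback {
    // Batch workflows only: recover the state prior to the failed transaction.
}
deinitialize {
    // Entered once per invocation: release resources and stop execution.
}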

4.1.11.7. Agent Configuration


Some agents allow parts of their configuration to be changed while active. When you double-click an
agent in monitor mode, the agent's configuration is shown in the displayed window if the agent can be
reconfigured. When the configuration has been updated, it can be committed to the active workflow
using the Dynamic Update button in the toolbar.


4.1.11.8. Transactions
A workflow operates on a data stream, and MediationZone® supplies a transaction model where
persistent synchronization is made of the workflow data and agent-specific counters. In theory, the
synchronization could be performed continuously on a byte level, but in practice this would drastically
decrease the performance of the system.

The MediationZone® transaction model is based on the premise that Collection agents are free to
initiate a transaction towards the Transaction server. At that moment, the complete workflow is frozen
and the Transaction server saves the state of the workflow data that is queued for each agent. In practice,
agents indirectly emit a transaction when an End Batch is propagated. When all data is secured, the
workflow continues its execution.

4.1.12. Deactivation Issues


When a deactivation request is issued from the Execution Manager (or directly from the Workflow
Monitor), different dialogs appear depending on the type of workflow to deactivate.

Some agents are designed to wait for acknowledgment from sources they communicate with. Thus, a
stop request may take a while before it is acknowledged. If a network element connected to a
MediationZone® Collection agent has terminated in a bad state, causing the Collection agent to hang,
the Execution Context on which the workflow is running must be restarted.

UDRs already in the workflow will be processed if this can be done within the time interval set in
the ec.shutdown.time parameter, found in the executioncontext.xml file.

The parameter specifies the maximum time in milliseconds that the execution context will wait before
a real-time workflow stops after a shutdown has been initiated. This is to enable the workflow to stop
all input and drain all UDRs in the workflow before stopping.

Note!

<property name="ec.shutdown.time" value="20000"/>

The wait time is initially set to 20 seconds. If this value is set to 0, draining is skipped and
the workflow stops immediately.

The parameter can be changed at any time; however, the execution context must be restarted before
the changes take effect. For further information, see the System Administration user guide.

If the workflow is unable to drain the data within the specified time the workflow will still stop and
any remaining data in the workflow will be lost. If this occurs a log note will be added in the System
Log.

4.1.12.1. Real-Time Workflows


Real-time workflows are deactivated immediately, accepting no more input data.

If an Inter Workflow forwarding agent is included in the workflow, the last file might be incomplete.
For these cases, the error handling is taken care of by the corresponding Inter Workflow collection
agent.

4.1.12.2. Batch Workflows


Batch workflows have two termination possibilities, indicating whether the End Batch will be waited
for or not. If the batches are large, and a batch has just been loaded by the workflow, the Immediate
option will terminate the workflow without waiting for the current batch to finish.


Figure 105. Confirmation Dialog When Batch Workflows are Deactivated

Batch Awaits the next End Batch before unloading the workflow, that is, when the current
batch is fully processed.
Immediate Deactivates the workflow immediately, causing the current batch to be terminated. This
may still take a while; however, it is faster than the Batch termination option.

4.2. Workflow Group


The workflow group configuration enables you to manage workflow groups. A workflow group can
consist of one or several workflows and/or workflow groups, which enables you to manage several
workflows as a single entity. Each member can have its own setup of scheduling, load balancing, and
event notification.

In this section you will find all the information you need to create, configure, and execute a workflow
group.

4.2.1. Creating a Workflow Group Configuration


To create a new workflow group configuration, click the New Configuration button in the upper left
part of the MediationZone® Desktop window, and then select Workflow Group from the menu.

To open an existing Workflow Group configuration, double-click the configuration in the Configuration
Navigator, or right-click a configuration and then select Open Configuration(s)....


Figure 106. A Workflow Group Configuration

4.2.1.1. Workflow Group Menus


The contents of the menus in the menu bar may change depending on which configuration type has
been opened in the currently displayed tab. The workflow group configuration uses the standard menu
items that are visible for all configurations, and these are described in Section 3.1.1, “Configuration
Menus”.

The menu items that are specific for a workflow group configuration are described in the following
sections:

4.2.1.1.1. Edit

Menu option Description


Prerequisites Select this option to open the Prerequisites dialog where you can decide the execu-
tion order of the different group members.

4.2.1.1.2. View

Menu option Description


Show Member Tool Bar The Member toolbar is located beneath the Group Members table, and
consists of three buttons: Remove, Up, and Down.

Make sure this option is selected if you want to have these buttons visible
in the view. To remove the buttons, clear the check box for this option.


Configuration Filter Enables you to include or exclude the following from the Available to
Add list:

• Workflow Groups

• Workflows

• Batch and Task workflows

• Realtime workflows

4.2.1.2. Workflow Group Buttons


The contents of the button panel may change depending on which configuration type has been
opened in the currently displayed tab. The workflow group configuration uses the standard buttons
that are visible for all configurations, and these are described in Section 3.1.2, “Configuration Buttons”.

4.2.1.3. Workflow Group Tabs


A Workflow Group configuration has three tabs: Members, Execution, and
Scheduling, which are described in further detail below.

4.2.1.3.1. Members

Item Description
Available to add Upper pane: Displays a tree view of the workflows and workflow groups that are
saved within their respective configurations, and are available to you to add as
members when creating a new workflow group.

Lower pane: Displays a list of workflows that are included in the workflow config-
uration that you select from the upper pane.

A workflow group can be a member of another workflow group.

Group Members Shows the currently added workflow group members.

4.2.1.3.2. Execution Tab

Contains the settings described in Section 4.2.2.4, “Executing a Workflow Group”.

4.2.1.3.3. Scheduling Tab

Contains settings described in Figure 110, “The Workflow Group Editor Scheduling Tab”

4.2.2. Managing a Workflow Group


Managing a workflow group includes:

• Opening a saved workflow group in the Workflow Group Editor.

• Creating a workflow group

• Removing members

• Execution: Manual or Scheduled


• Configuration

4.2.2.1. Opening a Workflow Group Configuration


To create a new workflow group configuration, click the New Configuration button in the upper left
part of the MediationZone® Desktop window, and then select Workflow Group from the menu.

To open an existing workflow group configuration, double-click on the workflow group.

The workflow group configuration opens in a tab.

4.2.2.2. Creating a Workflow Group


You create a workflow group by adding a workflow, or a workflow group, as a member to the group.

To Create a workflow group:

1. In the Available to Add pane, select a workflow or a workflow group.

Note! An invalid workflow member will not affect the validity of the workflow group.

2. Either right-click the selected item and select Add as Member or click on the upper Add button.

The member is added in the Group Members list.

Note! Batch, task, and system task workflow members can be combined in a workflow
group, but real-time workflow members can only be combined with other real-time workflow
members. However, for real-time workflows, we recommend that only one workflow is
included in each workflow group.

3. Click on the Save As button and give the new workflow group a name.

4.2.2.3. Removing a Member from a workflow group


When you remove a member from a workflow group, the member will not cease to exist:

• A workflow member might still be running according to its configuration, or as a member of a
workflow group in the system

• A workflow group member might still run as a member of another workflow group in the system

To Remove a Member from a workflow Group:

1. Right-click on the member you want to remove in the Group Members list in the Members
tab of the Workflow Group configuration.

A menu appears on your screen.

2. Select the option Remove Member.

You will be asked to confirm that you want to remove the member.

3. Click Yes if you are sure.

The member is removed from the group.


4.2.2.4. Executing a Workflow Group


You execute the workflow group either manually, from the Execution Manager - see Section 7.6,
“Execution Manager” - or, you can schedule an automatic execution - see Section 4.2.2.5.3,
“Scheduling”.

4.2.2.5. Configuring a Workflow Group


Configuring a workflow group includes:

• Planning members' execution order

• Setting the workflow group execution parameters

• Setting the workflow group scheduling parameters

4.2.2.5.1. Members Execution Order

When planning the execution order of the members in your workflow group, use the Prerequisites
column in the Group Members table. By doing so you ensure:

• A linear execution

• A certain execution order

• That every member is fully executed before the next member starts running

To Configure Members Execution Order:

1. Right-click on a member in the Group Members pane in the Members tab.

A drop-down menu opens.

2. Select the option Prerequisites.

The Prerequisites dialog box opens.

Figure 107. The Prerequisites Dialog Box

3. Select the check boxes for the members that the current member should follow.


Note! Apply Prerequisites settings for all members except for the first one in the execution
order.

4. Click OK.

See the image below for an example of how it may look.

Figure 108. Workflow Group Members Execution Prerequisites

You can rearrange the members' order of appearance in the Group Members list by using the
Up and Down buttons. When rearranging a list that is already configured with Prerequisites,
you will notice that the Prerequisites parameter is removed and a yellow warning icon appears
instead. Note that this will not affect the workflow group validity. To remove the notification
sign, either open the Prerequisites dialog box and click OK, or - to remove all the notification
signs - save the workflow group configuration and reopen it.

4.2.2.5.2. Execution

Click on the Execution tab in the workflow group configuration.


Figure 109. The Workflow Group Editor Execution Tab

Entry Description
Max Simultaneous Running Workflows Enter the maximum number of workflows you want to be
able to run concurrently.
Note!

• If you do not specify a limit, your specific work environment and equipment
will determine the maximal number of workflows that can run simultaneously.

• This value applies only to the workflow group that you are defining and will
not affect members that are workflow groups.

Startup Delay If Max Simultaneous Running Workflows is set to a value larger than 1, enter the
delay (in seconds) of the execution start for each of the workflows that may run
simultaneously.

Note!

• If you do not enter any value, a very short delay will be applied by the system,
by default.

• You can assign a Startup Delay regardless of the member's status. Once the
delay has elapsed, if the member in turn is disabled, the workflow group attempts
to execute the next member.

Continue This option activates the default behavior on member abort, which means that the
workflow group will run until all its members are fully executed and/or all are aborted.


Note! This means that groups with Real-time workflow members will continue
to run until all the members are aborted or stopped manually.

Stop Select this option to have the workflow group stop when a member aborts. A batch
workflow will finish the current batch and then stop.
Stop Immediately Select this option to have the workflow group stop immediately when a member
aborts. A batch workflow will stop even in the middle of processing a batch.
Enable Select this check box to enable the workflow group Execution Settings.

Note!

• Execution Settings that you configure here will only apply for workflow
members for which Execution Settings have not been enabled in the
configurations that they are part of.

• Workflow groups cannot run as stand-alones, and will be executed on the
platform. For further information about stand-alone, see Section 1.2, “Execution
Context”.

Distribution A workflow executes on an Execution Context (EC). Specific ECs, or groups of ECs,
may be defined by the user, or the MediationZone® system can handle it automatically.

The Distribution settings are applied to all included group members, that is, workflow
and workflow group configurations. When there are conflicting settings, the members
that are lowest in the workflow group hierarchy have precedence.

When the Distribution settings of workflow group configurations are set on the same
level in the hierarchy, they do not conflict with each other.

Note! If you select to configure the distribution using EC groups, the selected
distribution type will also be applied on the ECs within the groups.

Hint You can combine both individual ECs and EC groups in the Execution
Contexts list. The selected distribution will then be applied for all ECs stated
either individually or in groups.

The following options exist:

• Sequential - Valid only if ECs/EC groups are defined. Starts the workflow on the
first EC in the list. If this EC is not available, it will proceed with the next in line.

• Workflow Count - Starts the workflow on the EC running the fewest number of
workflows. If the Execution Contexts list contains at least one entry, only this/these
ECs/EC groups will be considered.

• Machine Load - Starts the workflow on the EC with the lowest machine load. If
the Execution Contexts list contains any entries, only this/these ECs/EC groups
will be considered. Which EC to select is based on information from the System
Statistics sub-system.


• Round Robin - Starts the workflow on the available ECs/EC groups in turn, but
not necessarily in a specific order. If EC1, EC2 and EC3 are defined, the workflow
may first attempt to start on EC2. The next time it may start on EC3 and then finally
on EC1. This order is then repeated. If an EC is not available, the workflow will be
started on any other available EC.

4.2.2.5.3. Scheduling

The cause of execution for a workflow group can either be a planned time scheme or a specific event.
You can configure the cause of execution in the Scheduling tab.

Note! Changes to a running workflow group will not apply until the group has finished running,
which means that a real-time workflow will have to be stopped manually for changes to apply.

Figure 110. The Workflow Group Editor Scheduling Tab

Entry Description
Day Plans Use this table to plan timed triggers that will execute your workflow group. Note that
you can define a list of various plans. MediationZone® will pick the plan with the
highest priority according to the section called “Day Plans Priority Rule”.

Click on the Show... button to open a calendar that displays the workflow group
execution plan, see Figure 112, “The Execution Calendar”.
Event Trigger Use this table to define an event execution trigger for the workflow group, see the
section called “ Event Triggers ”.

Day Plans Priority Rule

The Day Plans table enables you to create a list of different execution schemes of the workflow group.
You configure each Day Plan to any interval between executions.

Note! Two Day Plans should not contradict each other. An example of an invalid configuration:
Day Plan A is set to Tuesdays Off, while Day Plan B is set to Every 5 minutes between 15:00:00
and 15:05:00 on Tuesdays.

MediationZone® applies the following priority rule for picking a Day Plan out of the list:

1. Last day of month

2. Day of month (1-31)


3. Weekday (Monday-Sunday)

4. Every day
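
For example (illustration only): if one Day Plan is defined for Every day and another for Mondays,
the Monday plan is the one applied on Mondays, since Weekday has a higher priority than Every day.
Similarly, a Day of month plan overrides a Weekday plan on the date it specifies.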

To Configure a Day Plan Schedule:

Click on the Add button in the Day Plan table in the Scheduling tab.

The Add Day Plan dialog box opens.

Figure 111. The Add Day Plan Dialog Box

Entry Description
Day Select the target day. Valid options are:

• Every day

• A specific weekday

• A specific day of the month (1-31)

• The last day of the month

Day Off Select this check box to avoid execution on the day specified in the Day list.
Start At Enter a start time for the first execution.
Stop At Enter the time when execution should stop.

If these fields are left empty, the default stop time, which is 23:59, will be
applied.

Repeat Every Enter the interval between execution start time in seconds, minutes, or hours.

If this field is left empty, only one execution session will run at the specified
start time.

To View an Execution Plan:

1. Click on the Show... button beneath the Day Plan table.

The Execution Calendar opens.


Figure 112. The Execution Calendar

A green-colored cell in the calendar represents at least one scheduled execution during that time.

2. Click on a green cell.

The Hourly Execution Plan opens.

Figure 113. The Hourly Execution Plan

Event Triggers

To trigger the execution of a workflow group, you add a row to the Event Trigger table. A row can
be either a certain event, or a chain of events, that must occur in order for the workflow group execution
to start.

Note! An Event Trigger that is comprised of a chain of events will take effect only when all
the events that it includes have occurred.

The events that have occurred are stored in memory. When MediationZone® is restarted this
information is lost and none of the events on the event chain are considered to have occurred.

To Configure an Event Trigger:

1. Click on the Add button beneath the Event Trigger table.

The Add Event Chain Trigger dialog box opens.


Figure 114. The Add Event Chain Trigger Dialog Box

2. Click on the Add button.

The Add Event Selection dialog box opens.

Figure 115. The Add Event Selection Dialog Box

3. Select an Event Type from the drop-down list, see Section 5.4, “Event Fields”

4. Double-click on an entry in the Event Filter table.

The Edit Match Value dialog box opens.

5. Click on the Add button.

The Add Match Value dialog box opens.

6. If you want to filter all the events based on specific values of the selected type, enter the values in
the Match Value(s) column. Otherwise, if you leave the default value, All, all the events of the
selected event type will trigger the execution of the workflow group.

7. Close all four dialog boxes.

Note! There are no referential constraints for Event Triggers nor any way to track relations
between workflows that are triggered by one another. For example: workflow A is defined to
be activated when workflow B is activated. Workflow B might be deleted without any warnings,
leaving Workflow A, still a valid workflow, without a trigger. This might happen since value
matching is based on a regular expression of the workflow name, and not on a precise link
match.


4.2.3. Workflow Group States


When a workflow group is executed and stopped depends on its configuration, as well as on events
that occur while it is running. To understand how a workflow group operates, see the state diagram
below and the detailed description of the different states that follows.

Figure 116. The Workflow Group State Diagram

State Description
Aborted The default behaviour is that a workflow group will not assume the Aborted state until
all of its members are back to Idle. When one member is in the Aborted state, the
workflow group will continue until all the other members in the workflow group have
finished execution. Then the workflow group enters the Aborted state.

Note! You can change the default behaviour for when a member aborts by using
the Behaviour when member abort settings in the Execution tab, see Sec-
tion 4.2.2.5.2, “Execution”.

When you stop a workflow group, it will first assume the Stopping state and take
care of all transactions. Only then will the workflow group state change to Idle.

Hold A workflow group that is in the Idle state and is being imported either by the mzsh
systemimport r | sr | sir | wr or by the System Importer configured to Hold Exe-
cution, enters the Hold state until the import activity is finished. The workflow group
then resumes its Idle state.
Idle The workflow group configuration is valid, and none of its members is currently being
executed from within the workflow group.
Invalid There is an error in the workflow group configuration.

The workflow group cannot be executed in the Invalid state.


Running The workflow group is running, controlling the execution of its members according to
the configuration settings.
Stopping A manual stop of the workflow group, or of the parent workflow group, makes the
workflow group enter the Stopping state. The workflow group remains in the Stopping
state while all the members are finishing their data transactions. Then the workflow
group will go into either the Idle or the Aborted state.
Suppressed Workflow groups that are in the Running state while configurations are being imported
by the mzsh systemimport r | sr | sir | wr command, or by the System Importer
configured to Hold Execution, enter the Suppressed state. In this state any scheduled
members are prevented from being started. The workflow group remains in this state
until the import activity is finished. Then, if the workflow members are still running,
the real-time workflow group returns to the Running state. Batch workflow groups
remain in the Suppressed state until their members complete their execution. Then, the
workflow group state becomes Idle.

Note! If the workflow group is in the Suppressed state, and you stop all the
workflow group members, the workflow group will enter the Stopping state. If
this happens while an import process is going on, the workflow group will move
from the Stopping state to Idle and then to Hold.

4.2.4. Suspend Execution


This section includes information about the configuration option Suspend Execution.

The Suspend Execution configuration enables you to apply a restriction that prevents specific workflows
and/or workflow groups from running during specific periods of time.

Note! Grouping workflows is possible in Suspend Execution for the sole purpose of suspending
them during a defined period of time. These groups are not workflow group configurations.

4.2.5. Suspend Execution Editor


To open the editor, click the New Configuration button in the upper left part of the MediationZone®
Desktop window, and then select Suspend Execution from the menu.


Figure 117. The Suspend Execution Editor

4.2.5.1. Suspend Execution Menu


The main menu changes depending on which configuration type has been opened in the currently
active tab. There is a set of standard menu items that are visible for all configurations and these are
described in Section 3.1.1, “Configuration Menus”.

The menu items that are specific for Suspend Execution configurations are described in the following
sections:

4.2.5.1.1. View

Menu option Description


Show Member Tool Bar The Member toolbar is located beneath the Members table, and consists
of the Remove button.

Make sure this option is selected if you want to have the button visible
in the view. To remove the button, clear the check box for this option.
Configuration Filter Enables you to include or exclude the following from the Available to
Add list:

• Workflow Groups

• Workflows

• Batch and Task workflows

• Realtime workflows

4.2.5.2. Suspend Execution Buttons


The toolbar changes depending on which configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all configurations and these buttons are described
in Section 3.1.2, “Configuration Buttons”.


There are no additional buttons for Suspend Execution.

4.2.5.3. Suspend Execution Tabs


Suspend Execution includes two tabs:

• Members Tab

• Scheduling Tab

4.2.5.3.1. Members Tab

On the Members tab you select the workflows whose execution you want to suspend during specific
periods of time.

Figure 118. The Suspend Execution Editor - Members Tab

Item Description
Available to add Upper pane: A tree view of the workflows and workflow groups that are saved
within their respective configurations and are available for you to apply execution
suspension to.

Lower pane: A list of workflows that are included in the workflow configuration
that you select in the upper pane.

A workflow group can be a member of another workflow group.

Members A list of the current workflow group members.

Button Description
Click to add a member to the list.

4.2.5.3.2. Scheduling Tab

From the Scheduling tab you suspend and enable the activation of workflows that you select on the
Members tab, if they are executed during the suspension interval.


Figure 119. The Suspend Execution Editor - Scheduling Tab

The Scheduling tab contains the following table columns:

Time When you click the Add Row button that is located at the bottom of the Scheduling
tab, a new row appears in the Scheduling tab table. This row includes the current time
stamp. You change the time stamp to a future date by first double-clicking the row and
then clicking the button that appears in the selected row. Then, from the Time Chooser
dialog box, you select a time and a date.

Note! As soon as a specified date has passed, according to the Desktop (client)
clock, the text in that row becomes italicized.

Enable Double-click the table cell to select it, and then check to enable the activation of the
workflow at the specified time stamp.
Disable Double-click the table cell to select it, and then check to suspend the workflow at the
specified time stamp.

4.2.6. Execution Suspension


You configure execution suspension of workflows, and/or their enabled activation, from the Suspend
Execution Editor.

4.2.6.1. To Suspend a Workflow


1. On the Members tab, from the Available to Add list, select the workflow, or the workflow group,
that you want to suspend during a certain period of time.

2. Click to select the row; the button appears.

3. Click the button to move each selection into the Members list on the right-hand side of the tab.

4. On the Scheduling tab, click the Add Row button; the current time stamp is added to the table.

5. At this point you can either suspend the workflow immediately by checking Disable, or you can
edit the time and date to have the suspension start later. To do that, select the relevant row and click
the ... button; the Date Chooser dialog box opens.


6. Select the year, month, day, hour, and minutes and click OK; the row is updated with a later time
stamp.

7. Check Enable, to remove the execution suspension, or Disable, to suspend a workflow at the specified
time.

8. Save the Suspend Execution configuration.

Note! The MediationZone® platform should be running when both the suspend and the enable
activation dates occur, for these actions to be effective.


5. Event Notifications
An Event Notification configuration offers the possibility to route information from events generated
in the system, to various targets. These targets include:

• Database

• Log file

• e-Mail

• SNMP trap receivers

• System Log

An event is an incident of importance that occurs in the MediationZone® system. There are several
different event types that all contain specific data about the particular event. Besides being logged,
events may be split up and selected parts may be embedded in user defined strings. For instance, consider
an event originating from a user, updating an existing Notifier:

userName: mzadmin3, userAction: Notifier AnumberEvents updated.

This is the default event message string for User Events. However, it is also possible to select parts of
the information, or other information residing inside the event. Each type of event contains a predefined
set of fields. For instance, the event message previously exemplified, contains the userName and
userAction fields which may be used to customize event messages to suit the target to which they will
be logged:

Figure 120. Events Can Be Customized to Suit Any Target

Note! The Category field in the above picture is left empty intentionally, since it does not have
a value for this specific event. A category is user defined and is entered in the Event Categories
dialog. It is a string which will route messages sent with the dispatchMessage APL function.

The event types form a hierarchy, where each event type adds its own fields and inherits all fields from
its ancestors.

The event hierarchy is structured as follows:

• Base

• Alarm

• Code Manager

• Group


• System

• User

• Workflow

• Agent

• Agent Failure

• Agent Message

• User Agent Message

• Agent State

• ECS Insert

• Debug

• Dynamic Update

• Workflow State

• External Reference

• <User Defined>

Each event type and its fields are described in Section 5.4, “Event Fields”.

5.1. Event Notification Menus


The contents of the menus in the menu bar may change depending on which configuration type has
been opened in the currently displayed tab. The Event Notification configuration uses the standard
menu items that are visible for all configurations, and these are described in Section 3.1.1, “Configuration
Menus”.

The menu items that are specific for Event Notification configurations are described in the following
sections:

5.1.1. The Edit Menu


Item Description
Event Categories... To define an Event Category, to send any kind of information to a Column.
Please refer to Section 5.6, “Event Category” for further information.
External References To Enable External References in an Agent Profile Field. Please refer to
Section 9.5.3, “Enabling External References in an Agent Profile Field” for
further information.

5.2. Event Notification Buttons


The contents of the button panel may change depending on which configuration type has been
opened in the currently displayed tab. The Event Notification configuration uses the standard buttons
that are visible for all configurations, and these are described in Section 3.1.2, “Configuration Buttons”.


5.3. Configuration
A notifier is a selected target, receiving event data when one or several selected event types are generated
in the system. In addition, filters may be applied for each selected event type. Notifiers are configured
in the Event Notification Editor.

To create a new Event Notification configuration, click the New Configuration button in the upper
left part of the MediationZone® Desktop window, and then select Event Notification from the menu.

Figure 121. An Event Notification Configuration

Event Notifier Enabled Check to enable event notification. Note, undirected events are not saved
by the system, and can therefore not be retrieved.
Notifier Setup A Notifier is the target where event messages, configured in the Event
Setup, are sent, for instance to a database table, a log file, or the
MediationZone® System Log.

The overall appearance of the message string is also defined in this tab.
Event Setup In the Event Setup tab, events to catch are defined. If necessary, the message
string defined in the Notifier Setup is also modified.

5.3.1. The Notifier Setup Tab


A notifier is defined as the target where event messages are sent. A notifier may output information
to any type of database table, text file, SNMP trap, mail or the standard MediationZone® System Log.


Figure 122. Event Notification Editor, Notifier Setup Tab

Notification Type See detailed description in Section 5.3.1.1, “Notification Type
Configurations”.
Duplicate Suppression (sec) Enter the number of seconds during which an identical event is
suppressed from logging. Default value is 0.
Base Configuration Enter the event target definition parameters. For further information
see Section 5.3.1.1, “Notification Type Configurations”.
Target Field Configuration See Section 5.3.1.2, “Target Field Configuration”.

5.3.1.1. Notification Type Configurations


The event notification types that you can configure in MediationZone® are:

• Database

• Log File

• Send Mail

• Send SNMP Trap

• Send SNMP Trap, Alarm

• System Log

5.3.1.1.1. Database

Event fields may be inserted into database tables using either plain SQL statements or calls to stored
procedures.


Figure 123. Event Notification Editor - Notification Type Database

Database The name of the database in which the table resides. Databases are defined in the
Database profile configuration.
SQL Statement Type in any SQL statement, using '?' for variables which are to be mapped against
event fields in the Event Setup tab. Note that trailing semicolons are not used. In
case of running several statements, they must be embedded in a block.
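
As an illustration only, an SQL Statement that inserts three event fields could look as follows; the
table and column names are hypothetical, and each '?' is mapped to an event field in the Event Setup
tab:

INSERT INTO event_log (workflow_name, severity, message) VALUES (?, ?, ?)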

5.3.1.1.2. Log File

Messages may be routed to ordinary text files on the local file system.

Figure 124. Event Notification Editor - Notification Type Log File

Directory The path to the directory where the file, to append to, resides.
Filename The name of the file. In case the file does not exist, it will be created when the first
message for the specific event map arrives. New messages are appended.
Size The maximum size of the file. When this parameter is exceeded, the existing file is
renamed and a new one is created upon the arrival of the next event.

The old file will receive an extension to the file name, according to
<date_time_milliseconds_timezone>.
Time The maximum lifetime of a file before it is rotated. When this parameter is exceeded,
the existing file is renamed and a new one is created upon the arrival of the next event.

If the time is set to rotate every:

• hour, rotation is made at the first full hour shift, that is, at xx:59.

• 2 hour, rotation is made in predefined two-hour intervals (0, 2, 4, ... 22) when turning to the
next full hour, for example after 01:59.

• 3 hour, rotation is made in predefined three-hour intervals (0, 3, 6, ... 21) when turning to the
next full hour, for example after 02:59.

• 4 hour, rotation is made in predefined four-hour intervals (0, 4, 8, ... 20) when turning to the
next full hour, for example after 03:59.

• 6 hour, rotation is made in predefined six-hour intervals (0, 6, 12, 18) when turning to the
next full hour, for example after 05:59.

• 8 hour, rotation is made in predefined eight-hour intervals (0, 8, 16) when turning to the
next full hour, for example after 07:59.

• 12 hour, rotation is made at noon and at midnight.

• day, rotation is made at midnight.

• week, rotation is made at midnight on the last day of the week.

• month, rotation is made at midnight on the last day of the month.

The old file will receive an extension to the file name, according to
<date_time_milliseconds_timezone>.

If both Size and Time are set, both behaviors apply.


Separator Indicates how logged events are separated from one another. Valid options are:

• Linefeed

• CR + LF (Carriage return + Linefeed)

• Comma

• Colon

• (None)

Log Line Text to go into the log file.

For further information, see Section 5.3.1.2, “Target Field Configuration”.

Note! Do not configure two different Event Notifiers to log information to the same file. Messages
may be lost since only one notifier at a time can write to a file. Define one Event Notifier with
several Event Setups instead.

5.3.1.1.3. Send Mail

It is also possible to send mails to one or several recipients when the specified events occur. Make
sure the correct parameters, mz.mailserver and mz.notifier.mailfrom, have been configured
in $MZ_HOME/etc/platform.xml.
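
As an illustration, the property entries in platform.xml could look as follows; the host name and
mail address below are placeholders:

<property name="mz.mailserver" value="smtp.example.com"/>
<property name="mz.notifier.mailfrom" value="mz-events@example.com"/>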


Figure 125. Event Notification Editor - Notification Type Send Mail

Recipient Mail address to one or several recipients. Use commas to separate multiple addresses. Select
Add to select mail addresses configured for available users.

For further information about how to obtain the text, see Section 5.3.1.2, “Target Field
Configuration”.
Subject The subject/heading of the mail. If Event Contents is selected, newlines will be replaced
with spaces to make the subject readable. If the string exceeds 100 characters, it is
truncated.

For further information about how to obtain the text, see Section 5.3.1.2, “Target Field
Configuration”.
Message The body of the mail message.

For further information about how to obtain the text, see Section 5.3.1.2, “Target Field
Configuration”.

5.3.1.1.4. Send SNMP Trap

Events may be sent in form of SNMP traps to systems configured to receive such information. For the
MIB definition, see the $MZ_HOME/etc/mz_trap_mib.txt file.

Note! A new SNMP trap format is now available. For backward compatibility purposes, the
previous invalid format will still be used by default. However, if you want to use the new format
you can add the property snmp.trap.format.b in platform.xml, and set it to true
in order to activate the new values.

The value of the agentAddress field will be taken from the parameter pico.rcp.server.host.
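
For example, activating the new trap format described in the note above would mean adding a property
entry to platform.xml along the following lines:

<property name="snmp.trap.format.b" value="true"/>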


Figure 126. Event Notification Editor - Notification Type Send SNMP Trap

IP Address The IP address of the target host.


Port The port on the target host defined for the SNMP communication.
Community The community name string used for the authentication process.
Version A list containing the supported SNMP versions.
User Message A string which will be sent out as SNMP traps.

For further information about how to obtain the text, see Section 5.3.1.2, “Target
Field Configuration”.

5.3.1.1.5. Send SNMP Trap, Alarm

This notification type is similar to Section 5.3.1.1.4, “Send SNMP Trap ”, with one difference: It is
specifically designed to work for Alarm events.

Figure 127. Event Notification Editor - Notification Type Send SNMP Trap Alarm

Target Field Configuration See Figure 129, “Target Field Configuration - Log Line”.

5.3.1.1.6. System Log

Selecting this target will route messages, produced by the selected events, to the standard
MediationZone® System Log. The Contents field from each event will be used as the message in the log.


Figure 128. Event Notification Editor - Notification Type System Log

Note! Do not route frequent events to the System Log. Purging a large log might turn into a
performance issue. If you still do, keep the log size at a reasonable level by applying the
System Task System Log Cleaner. For further details, see Section 4.1.1.4.9, “System Log
Cleaner”.

5.3.1.2. Target Field Configuration


Some of the notifier types have one or many parameters that may be dynamically populated by data
from an event. These parameters are configured in the Target Field Configuration panel.

Depending on the parameter type, there will be one or several population types available in the list
next to it.

5.3.1.2.1. Manual

Selecting Manual allows the user to hard code a value, and thus gives no possibility to select any
dynamic values to be embedded in the message. The value entered will be assigned to the parameter
exactly as typed.

Figure 129. Target Field Configuration - Log Line

5.3.1.2.2. Event Field

Selecting Event Field allows the user to assign the value of one specific event field to the parameter.
For further information about fields valid for selections, see Section 5.4, “Event Fields”.

5.3.1.2.3. Event Contents

Selecting Event Contents assigns the value of each event's Contents field to the parameter. All
event types have a suitable event content text. For instance, referring to the example in Figure 120,
“Events Can Be Customized to Suit Any Target”, the Event Contents string will be:

Username: mzadmin3, Action: Notifier AnumberEvents updated.

Another example: the following string is reported for a User Defined Event:

Workflow name: <name>, Agent name: <name>, Message: <string>

The Message string originates from the dispatchMessage function. Note that nothing will be
logged unless dispatchMessage is used. In this case, the Field Maps in the Event Setup
will also be disabled.

Note! The same result is achieved when selecting Event Field as Log Line, and then selecting
Contents as mapping (the Event Setup tab).


5.3.1.2.4. Formatted

Selecting Formatted allows the user to enter text combined with variable names, which are assigned
event field values in the Event Setup tab.

Figure 130. Each variable in Notifier Setup will have its own Notifier Field in Event Setup - Send
Mail

For each variable entered in a field with the Formatted option selected, a notifier field will be added
in the Event Setup tab where you can then assign event field values.

Variable names must be preceded by a $, start with a letter, and consist of a sequence of letters,
digits, or underscores.

Figure 131. Target Field Configuration - Log Line

The settings in the screenshot above will be interpreted as containing the variables 'NO' and 'ANUM'.
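
As an illustration of this naming rule only, and not of the product's own parser, the following hypothetical Java sketch scans a formatted string for such variables. The formatted text is invented; the variable names NO and ANUM come from the screenshot example above.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VariableScanDemo {

    // A variable is a '$' followed by a letter and then a sequence of
    // letters, digits, or underscores, as stated above.
    private static final Pattern VARIABLE =
            Pattern.compile("\\$([A-Za-z][A-Za-z0-9_]*)");

    public static void main(String[] args) {
        // Hypothetical formatted text containing two variables.
        String formatted = "Number of UDRs: $NO for A-number $ANUM";

        Matcher m = VARIABLE.matcher(formatted);
        while (m.find()) {
            System.out.println(m.group(1));   // prints NO and ANUM
        }
    }
}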

5.3.2. The Event Setup Tab


In the Event Setup tab, parameters that are populated by using the type Event Field, Formatted (with
variables) or SQL can be assigned their values from selected event fields. This is also where you
configure the event types to catch.


Figure 132. Event Notification Editor, Event Setup Tab

Filter - The Filter table enables you to configure the event types to catch. For each event
type, a filter may be defined to allow, for instance, a specific workflow and two specific event
severities to pass.

Besides filtering on existing values (for instance the workflow name), it is possible to filter using
regular expressions and hard coded strings. Regular expressions according to Java syntax apply;
see the regex sketch after this table. For further information, see:

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Event Field Name - The contents of this column vary depending on the selected event type. For
further information about the different events, see Section 5.4, “Event Fields”.

Note that only fields of string type are visible.

Match Value(s) - The filtering criteria that define what events to catch. Double-clicking a cell
opens the Edit Match Value dialog box, from which you select a value.

However, these values are only suggestions. You can also use hard coded strings
and regular expressions.


Example 19.

For example, entering:

.*idle.*

will match any single-line content containing "idle".

Some fields also contain several lines, so entering:

(?m).*idle.*

will match any multiline content containing "idle".

The default value for each of the components is All.

Note! Some of the Event Fields let you select from four Match Value types:
Information, Warning, Error, or Disaster. For the rest of the Event Fields you
use a string as Match Value. Make sure you enter the exact string format of the
relevant Match Value. For example, the Event Field timeStamp can be matched
against the string format yyyy-mm-dd.

Field Map - Maps variables against event fields. The Field Map table exists only if any of
the parameters for the selected notifier type is set to Formatted, Event Field or SQL.

Notifier Field - States the Notifier parameter available in the Notifier Setup tab. If a specific
parameter has more than one variable, it will claim one line per variable.

Variable - The name of the variable, as entered in the Notifier Setup tab. If the parameter
type is Event Field, this field will be empty.

Event Field - Double-clicking a cell displays a list from which you select the event field to
obtain values from.

Add Event... - Enables you to add an event that you generate by a call from APL or Ultra
code in the workflow configuration. Click Add Event to configure your event;
the new event type is added as a separate tab in the Event Setup.

Remove Event... - Removes the currently selected event type tab in the Event Setup dialog.

Refresh Field Map - Updates the Field Map table. Required if parameter population types or formatting
fields have been modified in the Notifier Setup tab.
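
The following Java sketch is provided for illustration only; it is not part of MediationZone® and only shows how the Java pattern syntax referenced above behaves against invented sample field values. Whether the product applies a full match or a substring search to a field is not shown here.

import java.util.regex.Pattern;

public class MatchValueDemo {

    public static void main(String[] args) {
        // Hypothetical event field values, for illustration only.
        String state = "Workflow MyWorkflow is idle";
        String timeStamp = "2014-06-15 09:23:11";

        // ".*idle.*" matches a whole value that contains "idle".
        System.out.println(Pattern.matches(".*idle.*", state));              // true

        // A prefix pattern such as "2014-06.*" catches all values from June 2014.
        System.out.println(Pattern.matches("2014-06.*", timeStamp));         // true

        // A full-value match fails when the pattern only covers part of the
        // value, which is why the surrounding ".*" is used above. A substring
        // search with find() succeeds without it.
        System.out.println(Pattern.matches("idle", state));                  // false
        System.out.println(Pattern.compile("idle").matcher(state).find());   // true
    }
}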

5.4. Event Fields


An event is an object containing information related to an occurrence in MediationZone®. For example,
there are workflow state events that are emitted each time the state of a workflow is changed.

The Event Notification Editor subscribes to these events and routes them to notifiers, for example, log
files or a database.

An Event Type is comprised of a set of fields containing the original event message, a set of standard
workflow related information, and event specific fields that are the parameters of the original event
message.


Figure 133. An Event Message Contains Information Ordered in Fields

All Event types in the MediationZone® system inherit fields from the Base Event type. Workflow
related events inherit fields related to the workflow event as well, such as agentName. In addition
to these, User Defined Events will receive any fields as defined by the user.

There are two types (hierarchies) of Events:

• User Defined Events.

• Events inherited from a Base Event (all other events), with additional information added.

The user-defined event must be configured in the Ultra Format Editor. In addition to the fields entered
by the user, MediationZone® will automatically add the basic fields. User Defined Events may only be
dispatched from an agent utilizing APL, that is, Analysis or Aggregation.

Figure 134. User Defined Events are Sent from Analysis or Aggregation Agents

Fields added by the user must be populated manually by using APL commands, while the basic fields
are populated automatically. Of the basic fields, only category and severity may be assigned values;
the other basic fields are read-only, so it is not possible to assign values to them.

5.5. Event Types


This section describes the event types and their fields in MediationZone® .


5.5.1. Base Event


A Base Event is the parent of all events, except for the User Defined Events. Since it is the parent,
all other events inherit the fields of the Base Event.

Note! Subscribing to Base Events is not recommended since it will match every event produced
in the system, which may generate a high volume of events.

A Base Event contains the following information:

• category - Not utilized for Base Events.

• contents - A hard coded string containing event specific information; the original event message.
For instance, for the ECS Insert Event, this string will contain the type of data sent to ECS, the
workflow name, the agent name, and the UDR count. For information about the contents field, see
the descriptions of the specific event types in this section.

• eventName - The name of the Event, that is, any of the types described in this section, for example,
Base Event, Code Manager Event or Alarm Event.

• origin - The IP address of one of the following:

• Execution Context - On which the workflow that issues the event is running.

• Platform - If this is not a workflow event.

• Desktop - If this is a User Event.

• receiveTimeStamp - The date and time for when an event is inserted in the platform database.
This is the time used in, for example, the System Log.

• severity - The severity of the event. May be any of: Information, Warning, Error or Disaster.
The default value is Information.

• timeStamp - The date and time taken from the host where the event is issued.

5.5.2. Alarm Event


This event occurs whenever an alarm starts, or once an alarm is closed.

The following fields are included:

• alarmConditionDescription - The contents of the Condition Criteria column in the
condition list table. See Figure 35, “The Alarm Detection”.

• alarmDescription - The contents of the Description text box in the Alarm Detection Configuration.
See Figure 35, “The Alarm Detection”.

• alarmDetectionName - The name by which the alarm detection is saved.

• alarmId - The unique number that the system uses to identify saved configurations.

• alarmModifier - The user that closes the alarm.

• alarmModifierComment - The annotation that the user enters when closing the alarm.

• alarmSeverity - See Severity in Figure 35, “The Alarm Detection”.

• alarmSeverityForSNMP - A numeric representation of alarmSeverity.

• alarmState - Open or Closed.

• alarmSupervisedArea - A configuration that is supervised by the alarm that occurred. Note:
Marked with a red check symbol on the Web Interface.

• alarmSupervisedObject - The specific object within the alarmSupervisedArea that is
guarded by the alarm that occurred.

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

5.5.3. Code Manager Event


These messages are logged when software is changed, removed or added; when pico instances are
added/removed; or when the code server reports an error. The following fields are included:

• cmAction - States the part of the system that is affected.

• cmActionType - States if code was changed, removed or added.

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents - Code: <message>

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

5.5.4. Couchbase Monitor Event


The Couchbase Monitor event is triggered when there is a change in the availability of a monitored
Couchbase cluster, or in the ability to monitor it. These changes can be caused by, for example, failures
in the Couchbase cluster or in a MediationZone® configuration.

For information about how to configure the Couchbase Monitor Service, see Section 4.1.8.5.1,
“Couchbase Monitor Service”.


5.5.4.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a Couchbase Monitor Event occurs.
Double-click on a field to open the Match Values dialog, where you can click the Add button to
add the values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard coded string or a regular expression.

The following fields are available for filtering Couchbase Monitor events in the Event Setup tab:

Couchbase Monitor event specific fields

• clusterId - This field contains the id of the monitored Couchbase cluster. In some situations, e.g.
when the event is caused by an incorrectly configured Couchbase profile, the id may be unavailable.

• clusterNode - When the triggered event is related to a specific Couchbase node, this field contains
the name of the node.

• eventMessage - This field contains text that describes the event.

• eventType - This field contains the type of event that was triggered. For information about the
available types see Section 5.5.4.2, “Couchbase Event Types”.

• profileKey - When the triggered event is related to a specific configuration, this field contains its
unique configuration Key. You can right-click on a configuration in the Configuration Navigator
pane to view its Key.

• profileName - When the triggered event is related to a specific configuration, this field contains its
name.

Fields inherited from the Base event

The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:

• category - If you have configured any Event Categories, you can select to only generate notific-
ations for Couchbase Monitor events with the selected categories. See Section 5.6, “Event Category”
for further information about Event Categories.

• contents - This field contains a string with event specific information. If you want to use this
field for filtering you can enter a part of the contents as a hard coded string.

• eventName - This field can be used to specify which event types you want to generate notifications
for. This may be useful if the selected event type is a parent to other event types. However, since
the Couchbase Monitor event is not a parent to any other event, this field will typically not be used
for this event.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to enter
a regular expression, for example, "2014-06.*" for catching all Couchbase Monitor events from 1st
of June, 2014, to 30th of June, 2014.

• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all Couchbase Monitor events from 9:00 to 9:59 on
the 15th of June, 2014.

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.

5.5.4.2. Couchbase Event Types


This section describes the event types that are available for filtering Couchbase Monitor events.

• COUCHBASE_NOT_AVAILABLE - Couchbase is unavailable. Communication with an actively
monitored Couchbase cluster has failed. All nodes listed in the Couchbase profile have stopped
responding. More detail of the cause is included in the eventMessage. The communication is retried
after a short pause and when the error condition is corrected, the CLUSTER_MONITORED event is sent.

• CLUSTER_MONITORED - Cluster is monitored. The Couchbase cluster is available and monitoring
has started. This event is typically sent once but could be sent multiple times if connectivity to the
cluster has been re-established after connection loss, or if the Couchbase Monitoring Service has
been moved to another Execution Context during monitoring.

• CLUSTER_MONITORING_REESTABLISHED - Monitoring of cluster was re-established. This event
occurs when monitoring is resumed after a CONFIGURATION_LOST or a MONITOR_LOST event,
i.e. when a connection to ZooKeeper is recovered.

• CLUSTER_NOT_MONITORED - Cluster is unavailable and therefore currently not monitored. The
Couchbase cluster is unavailable. None of the Couchbase nodes listed in the Couchbase profile
respond to requests. This could happen for instance if connectivity from the Couchbase Monitoring
Service to the Couchbase cluster is lost, or if the wrong addresses to the nodes have been entered in
the Couchbase profile. The connection will be retried periodically; if it is established, the
CLUSTER_MONITORED event is sent.

• CLUSTER_NO_LONGER_MONITORED - Cluster is no longer monitored. A Couchbase profile with
active monitoring has been updated and saved with monitoring disabled. This event occurs if there
are no other profiles configured to actively monitor the same cluster.

• CONFIGURATION_LOST - Lost or suspended connection for monitoring configurations. The
connection to ZooKeeper has been lost. Manual investigation is required to find the cause of the
connection loss. One cause could be that a majority of ZooKeeper nodes running in the configured
Execution Contexts have shut down and you may need to restart them.

• MONITORING_FAILED - Monitoring resulted in a failure. An unexpected error was detected during
monitoring. The details of the error are included in the message. One cause could be an incorrectly
formatted or unexpected response from the Couchbase cluster. Monitoring continues automatically
after a short pause.

• MONITORING_REFRESH_FAILED - Failed to refresh configurations to be monitored. An exception
was thrown when listing all the Couchbase profiles defined in MediationZone® and comparing them
to information stored in ZooKeeper. One cause for this event could be communication problems
with ZooKeeper. The operation is retried automatically after a short pause. This event does not imply
that there is a problem with the actual monitoring.

• MONITOR_ADD_FAILED - Failed to add configuration to be monitored. An unexpected error occurred
when adding a Couchbase profile with active monitoring to the list of actively monitored Couchbase
clusters. One cause for the event is communication problems with ZooKeeper. The operation is
retried automatically after a short pause.

• MONITOR_LOST - Lost or suspended connection for monitoring cluster. The connection to ZooKeeper
has been lost and the monitoring will be stopped until the connection is re-established. Manual
investigation is required to find the cause of the connection loss. One cause could be that a majority
of ZooKeeper nodes running in the configured Execution Contexts have shut down and you may
need to restart them.

• MONITOR_REMOVE_FAILED - Failed to remove monitored configuration. A Couchbase profile has
been removed but the monitor associated with the profile might not have been removed. If this event
occurs when updating or removing a Couchbase profile with active monitoring, the state of the
monitoring is unknown and therefore monitoring may or may not have been stopped.

• NODE_BACK_FROM_UNHEALTHY_STATE - Node is back from unhealthy state, rebalance may be
required. A Couchbase node that was reported by the cluster as unhealthy has become healthy again.
If the node was failed over, you may need to rebalance the cluster.

• NODE_FAILED_OVER - Node failed over. Failover of a Couchbase node was initiated by the
Couchbase Monitoring Service. The failover command was issued without errors. Manual investigation
is required to find the cause of the failover and to bring the Couchbase cluster back to a healthy state.
You may also need to rebalance the cluster.

• NODE_FAILOVER_FAILED - Failed to fail over node. Failover of a Couchbase node was initiated
by the Couchbase Monitoring Service. The failover command responded with an error. The failed
Couchbase node may or may not have been failed over by the cluster. The operation will be retried
after a short pause if the node is not reported by the cluster as failed over and if it remains in an
unhealthy state.

• NODE_FAILOVER_FAILED_REBALANCE - Failed to fail over node, rebalance is in progress. When
a rebalance operation is in progress in the Couchbase cluster, no failovers will be attempted. This
event occurs when failover of a Couchbase node is initiated by the Couchbase Monitoring Service
and a rebalance operation is in progress. The Couchbase Monitor Service will wait until the rebalance
is completed before continuing the monitoring.

• NO_REPLICAS - Number of replicas is zero, fail over will not be possible. This event occurs when
a Couchbase cluster is being monitored but the number of replicas is set to zero in the Couchbase
profile. The monitor will still monitor the cluster but no failovers are performed even if the monitor
detects node failures. Setting the number of replicas to zero in the Couchbase profile therefore allows
the monitor to do a “dry run”, reporting only TOO_MANY_FAILED when node failures are detected.

• TOO_MANY_FAILED - More nodes have failed than there are replicas, fail over is not possible. This
event occurs when failover of a Couchbase node is initiated by the Couchbase Monitoring Service
but the number of nodes failed over exceeds the number of replicas specified in the Couchbase
profile. A manual investigation is required to find the cause of the failing cluster. You may also need
to rebalance the cluster.

• UNMANAGED_FAILED - Failed to add unmanaged cluster. An exception was thrown when processing
unmanaged clusters. One cause for this event is communication problems with ZooKeeper. The
operation will be retried automatically after a short pause. An unmanaged cluster is a cluster that has
monitoring set to active in the Couchbase profile but that does not respond to management requests.

• UNMANAGED_LIST_FAILED - Failed to list unmanaged clusters. An exception was thrown when
listing all unmanaged clusters from ZooKeeper. The cause for this error is communication problems
with ZooKeeper. The operation is retried automatically after a short pause. An unmanaged cluster
is a cluster that has monitoring set to active in the Couchbase profile but that does not respond to
requests.


5.5.4.3. Examples Couchbase Monitor Event Configuration

Example 20. Couchbase Monitor event notification saved in a Log File

Figure 135.

This configuration will give you the following notification setup:

• When monitoring of a Couchbase Cluster fails, a notification will be generated.

• When this notification is generated, a new line with information will be logged in the
couchbase_monitor.txt file located in the /home/user/couchbase_monitor folder, containing
the following data:

• The timestamp for when the event was triggered.

• The content string, i.e. Couchbase Monitor Event.

• The name of the cluster.

• The event message.


Example 21. Couchbase Monitor Event notification saved in a database

Figure 136.

This configuration will give you the following notification setup:

• When monitoring of a Couchbase Cluster fails, a notification will be generated.

• When this notification is generated, an entry will be added in the cbmonitor table in the
database configured in MyDatabase profile with the following data:

• The event message will be inserted in the message column in the database table.

• The timestamp from the EC will be inserted in the time column in the database table.

5.5.5. Diameter Dynamic Event


The Diameter Dynamic event is triggered by DNS lookups for Diameter peers. If dynamic peer discovery
is enabled, these DNS lookups are performed when:

• A Diameter workflow is started.

• The routing table of a Diameter_Stack agent is dynamically updated.

• The TTL (time to live) of a cached DNS record expires.

For information about how to enable dynamic peer discovery, see Section 13.2.3.2.1.3, “Realm Routing
Table”.

5.5.5.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a Diameter dynamic event occurs.
Double-click on a field to open the Match Values dialog, where you can click the Add button to
add the values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard coded string or a regular expression.

The following fields are available for filtering Diameter dynamic events in the Event Setup tab:

Diameter dynamic event specific fields

• dynamicPeers - This field contains a comma separated list of dynamically discovered peers in
a realm and their settings; see the parsing sketch after this list.

The field is formatted as follows:

[<hostname>, <port>, <protocol>],...

• realmName - This field contains the name of the realm for which the event is generated.
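
As a parsing illustration only, and assuming the literal bracketed format shown for dynamicPeers above, the following hypothetical Java sketch splits such a value into host, port, and protocol. The peer names and ports are invented.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DynamicPeersDemo {

    // One bracketed entry of the assumed form [<hostname>, <port>, <protocol>].
    private static final Pattern PEER =
            Pattern.compile("\\[([^,\\]]+),\\s*(\\d+),\\s*([^\\]]+)\\]");

    public static void main(String[] args) {
        // Hypothetical dynamicPeers value, assuming the format above.
        String dynamicPeers = "[peer1.example.com, 3868, tcp],[peer2.example.com, 3869, sctp]";

        Matcher m = PEER.matcher(dynamicPeers);
        while (m.find()) {
            System.out.println("host=" + m.group(1)
                    + " port=" + m.group(2)
                    + " protocol=" + m.group(3));
        }
    }
}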

Fields inherited from the Base event

The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:

• category - If you have configured any Event Categories, you can select to only generate notific-
ations for Diameter dynamic events with the selected categories. See Section 5.6, “Event Category”
for further information about Event Categories.

• contents - This field contains a string with event specific information. If you want to use this
field for filtering you can enter a part of the contents as a hard coded string.

• eventName - This field can be used to specify which event types you want to generate notifications
for. This may be useful if the selected event type is a parent to other event types. However, since
the Diameter dynamic event is not a parent to any other event, this field will typically not be used
for this event.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to enter
a regular expression, for example, "2014-06.*" for catching all Diameter dynamic events from 1st of
June, 2014, to 30th of June, 2014.

• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all Diameter dynamic events from 9:00 to 9:59 on the
15th of June, 2014.

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.


5.5.5.2. Examples Diameter Dynamic Event Configuration

Example 22. Diameter Dynamic event notification saved in a Log File

Figure 137.

This configuration will give you the following notification setup:

• When dynamic peer discovery is enabled and the peers of a realm are looked up in DNS, a
notification will be generated.

• When this notification is generated, a new line with information will be logged in the
diameter_dynamic_event.txt file located in the /home/user/diameter folder, containing the following
data:

• The timestamp for when the event was triggered.

• The realm name.

• A list of peers in the realm.


Example 23. Diameter Dynamic event notification saved in a database

Figure 138.

This configuration will give you the following notification setup:

• When dynamic peer discovery is enabled and the peers of a realm are looked up in DNS, a
notification will be generated.

• When this notification is generated, an entry will be added in the diameterdynamic table in
the database configured in MyDatabase profile with the following data:

• The timestamp from the EC will be inserted in the timestamp column in the database table.

• The realm name will be inserted in the realm column in the database table.

• The peer information (hostname, port, protocol) will be inserted in the peers column in
the database table.

5.5.6. Group State Event


The Group State event is triggered when a workflow group changes from one state to another.

The following states are available:

1. Idle - This is the state of a workflow group that is valid but where no workflows are being executed.

2. Invalid - A workflow group will change state from Idle to Invalid if the configuration is made in-
valid. Once the configuration is valid again, the Workflow Group will change back to the Idle state.

3. Hold - A workflow group will change state from Idle to Hold if configurations are being imported
with certain options selected. Once the import is finished, the state will change back to Idle again.
If a default import is made, the workflow group will not change into the Hold state.


4. Running - A workflow group will change state from Idle to Running as soon as it is being executed.
If the execution is allowed to finish, the state will change back to Idle.

5. Suppressed - If configurations are being imported with certain options selected while a workflow
group is in Running state, the state will change to Suppressed. For real-time workflows, the state
will change back to Running again once the import is finished. Batch workflows will remain in
Suppressed state until all members have finished execution and will then change to Idle state. If
the workflow group is manually stopped, the state will change to Stopping. If a default import is
made, the workflow group will not change into the Suppressed state.

6. Stopping - If the execution of a workflow group is manually stopped, the state will change from
Running or Suppressed to Stopping.

7. Aborted - If one of the members of the workflow group aborts, the workflow group will change
state from Running or Suppressed to Aborted once all of its members have finished execution.

See Section 4.2.3, “Workflow Group States” for further information about workflow group
states.
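
For readers who process groupState values in their own tooling, the states above can be summarized in a small Java enum. This is a hypothetical sketch, not a product API; the labels are the state names as written in the list above.

public enum GroupState {

    IDLE("Idle"),             // valid group, no workflows executing
    INVALID("Invalid"),       // configuration is invalid; back to Idle when valid again
    HOLD("Hold"),             // import in progress with certain options selected
    RUNNING("Running"),       // the group is being executed
    SUPPRESSED("Suppressed"), // import in progress while the group was Running
    STOPPING("Stopping"),     // execution was manually stopped
    ABORTED("Aborted");       // a member aborted; set once all members have finished

    private final String label;

    GroupState(String label) {
        this.label = label;
    }

    public String label() {
        return label;
    }
}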

5.5.6.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications for all state changes for all workflow groups.
Double-click on a field to open the Match Values dialog, where you can click the Add button to
add the values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard coded string or a regular expression.

The following fields are available for filtering Group State events in the Event Setup tab:

Group State event specific fields

• groupName - This field enables you to select which workflow groups you want Group State event
notifications to be generated for.

Figure 139. Adding Workflow Groups

• groupState - This field determines for which states you want Group State event notifications to
be generated. If the state for one of the matching workflow groups changes into any of the states
added for this field, a group state event notification will be generated.


Figure 140. Adding workflow states

Fields inherited from the Base event

The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:

• category - If you have configured any Event Categories, you can select to only generate notific-
ations for Group State events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.

• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string, e.g. the
state you are interested in (Idle/Running/Stopping/etc.). However, for Group State events, almost
everything in the content is available for filtering by using the other event fields, e.g. groupName,
groupState, etc.

• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if the selected event type is a parent to other event types. However,
since the Group State event is not a parent to any other event, this field will typically not be used
for this event.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field. However, since
the Group State events are only issued from the Platform, this event field should typically not be
used for filtering.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to enter
a regular expression, for example, "2014-06.*" for catching all Group State events from 1st of June,
2014, to 30th of June, 2014.

• severity - With this field you can determine to only generate notifications for state changes with
a certain severity; Information, Warning, Error or Disaster. For example, a state change from Idle
to Running will typically be of severity Information, while a state change to the Aborted state will
typically be of severity Error.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all Group State events from 9:00 to 9:59 on the 15th
of June, 2014.


Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.

5.5.6.2. Examples Group State Event Configuration

Example 24. Group State event notification sent to Database

Figure 141.

This configuration will give you the following notification setup:

• When the workflow group MyGroup, located in the Default folder, changes state to either
Aborted or Stopping, a Group State Event notification will be generated.

• When this Group State Event is generated an entry will be added in the groupStates table in
the database configured in MyDatabaseProfile with the following data:

• The workflow group name will be inserted in the wfg column in the database table.

• The state will be inserted in the state column in the database table.

• The timestamp from the EC will be inserted in the timestamp column in the database table.


Example 25. Group State event notification sent to Mail

Figure 142.

This configuration will give you the following notification setup:

• When the workflow groups MyGroup and MySecondGroup, located in the Default folder,
change state during the time period 01:00 to 01:59 on the 21st of June, 2012, a Group State
Event notification will be generated.

• When this Group State Event is generated a mail will be sent to the [email protected]
e-mail address with the following data:

• The subject will contain the following text: "Event Notification: GroupState".

• The message will contain the following text: "An event of type Group State has occurred
with the following contents: <the content of the event>".

5.5.7. Suppressed Event


Occurs during execution of the command systemimport -holdexecution. The following fields are
included:

• Name - The name of the workflow group that is mentioned in the eventMessage.

• Message - A textual description of the events that take place while systemimport -holdexecution
is executing.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents

• eventName


• origin

• receiveTimeStamp

• severity

• timeStamp

5.5.8. Suspend Execution Event


Can be monitored on the Execution Manager and occurs whenever the execution of a workflow, or of
a workflow group, is either suspended or enabled for execution in the Suspend Execution configuration.
This event type includes the following fields:

• Groups - A comma separated list of names of suspended or enabled workflow groups.

• SuspendExecutionAction - (Boolean) True for enabled, and False for suspended.

• SuspendExecutionConfiguration - (string) The name of the Suspend Execution configuration
whose scheduling settings triggered this event.

• Workflows - A comma separated list of names of suspended or enabled workflows.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

5.5.9. System Event


This section lists some of the circumstances under which the System event is triggered that may be
useful for Event Notification configurations:

• If the Event Server cannot be reached.

Message: Cannot access EventServer.

• If an attempt to resend an event to the EventServer is made.

Message: Retrying to send event to listener <listener> at <host>. Retry count <attempt number>.

• If an event listener is abandoned.

Message: The listener <listener> at <host> is abandoned.

• If a license violation has occurred.


Message: Violation report <message>.

• If a pico instance was disconnected from the Platform by a user.

Message: The pico instance <pico instance> at host <host> was disconnected from platform by
<user>.

• When the Platform thread pool size has been set for workflows and workflow groups.

Message: The platform thread pool size for workflows and groups is set to <thread pool size>.

• If the mz.platform.wf.threadpool property cannot be parsed and the default thread pool size is used.

Message: Failed to parse the <mz.platform.wf.threadpool> property. Using default size <default
thread pool size>.

• If the mz.platform.wf.threadpool property has been set to a value outside of the valid range.

Message: The property <mz.platform.wf.threadpool> is outside the appropriate range, have set size
to <thread pool size>.

• If configuration data is missing for a workflow in a workflow group that is being loaded to the
GroupServer.

Message: Configuration data is missing for workflow <workflow>.

• If the GroupServer cannot register to the EventServer in order for workflow groups to be loaded or
updated.

Message: Group server is unable to register to the event server. Groups will not be loaded or updated.

• If the GroupServer cannot load workflow groups at startup.

Message: Group server was unable to load groups during startup.

• If a workflow group is manually stopped.

Message: Group <workflow group> is being manually stopped by the user.

• If a workflow group is stopping.

Message: Group <workflow group> and all its members are stopping.

• If a workflow group has stopped.

Message: Group <workflow group> have stopped.

• If scheduling cannot be set for a workflow group.

Message: Cannot schedule group <workflow group>.

• If the scheduling cannot be removed for a workflow group.

Message: Cannot remove scheduling for group <workflow group>.

• If a configuration with broken serialization is removed.

Message: <DR Exception>.

• If a workflow configuration cannot be loaded.

Message: Failed to load configuration for <workflow>, changing state to invalid state.


• If an old version of a workflow has been detected running on an EC.

Message: Found one old version of the workflow <workflow> with session id <session id> running
on an ec. The workflow have been shut down..

• If a reconnect attempt to an unreachable workflow has failed and a new attempt is made.

Message: <workflow> is unable to recconect, reconnect is restarted.

• If the connection to an unreachable workflow has been re-established.

Message: Connection to unreachable workflow has been re-established.

• If a workflow that is supposed to be closed and killed is trying to communicate with the Platform.

Message: Warning, a presumed closed and killed workflow <workflow> tried to communicate with
the platform, ignoring the message.

• If an old workflow is detected on an EC when attempting to start a workflow.

Message: Warning an old workflow where found on ec <ec> when the workflow <workflow> where
to be started, the old one have been forced to stop.

• If a workflow cannot be shut down on an EC.

Message: Workflow <workflow> was unable to shut down on ec <ec>.

• If a workflow is being manually stopped by a user.

Message: Workflow <workflow> is being manually stopped by the user.

• If a workflow is stopped.

Message: <stop message>.

• If a workflow aborts and the exception in the abort is not used.

Message: Workflow <abortException>

• If an attempt to retrieve a list of valid workflow configurations fails to retrieve any of the
workflows.

Message: Unable to retrive workflows from the configuration <workflow>. Due to <cause>.

• If a workflow or workflow group cannot be enabled or disabled in a Suspend Execution configuration,
see the Suspend Execution User's Guide for further information.

Message: Unable to send enable/disable signal to the workflows/groups in configuration
<configuration>.

• If configurations are missing in a Suspend Execution configuration, see the Suspend Execution
User's Guide for further information.

Message: Warning, The following members of the suspend execution configuration where not found
and they where not <enabled/disabled>. <configurations>.

• If a Code Manager event occurs, see Section 5.5.3, “Code Manager Event” for further information.

Message: <Code Manager event message>.

• If a Redis HA event occurs, see Section 5.5.27, “Redis HA Event” for further information.


Message: <Redis HA event message>.

5.5.9.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a System event occurs. Double-click
on a field to open the Match Values dialog, where you can click the Add button to add the values
you want to filter on. If there are specific values available, these will appear in a drop-down list.
Alternatively, you can enter a hard coded string or a regular expression.

The following fields are available for filtering System events in the Event Setup tab:

System event specific fields

• systemMessage - This field contains the message included with the System event, as described
in the previous section.

Fields inherited from the Base event

The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:

• category - If you have configured any Event Categories, you can select to only generate notific-
ations for System events with the selected categories. See Section 5.6, “Event Category” for further
information about Event Categories.

• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering, you can enter a part of the contents as a hard coded string. For
System events, the content consists of the text "System message:" followed by the system message
itself; see the descriptions of the system messages above.

• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if the selected event type is a parent to other event types. However,
since the System event is not a parent to any other event, this field will typically not be used for this
event.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to enter
a regular expression, for example, "2014-04.*" for catching all System events from 1st of April, 2014,
to 30th of April, 2014.

• severity - With this field you can determine to only generate notifications for events with
a certain severity; Information, Warning, Error or Disaster. This may be useful to filter on if you
only want to view System events that generate Warnings, for example.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all System events from 9:00 to 9:59 on the 15th of
June, 2014.

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.


5.5.9.2. Examples System Event Configuration

Example 26. System Event sent to Log File

Figure 143.

This configuration will give you the following notification setup:

• When a System event occurs, a System Event notification will be generated.

• When this notification is generated, a new log line will be added in the systemevent.txt file
located in the /home/MyDirectory/systemevent/ directory, with the following data:

• The timestamp for when the System Event was registered.

• The System Event message.


Example 27. System Event sent to Mail

Figure 144.

This configuration will give you the following notification setup:

• When a System Event with a message containing the text "Warning" is registered, a System
Event notification will be generated.

• When this notification is generated, a mail will be sent to the [email protected]
e-mail address with the following data:

• The subject will say: "System Event".

• The message will contain the following text: "The following warning has been detected:
<the system event message>".

5.5.10. System External Reference Event


When configuring notifications with Notification Type Log File, you have the option to use External
References for configuring Directory, Filename and Size, see Section 5.3.1.1.2, “ Log File”.

Whenever a notification of Notification Type Log File that uses external references is generated, the
System External Reference event is triggered.

5.5.10.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a System External Reference event
occurs. Double-click on a field to open the Match Values dialog, where you can click the Add
button to add the values you want to filter on. If there are specific values available, these will appear
in a drop-down list. Alternatively, you can enter a hard coded string or a regular expression.


The following fields are available for filtering System External Reference events in the Event Setup
tab:
Fields inherited from the Base event

The System External Reference event inherits all its fields from the Base event. These fields can be
used for filtering and are described in more detail in Section 5.5.1, “Base Event”:

• category - If you have configured any Event Categories, you can select to only generate notific-
ations for System External Reference events with the selected categories. See Section 5.6, “Event
Category” for further information about Event Categories.

• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string.

• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if you have configured notifications for several different events with
notification type log file, that uses external references, and you only want notifications to be generated
for a specific event type.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use receiveTimeStamp for filtering, it may be a good idea to enter
a regular expression, for example, "2014-06.*" for catching all events from 1st of June, 2014, to 30th
of June, 2014.

• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all events from 9:00 to 9:59 on the 15th of June, 2014.

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.


5.5.10.2. Examples System External Reference Event Configuration

Example 28. System External Reference Event Notification for Group State events

Figure 145.

This configuration will give you the following notification setup:

• When a Group State event occurs, a Group State event AND a System External Reference
event notification will be generated.

• When these notifications are generated, a log entry will be added in a file with a name coming
from an external reference, located in the /home/mydirectory/groupstate/ directory,
containing the contents of the events.


Example 29. System External Reference Event notification with variables

Figure 146.

This configuration will give you the following notification setup:

• When a Group State event occurs, a Group State event AND a System External Reference
event notification will be generated.

• When these notifications are generated, a log entry will be added in a file with a name coming
from an external reference, located in the /home/mydirectory/groupstate/ directory,
containing the following text:

"<timestamp> An event of type <event type> has been generated at <timestamp>."

5.5.11. User Event


This event is dispatched when the user changes something, for instance, updates a configuration. The
following fields are included:

• userName - The name of the user.

• userAction - Action performed by the user.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category


• contents - Username: <username>, Action: <action>, Workflow: <Workflow name>

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

5.5.12. SharedTables Event


When using Shared Tables, there are three different types of operations: Create, Refresh, and Released,
which have different actions. The SharedTables event will be triggered each time one of these actions
occurs.

Create operation

The Create operation has three different actions:

• Create Started - which is triggered when a workflow calls the tableCreateShared function.

• Create Finished - which is triggered when the shared table has been loaded from the database.

• Create Failed - which is triggered if the table fails to be created.

A Create operation will always consist of two actions; either Create Started and Create Finished, or
Create Started and Create Failed.

Refresh operation

The Refresh operation has three different actions:

• Refresh Started - which is triggered when a workflow calls the tableRefreshShared function or when
the Shared Table profile has been configured with a Refresh Interval.

• Refresh Finished - which is triggered when the shared table has been refreshed.

• Refresh Failed - which is triggered if the table fails to be refreshed.

A Refresh operation will always consist of two actions; either Refresh Started and Refresh Finished,
or Refresh Started and Refresh Failed.

Released operation

The Released operation only has one action, i.e. to release the table when no references to the table
have existed for a certain time interval.

See Section 9.7, “Shared Table Profile” for further information about shared tables.

5.5.12.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a SharedTables event is triggered.
Double-click on a field to open the Match Values dialog, where you can click the Add button to
add the values you want to filter on. If there are specific values available, these will appear in a
drop-down list. Alternatively, you can enter a hard coded string or a regular expression.


The following fields are available for filtering SharedTables events in the Event Setup tab:

SharedTables event specific fields

• actionType - With this field you can configure notifications to be sent only for certain actions.
Use regular expressions to filter on this field.

• agentName - This field contains the name of the agent issuing the action. In case a Refresh or
Create action is initiated based on the Refresh Interval setting in the Shared Tables Profile, this
field will be empty. You can use this field to specify notifications to be generated only for certain
agents. Use regular expressions to filter on this field.

• duration - The duration is the amount of time in milliseconds it takes to perform a Create or
Refresh operation, and this field is included in the Create Finished and Refresh Finished actions. If
you select to filter on this field, you can specify to only generate notifications for a certain duration.
This will also mean that notifications will only be generated for Create Finished and Refresh Finished
actions. Use regular expressions to filter on this field.

• errorMessage - In case a Create Failed, or a Refresh Failed action is triggered, this field will
contain an error message. If you select to filter on this field, you can specify to only generate noti-
fications for certain error messages, or just select to have notifications generated for actions containing
error messages. This will also mean that notifications will only be generated for Create Failed and
Refresh Failed actions. Use regular expressions to filter on this field.

• workflowName - This field contains the name of the workflow issuing the action. In case a Refresh
or Create action is initiated based on the Refresh Interval setting in the Shared Tables profile, this
field will be empty. You can use this field to specify notifications to be generated only for certain
workflows. Use regular expressions to filter on this field.

• workflowVersion - This field contains the version number of the workflow issuing the action.
In case a Refresh or Create action is initiated based on the Refresh Interval setting in the Shared
Tables profile, this field will be "0". Use regular expressions to filter on this field.

• rowCount - This indicates the number of rows that were created or refreshed in the database. Use
regular expressions to filter on this field.

• ShareTablesProfileName - This field contains the name of the Shared Tables profile issuing
the action. You can use this field to specify notifications to be generated only for workflows using
a certain SharedTablesProfile. Use regular expressions to filter on this field.

Fields inherited from the Base event

The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:

• category - If you have configured any Event Categories, you can select to only generate notific-
ations for SharedTables events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.

• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string.

• eventName - This field can be used for specifying which event types you want to generate
notifications for. This may be useful if the selected event type is a parent to other event types; since
the SharedTables event is not a parent to any other event, this field will typically not be used for
filtering. However, if you have several different event types configured for generating notifications
in the same event notification configuration, it may be useful to include this field in the notification
itself to differentiate between the event types.


• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use timeStamp for filtering, it may be a good idea to enter a
regular expression, for example, "2014-06.*" for catching all SharedTables events from 1st of June,
2014, to 30th of June, 2014.

• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster. For the SharedTables event, the actions
Create Started, Create Finished, Refresh Started, Refresh Finished, and Released have severity In-
formation, and the actions Create Failed and Refresh Failed have the severity Error.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all SharedTables events from 9:00 to 9:59 on the 15th
of June, 2014.

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.


5.5.12.2. Examples SharedTables event configuration

Example 30. SharedTables event notification sent to log

Figure 147.

This configuration will give you the following notification setup:

• When a Create Started, Create Finished, or Create Failed action occurs, a SharedTables event
notification will be generated

• When this notification is generated, an entry will be logged in the sharedtables.txt file located
in the /home/user/sharedtables folder, containing the following data:

• The timestamp for when the event was triggered.

• The contents of the event.


Example 31. SharedTables event notification sent as mail

Figure 148.

This configuration will give you the following notification setup:

• When a SharedTables action is issued by any workflow using a Shared Tables Profile with a
name containing "SharedTableProfile", a SharedTables event notification will be generated

• When this notification is generated, a mail will be sent to the mail address mymail@my-
company.com, containing the following data:

• A subject line saying: Shared Table event

• A message saying: A SharedTables event for a <action type> action has been issued by a
workflow using the <name of the Shared Table Profile>, at <the timestamp for when the
event was triggered> with the following content:.

• The content of the event will also be included in the message.

5.5.13. Workflow Event


This is the parent for all workflow events. The following fields are included:

• workflowKey - The name of the internal MediationZone® workflow key.

• workflowName - A list indicating what workflow(s) information to select.

• workflowGroupName - The name of the workflow group.

Fields inherited from the Base event


The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents - Workflow: <Workflow name>

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

5.5.14. Agent Event


The following fields are included:

• agentName - The name of the agent issuing the event.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents - Workflow: <Workflow name>, Agent: <Agent name>

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

Fields inherited from the Workflow event

The following fields are inherited from the workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey

• workflowName

• workflowGroupName

5.5.15. Agent Failure Event


An agent failure event message is not a failure literally speaking, but is reported when an agent acts
upon a predefined Error Case. For instance, when duplicates are found by the Duplicate UDR Detection
agent. Not all agents can issue this sort of event. For further information, see the relevant agent
user's guide. The following fields are included:

• agentErrorMessage - Error message issued by the agent.


• agentName - The name of the agent.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents - Workflow: <Workflow name>, Agent: <Agent name>, Message: <errorMsg>

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

Fields inherited from the Workflow event

The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey

• workflowName

• workflowGroupName

5.5.16. Agent Message Event


An agent message is of informative type. An example of such a message is when a database collection
agent starts collecting data, or when a Disk forwarding agent is finished with a batch.

Not all agents can issue this sort of event. For further information, see the relevant agent user's guide.

The following fields are included:

• agentMessage - Message issued by the agent.

• agentName - The name of the agent.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents - Workflow: <Workflow name>, Agent: <Agent name>, Message: <message>

• eventName

• origin

• receiveTimeStamp


• severity

• timeStamp

Fields inherited from the Workflow event

The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey

• workflowName

• workflowGroupName

5.5.17. User Agent Message Event


This event holds a value only if the APL function dispatchMessage has been used. The following
fields are included:

• agentMessage - Message issued by the agent.

• agentName - The name of the agent issuing the event.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents - Workflow: <Workflow name> Agent name: <Agent name>, Message: <message>

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

Fields inherited from the Workflow event

The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey

• workflowName

• workflowGroupName

5.5.18. Agent State Event


This event is dispatched when an agent state changes. The following fields are included:

• agentName - The name of the agent issuing the event.


• agentState - The state of the agent. The following are available: Aborted, Active, Created, Idle,
Stopped.

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents - Workflow: <Workflow name>, Agent: <Agent name>, State: <state>

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey

• workflowName

• workflowGroupName

5.5.19. Diameter Peer State Changed Event


When you run a Diameter workflow, the peer connections of the Diameter_Stack agents are monitored
through the standard Diameter watchdog as described in RFC 6733 and RFC 3539. Possible states of
the connection are: OKAY, SUSPECT, DOWN, REOPEN, INITIAL. The Diameter peer state changed
event is triggered whenever there is a change of peer state.

5.5.19.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a Diameter Peer State Changed event is
generated. Double-click on the field to open the Match Values dialog where you can click on the Add
button to add which values you want to filter on. If there are specific values available, these will appear
in a drop-down list. Alternatively, you can enter a hard coded string or a regular expression.

Diameter Peer State Changed specific fields

The following fields are available for filtering Diameter Peer State Changed events in the Event Setup
tab:

• newState - The new state of the connection.

• peerName - The name of the Diameter peer for which the connection state has changed.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:


• category - If you have configured any Event Categories, you can select to only generate notific-
ations for Diameter peer state changed events with the selected categories. See Section 5.6, “Event
Category” for further information about Event Categories.

• contents - This field contains a string with event specific information. If you want to use this
field for filtering you can enter a part of the contents as a hard coded string.

• eventName - This field can be used to specify which event types you want to generate notifications
for. This may be useful if the selected event type is a parent to other event types. However, since
the Diameter Peer State Changed event is not a parent to any other event, this field will typically
not be used for this event.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-06.*" for catching all Diameter Peer State Changed events
from 1st of June, 2014, to 30th of June, 2014.

• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all Diameter Peer State Changed events from 9:00 to
9:59 on the 15th of June, 2014.

Fields inherited from the Workflow event

The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey - The name of the internal workflow key.

• workflowName - A list indicating what workflow(s) information to select.

• workflowGroupName - The name of the workflow group.

Fields inherited from the Agent event

The following fields are inherited from the Agent event, and described in more detail in Section 5.5.14,
“Agent Event”:

• agentName

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.


5.5.19.2. Examples Diameter Peer State Changed Event Configuration

Example 32. Diameter Peer State Changed event notification saved in a Log File

Figure 149.

This configuration will give you the following notification setup:

• When the state of a Diameter peer is changed, a notification will be generated.

• When this notification is generated, a new line with information will be logged in the diamet-
er_peer_state.txt file located in the /home/user/diameter folder, containing the following
data:

• The timestamp for when the event was triggered.

• The workflow in which the event was issued.

• The agent that issued the event.

• The new state of the peer connection.

• The name of the peer for which the connection state has changed.


Example 33. Diameter Peer State Changed event notification saved in a database

Figure 150.

This configuration will give you the following notification setup:

• When the state of a Diameter peer is changed, a notification will be generated.

• When this notification is generated, an entry will be added in the diameterdynamic table in
the database configured in MyDatabase profile with the following data:

• The timestamp from the EC will be inserted in the timestamp column in the database table.

• The peer name will be inserted in the peer column in the database table.

• The new state will be inserted in the state column in the database table.

5.5.20. ECS Insert Event


The ECS Insert event is triggered when data is inserted into ECS, i e:

• When cancelBatch is called from a batch workflow (see the APL sketch below)

• When UDRs are sent to ECS via an ECS Forwarding agent.

See Section 16.1, “Error Correction System” for further information about how data is inserted into
ECS.
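
As an illustration, a minimal APL sketch of the batch case is shown below. The cancel message text is
hypothetical, and the exact cancelBatch signature should be verified against the APL Reference Guide.

consume {
    // Cancel the current batch and send it to ECS; the message text is
    // what the ecsMessage field of the resulting ECS Insert event refers to.
    // "Checksum mismatch in input file" is only an example message.
    cancelBatch( "Checksum mismatch in input file" );
}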

5.5.20.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time data is inserted into ECS. Double click on
the field to open the Match Values dialog where you can click on the Add button to add which values


you want to filter on. If there are specific values available, these will appear in a drop-down list. Al-
ternatively, you can enter a hard coded string or a regular expression.

The following fields are available for filtering ECS Insert events in the Event Setup tab:

ECS Insert event specific fields

• ecsMessage - With this field you can configure notifications to be sent only for certain messages
associated with the cancelBatch function. If UDRs are inserted, the message will be "None".
Use regular expressions to filter on this field.

• ecsMIM - This field enables you to create a regular expression based filter for specific MIM values,
i e notifications will only be generated for data containing the specified MIMs.

• ecsSourceNodeName - This field enables you to configure notifications to be sent only for insertions
made from specified agents. For batches, this will be the agent issuing the cancelBatch, while
for UDRs this will be the ECS Forwarding agent. Use regular expressions to filter on this field.

• ecsType - For this field you can select if you want notifications to be generated for only batches,
only UDRs or both, i e All.

• ecsUDRCount - This field enables you to configure notifications to be sent only for batches con-
taining a certain amount of UDRs. Use regular expressions to filter on this field.

• agentName - This field enables you to configure notifications to be sent only for events issued from
specified agents. Use regular expressions to filter on this field.

Fields inherited from the Agent event


• agentName - The name of the agent issuing the event.
Fields inherited from the Base Event

The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:

• category - If you have configured any Event Categories, you can select to only generate notific-
ations for ECS Insert events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.

• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string. However,
for ECS Insert events, everything in the content is available for filtering by using the other event
fields, i e eventName, ecsType, ecsMessage, etc.

• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if the selected event type is a parent to other event types. However,
since the ECS Insert event is not a parent to any other event, this field will typically not be used
for this event.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-06.*" for catching all ECS Insert events from 1st of June,
2014, to 30th of June, 2014.


• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster. However, since ECS Insert events only
have severity Information, this field may not be very useful for filtering.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all ECS Insert events from 9:00 to 9:59 on the 15th
of June, 2014.

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.

5.5.20.2. Examples ECS Insert Event Configuration

Example 34. ECS Insert Event notification sent as mail

Figure 151.

This configuration will give you the following notification setup:

• When a batch is inserted in ECS, an ECS Insert event notification will be generated

• When this notification is generated, a mail will be sent to the mail address my.mail@my-
company.com, containing the following data:

• A subject line saying: ECS Insert event - Batch

• A message saying: A batch has been inserted into ECS: <the ECS message>.


Example 35. ECS Insert Event notification saved in a database

Figure 152.

This configuration will give you the following notification setup:

• When an ECS Insert event occurs, a notification will be generated.

• When this notification is generated, an entry will be added in the ecsInsert table in the database
configured in MyDatabase profile with the following data:

• The ECS Type, i e Batch or UDR, will be inserted in the tp column in the database table.

• The timestamp from the EC will be inserted in the time column in the database table.

5.5.21. ECS Statistics Event


The ECS Statistics event is triggered when the ECS_Maintenance system task is executed, see Sec-
tion 16.1.4, “ECS_Maintenance System Task” for further information about this system task.

5.5.21.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time the ECS_Maintenance system task is ex-
ecuted. Double click on the field to open the Match Values dialog where you can click on the Add
button to add which values you want to filter on. If there are specific values available, these will appear
in a drop-down list. Alternatively, you can enter a hard coded string or a regular expression.

The following fields are available for filtering ECS Statistics events in the Event Setup tab:

ECS Statistics event specific fields

• errorCodeCountNewUDRs - This field enables you to create a regular expression based filter for
UDRs in state New in order to only generate notifications for UDRs passing the filter. This may be
useful for specifying that notifications should only be generated for certain error codes and/or when
a certain number of UDRs have been registered, for example.

Figure 153. Specifying error codes

• errorCodeCountReprocessedUDRs - This field enables you to create a regular expression
based filter for UDRs in state Reprocessed in order to only generate notifications for UDRs passing
the filter. This may be useful for specifying that notifications should only be generated for certain
error codes and/or when a certain number of UDRs have been registered, for example.

Fields inherited from the Base Event

The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:

• category - If you have configured any Event Categories, you can select to only generate notific-
ations for ECS Statistics events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.

• contents - The contents field contains a hard coded string with event specific information. If you
want to use this field for filtering you can enter a part of the contents as a hard coded string. However,
for ECS Statistics events, everything in the content is available for filtering by using the other event
fields, i e eventName, errorCodeCountForNewUDRs, etc.

• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if the selected event type is a parent to other event types. However,
since the ECS Statistics event is not a parent to any other event, this field will typically not be used
for this event.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-06.*" for catching all ECS Statistics events from 1st of June,
2014, to 30th of June, 2014.

• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster. However, since ECS Statistics events only
have severity Information, this field may not be very useful for filtering.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all ECS Statistics events from 9:00 to 9:59 on the 15th
of June, 2014.

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.


5.5.21.2. Examples ECS Statistics Event Configuration

Example 36. ECS Statistics Event notification saved in Log File

Figure 154.

This configuration will give you the following notification setup:

• When the ECS_Maintenance system task is run, a notification will be generated if any UDRs
in New state with error code myErrorCode are detected, or if any UDRs in Reprocessed state are detected.

• When this notification is generated, a new line with information will be logged in the error-
codenew.txt file located in the /home/myDirectory/ecs folder, containing the following data:

• The timestamp for when the event was triggered.

• The event name, i e ECS Statistics Event.

• The number of UDRs in state New with error code myErrorCode.

• The number of UDRs registered for each error code for UDRs in Reprocessed state.

• The state of the UDRs.


Example 37. ECS Statistics Event notification sent as mail

Figure 155.

This configuration will give you the following notification setup:

• When the ECS_Maintenance system task is run, a notification will be generated if more than
100 UDRs in New state with error code CriticalError are detected, or if any UDRs in
Reprocessed state are detected.

• When this notification is generated, a mail will be sent to the mail address my.mail@my-
company.com, containing the following data:

• A subject line saying: ECS Alert - ECS Statistics Event

• A message saying: At <the timestamp for when the event was generated> more than 100
UDRs with error code CriticalError were detected.

• The entire content of the notification will also be included in the message.

5.5.22. Debug Event


Dispatched when debug is used. The event is of workflow type and therefore includes the following
fields:

• agentMessage - Message issued by the agent.

• agentName - The name of the agent issuing the event.

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:


• category

• contents

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey

• workflowName

• workflowGroupName

5.5.23. Dynamic Update Event


Occurs whenever you dynamically update a workflow configuration. The following fields are included:

• systemMessage - The message string.

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

Fields inherited from the Workflow event

The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey

• workflowName

• workflowGroupName

5.5.24. Workflow State Event


This event is dispatched when the workflow state changes. The following fields are defined:

• abortReason - Describes the abort reason for a state event of type aborted.


• executionTime - A workflow execution time.

Note: The match value precision is in milliseconds.

• workflowState - The new state that the workflow is in now. Valid options are: Aborted, Executed,
Hold, Idle, Invalid, Loading, Running, Unreachable, Waiting.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents - Workflow: <Workflow name>, State: <state>, Execution time: <time in milliseconds>, Abort reason: <error msg>

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey

• workflowName

• workflowGroupName

5.5.25. Workflow External Reference Event


This workflow event occurs as soon as a workflow that applies an External Reference is executed. Since
the event is triggered by each and every such workflow, if two workflows that apply External References
are members of a workflow group, two events are generated when the workflow group is executed.

This event inherits all its fields from the Base and Workflow events.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp


Fields inherited from the Workflow event

The following fields are inherited from the Workflow event, and described in more detail in Sec-
tion 5.5.13, “Workflow Event”:

• workflowKey

• workflowName

• workflowGroupName

For further information see Section 9.5, “External Reference Profile”.

5.5.26. Supervision Event


When using the Supervision Service for real-time workflows, you can configure Supervision events
to be generated when certain combinations of conditions are met. These conditions are based on the
current values for specified MIM parameters.

For example, you can configure a Supervision event to be generated when the throughput goes above
a certain value, or when the heap size goes above a certain level, etc.

See Section 4.1.8.5.2, “Supervision Service” for further information about configuration of Supervision
events.

5.5.26.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a Supervision event is generated. Double
click on the field to open the Match Values dialog where you can click on the Add button to add
which values you want to filter on. If there are specific values available, these will appear in a drop-
down list. Alternatively, you can enter a hard coded string or a regular expression.

The following fields are available for filtering Supervision events in the Event Setup tab:

Supervision event specific fields

• action - With this field you can configure notifications to be sent only for certain actions. Actions
are configured in Action Lists for the Decision Tables you have created for the Supervision Service
in the Workflow Properties. Use regular expressions to filter on this field.

• cause - With this field you can specify to generate notifications only for events with certain de-
scriptions. The descriptions are added when configuring your actions for the Supervision Service.
See Section 4.1.8.5.2, “Supervision Service” for further information. Use regular expressions to
filter on this field.

• value - This field enables you to configure notifications to be sent only for events with a certain
content. The content is added when you configure your actions for the Supervision Service. See
Section 4.1.8.5.2, “Supervision Service” for further information. Use regular expressions to filter
on this field.

Fields inherited from the Base event

The following fields are inherited from the Base event, and can also be used for filtering, described in
more detail in Section 5.5.1, “Base Event”:

• category - If you have configured any Event Categories, you can select to only generate notific-
ations for Supervision events with the selected categories. See Section 5.6, “Event Category” for
further information about Event Categories.


• contents - This field contains the action type configured in the Supervision Service, i e Supervision
Event, and the cause, i e the name of the action, as well as the value.

• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if the selected event type is a parent to other event types. However,
since the Supervision event is not a parent to any other event, this field will typically not be used
for this event.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-06.*" for catching all Supervision events from 1st of June,
2014, to 30th of June, 2014.

• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster. However, since Supervision events only
have severity Information, this field may not be very useful for filtering.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-06-15 09:.*" for catching all Supervision events from 9:00 to 9:59 on the 15th
of June, 2014.

Fields inherited from the workflow event

The following fields are inherited from the workflow event, and can also be used for filtering, described
in more detail in Section 5.5.13, “Workflow Event”:

• workflowGroupName - This field can be used for configuring Supervision event notifications
to be generated only for specific workflow groups. Simply select the workflow groups you want to
generate Supervision events for in the drop-down list, or enter a regular expression.

• workflowKey - This field can be used for configuring Supervision event notifications to be gen-
erated only for specific workflow keys. You can browse for the workflow keys you want to add, or
enter a regular expression.

• workflowName - This field can be used for configuring Supervision event notifications to be generated
only for specific workflow names. You can browse for the workflow names you want to add, or
enter a regular expression.

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.


5.5.26.2. Examples Supervision Event Configuration

Example 38. Supervision event notification saved in a Log File

Figure 156.

This configuration will give you the following notification setup:

• When a set of conditions that has an event action associated in one of the decision tables
configured for the Supervision Service is met, a notification will be generated.

• When this notification is generated, a new line with information will be logged in the super-
vision.txt file located in the /home/user/supervision folder, containing the following data:

• The timestamp for when the event was triggered.

• The action type, i e Supervision Event.

• The action description.

• The action content.


Example 39. Supervision event notification sent as mail

Figure 157.

This configuration will give you the following notification setup:

• When a set of conditions that has an event action with a description containing "High" associated
in one of the decision tables configured for the Supervision Service is met, a notification
will be generated.

• When this notification is generated, an e-mail will be sent to the configured mail address,
containing the following data:

• A subject line saying: Supervision High Level Alert

• A message saying: At <the timestamp for when the event was generated> a High Level
Supervision event was generated with the following contents:.

• The entire content of the notification will also be included in the message.

5.5.27. Redis HA Event


The System Log logs several different Redis HA events. With the Event Notification configuration
you select which of these you are interested in and whether to save them to a log file, a database, etc.

The Redis HA event includes the following fields:

• eventMessage - Can be used for matching any text within the event message.

• eventType - This field determines which of the event types logged in the System Log you are
interested in.

• redisProfileIdentity - Can be used for matching a specific Redis profile identity.


• redisProfileName - Can be used for matching a specific Redis profile name.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:

• category

• contents

• eventName

• origin

• receiveTimeStamp

• severity

• timeStamp

For further information about how to configure the Redis HA events, see the System Administration
Guide.

5.5.28. Space Action Event


When using Configuration Spaces, the Space Action Event is always generated. By default, Medi-
ationZone® logs some preset actions. The events for these actions are notified by default to the System
Log. For further information on these preset actions, see the Configuration Spaces document.

You can also configure space action events to be notified when a space has been created, copied or
removed using Event Notification.

5.5.28.1. Filtering
In the Event Setup tab, the values for all the event fields are set by default to All in the Match Value(s)
column, which will generate event notifications every time a Space Action event is generated. Double click
on the field to open the Match Values dialog where you can click on the Add button to add which
values you want to filter on. If there are specific values available, these will appear in a drop-down
list. Alternatively, you can enter a hard coded string or a regular expression.

The following fields are available to filter Space Action events in the Event Setup tab:

Space Action event specific fields

• actionType - With this field you can configure notifications to be sent for certain space actions.
You select the action type from the drop-down list: spacecreate done, spacecopy done
and spaceremove done.

• destinationSpaceName - The name of the destination space to which the content of the source
space is copied when a spacecopy command has been executed. To choose a specific space or
spaces, select the space from the drop-down box which includes all of your spaces.

• spaceName - The name of the space for which an action has occurred. To choose a specific space
or spaces, select the space from the drop-down box which includes all of your spaces. If the action
is a spacecopy, it is the source space name.

Fields inherited from the Base event

The following fields are inherited from the Base event, and described in more detail in Section 5.5.1,
“Base Event”:


• category - If you have configured any Space Action categories, you can select to only generate
notifications for Space Action events with the selected categories.

• contents - This field contains a string with event specific information. If you want to use this
field for filtering you can enter a part of the contents as a hard coded string.

• eventName - This field can be used for specifying which event types you want to generate notific-
ations for. This may be useful if the selected event type is a parent to other event types.

• origin - If you only want to generate notifications for events that are issued from certain Execution
Contexts, you can specify the IP addresses of these Execution Contexts in this field.

• receiveTimeStamp - This field contains the date and time for when the event was inserted into
the Platform database. If you want to use this field for filtering, it may be a good idea to enter a
regular expression, for example, "2014-11.*" for catching all Space Action events from 1st of
November, 2014, to 30th of November, 2014.

• severity - With this field you can determine to only generate notifications for events with a
certain severity; Information, Warning, Error or Disaster.

• timeStamp - This field contains the date and time for when the Execution Context generated the
event. If you want to use timeStamp for filtering, it may be a good idea to enter a regular expression,
for example, "2014-11-13 09:.*" for catching all the Space Action events from 9:00 to 9:59 on the
13th of November, 2014.

Note! The values on these fields may also be included in the notifications according to your
configurations in the Notifier Setup tab.


5.5.28.2. Example Space Action Event Configuration

Example 40. Space Action event notification saved in a database

Figure 158.

This configuration will give you the following notification setup:

• When a space action is performed on any of your spaces, a notification will be generated.

• When a space action event is generated, an entry is added to the spaceactions table in the
configured database, containing the following data:

• The action performed, i e spacecopy, spacecreate, or spaceremove.

• The name of the space for which the action has occurred

5.5.29. <User Defined> Event


A user defined event is a basic event, extended with any variables entered by the user. It is configured
in the Ultra Format Editor:

event myEvent {
    ascii addedField1;
    int addedField2;
};

A user defined event is of workflow type and therefore includes workflow-specific fields.

The basic fields are automatically included in myEvent, along with the typed-in fields. The fields
are populated via an agent that utilizes APL. Example code:

myEvent x = udrCreate( myEvent );
x.addedField1 = aUDR.anum;
x.addedField2 = aUDR.code;
dispatchEvent( x );

The fields of a User Defined Event:

• agentName - The name of the agent issuing the event.

• category - A user defined category, as entered in the Event Categories dialog. If utilized, this
field is set manually in the APL code.

• eventName - the name of the Event, as defined in Ultra.

• origin - the IP address of the Execution Context the workflow issuing the event is running on.

• receiveTimeStamp - The date and time for when an event is inserted in the platform database.
This is the time used in, for example, the System Log.

• severity - The severity of the event. May be any of: Information, Warning, Error, or Disaster.
The default value is Information. If another severity is required, this field must be set manually in
APL to one of the strings: "Information", "Warning", "Error", "Disaster" (see the sketch after this list).

• timeStamp - The date and time taken from the host where the event is issued.

• workflowKey - The name of the internal MediationZone® workflow key.

• workflowName - The name of the workflow issuing the event.

• <any> - Any information, as stated in the format configuration.

• The contents field - Workflow name: <Workflow name>, Agent name: <Agent
name>
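
As a sketch only, the optional category and severity fields can be set in the same APL code before the
event is dispatched; the category string and severity value used here are purely illustrative:

myEvent x = udrCreate( myEvent );
x.category = "MyCategory";   // must match a category defined in the Event Categories dialog
x.severity = "Warning";      // one of "Information", "Warning", "Error", "Disaster"
dispatchEvent( x );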

5.6. Event Category


An Event Category is defined from the Edit menu in the Event Notification Configuration. The Event
Category is used to send any kind of information to a Column, as opposed to direct mapping of a MIM
resource. The Event Category name is a string which is used as an argument in the dispatchMessage
APL function to route messages to a Column.

Figure 159. The Event Category window, Where a User Defined String is Specified to be Used
as Name for the Category Field in an Event.

When the Event Category is defined it is mapped against a Match Value in the Event setup tab. Then
the defined Event Category is used as a parameter in the APL code with the dispatchMessage
function.
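
As an illustration only, such a call could look as follows. The two-argument form shown here, with the
Event Category name first and the message text second, is an assumption; check the APL Reference Guide
for the exact dispatchMessage signature.

// "MyCategory" is assumed to be a category defined in the Event Categories dialog.
dispatchMessage( "MyCategory", "Input volume threshold exceeded" );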


5.7. Enabling External Referencing


External referencing is currently only supported for the Notification Type Log File. For further information,
see Section 9.5.3, “Enabling External References in an Agent Profile Field”.


6. Inspection
When workflows are executed, the agents may generate various kinds of data, such as logging errors
into the System Log, or sending erroneous data to the Error Correction System (ECS). The Inspectors
allow the user to view such information.

This section includes an overview of the MediationZone® Inspectors.

To open a MediationZone® Inspector, click the Inspection button.

6.1. Menus and Buttons


The contents of the Desktop menus and button panel may change depending on which Inspector
has been opened in the currently displayed tab. The buttons and menus will be described in the
following sections or in the respective user's guide.

The Desktop standard menus and buttons are described in Section 2.3.2.1, “Desktop Standard Menus”
and Section 2.3.2.2, “Desktop Standard Buttons”.

When workflows are executed, the agents may generate various kinds of data. The following Medi-
ationZone® Inspectors are available to analyze the data:

6.2. Aggregation Session Inspector


See Section 11.1.4, “Aggregation Session Inspection”.

6.3. Alarm Inspector


See the Web Interface user guide.

6.4. Archive Inspector


See Section 12.1.4, “Archive Inspector”.

6.5. Duplicate Batch Inspector


See Section 11.6.3, “Duplicate Batch Inspector”.

6.6. Duplicate UDR Inspector


See Section 11.7.3, “Duplicate UDR Inspector”.


6.7. ECS Inspection


The ECS Inspector allows the user to inspect and maintain UDRs and batches located in ECS, view
and edit their content, and add them to reprocessing (RP) groups. The latter is a prerequisite for the
data to be collected by an ECS Collection agent. You may also restrict certain fields from being edited
in ECS.

Apart from simply sending a UDR or batch to ECS, a workflow can be configured to associate user-
defined information with the ECS data. For the UDRs this is Error Code and MIM information. For
cancelled batches, the Error UDR and Cancel Message may contain user defined information.

Note! Only a reference, not the data itself, is saved in the database. Physically, ECS data is
saved in the directory defined by the property mz.ecs.path found in
$MZ_HOME/etc/platform.xml. The default path is $MZ_HOME/ecs.

If the mz.ecs.path parameter is changed, the changes will take effect the next time data is
inserted into ECS. Existing ECS data is left at its current location and must not be moved. If
required to do so anyway, move the content of the old mz.ecs.path directory to the new,
and create a soft link in the old directory pointing to the new location.
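
For example, a platform.xml entry could look as follows; the directory used here is only an illustration:

<property name="mz.ecs.path" value="/data01/mz/ecs"/>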

Note! Take special precaution when changing, updating or renaming formats. If the updated
format does not contain all the fields of the historical format, in which UDRs may already reside
in the ECS or Aggregation, data will be lost. When a format is renamed, it will still contain all
the fields. The data, however, cannot be collected.

6.7.1. ECS Inspector


To open the ECS Inspector, click the Tools button in the upper left part of the MediationZone® Desktop
window, and then select ECS Inspector from the menu.

Initially, the window is empty and must be populated with data using the Search ECS dialog. Depending
on whether the search is done for UDRs or batches, the columns in the table will differ. For further information,
see Section 6.7.5, “Searching the ECS”.

6.7.1.1. Maximum Number of Displayed UDR Entries


To ensure that the GUI does not have to handle too much data, the maximum number of displayed
UDR entries has been set to 100,000. However, in the platform.xml file, you can
change the maximum number of displayed UDR entries in the ECS Inspector by using the property:

<property name="mz.ecs.max.udr.search" value="10000"/>

If you want to get quicker and smaller search results, for example, you can set this property to a lower
value.

Note! If the value of the property is set higher than the default value, it may result in poor per-
formance. Also, note that this property only has effect when UDRs are inspected and not when
batches are inspected.


6.7.1.2. Maximum Number of UDR Entries to Update


To ensure that the MZ Platform, using standard 256 Mb heap size, does not run out of memory, the
maximum number of UDR entries to update/delete is limited to 15,000,000 UDR entries. However,
this limit is easily increased using a property in the platform.xml file:

<property name="mz.ecs.max.udr.update" value="25000000"/>

Note! When increasing the above default limit, one might have to increase MZ Platform max-
imum heap size as well. This is done by changing the -Xmx parameter in the platform.xml
file. As a general rule, ECS requires another 10 MB of Platform heap space for every additional one
million UDRs.

6.7.1.3. Menus
6.7.1.3.1. General

The following menu items apply for both Batches and UDRs.

File menu

Close Closes the Error Correction System Inspector

Edit menu

Delete Removes selected or (if no entries are selected) all matching entries, provided
that the RP State is set to Reprocessed. The ECS does not have to be purged
manually; there is a predefined cleanup task, ECS_Maintenance, for automatic
purging. For further information, see Section 16.1.4, “ECS_Maintenance System
Task”.

Note! If the maximum number of entries that can be displayed in the table
has been exceeded, the delete operation will still be applied for all
matching entries.

Search... Displays the Search Error Correction System dialog where search criteria
may be defined to limit the entries in the list. For further information, see Sec-
tion 6.7.5, “Searching the ECS”.

Matching entries are bundled in groups of 500 in the table. This list shows which
group, out of how many, is currently displayed. An operation targeting all
matching entries will have effect on all groups.
Select All Selects all entries in the ECS Inspector.
Error Codes... Displays the ECS Error Code dialog where the Error Codes in the system may
be configured. For further information, see Section 6.7.7, “Error Codes”.
Reprocessing Displays the ECS Reprocessing Groups dialog, where reprocessing groups are
Groups... managed. For further information, see Section 6.7.8, “Reprocessing Groups”.
Searchable Fields... Opens the Searchable Fields dialog, where you can define specific UDR fields
that you want to add as meta data that can later be used for making searches,
see Section 6.7.2, “Configuring Searchable Fields in the ECS”.
Restricted Fields... Opens the Restricted Fields dialog, where you can specify certain fields within
certain UDR types that should be restricted from being updated in the ECS In-
spector, see Section 6.7.4, “Configuring Restricted Fields in the ECS”.


View menu

Refresh Select this option to refresh the data in the table.

Batch/UDR menu

Set State... Defines the state of a selected number of entries or (if no entries are selected) all entries.
Possible states are either New or Reprocessed (that is, collected by an ECS Collection
agent and reprocessed with errors). Already processed data can be reset to New to enable
recollection. When the state is changed, the timestamp in the Last RP State Change
column in the ECS Inspector will be updated. See Section 6.7.9, “Changing State” for
more information.

Note! If the number of matches is larger than the maximum number of UDRs to
be displayed, the state change will still be applied for all matching entries.

Assign to Assigns a selected number of entries or (if no entries are selected) all entries to a repro-
RPG... cessing group. Grouped entries can be collected simultaneously by an ECS collection
agent.

Figure 160. Message when collecting grouped entries

Note! Assignments to reprocessing groups can be made automatically for each
new entry. In order for this to happen, the data must be assigned an Error Code
in the workflow, and this Error Code must be mapped to the reprocessing group
in the ECS Error Code dialog.

Delete Removes selected or (if no entries are selected) all matched entries, provided that the
RP State is set to Reprocessed. The ECS does not have to be purged manually; there
is a predefined cleanup task, ECS_Maintenance, for automatic purging. For further
information, see Section 16.1.4, “ECS_Maintenance System Task”.

Note! If the maximum number of entries that can be displayed in the table has
been exceeded, the delete operation will still be applied for all matching entries.

6.7.1.3.2. UDR specific menu options

UDR menu

Explore UDR... Displays the UDR Editor presenting the content of the selected UDR(s). The editor will also
be displayed if you double-click on a cell in the UDR Type column in the table of entries. In
the editor, the content of the UDR can be changed, except for the field Original Data.


Figure 161. The UDR Editor

For further information, see the MediationZone® Ultra Format Management User's Guide.
Set Tag on UDR(s)... With this option you can set a tag on selected UDRs. When you have selected the UDR(s)
you want to tag and then selected this option, a dialog will open asking you to enter the Tag Name.

Figure 162. Entering Tag Name

The Tag Name will then be visible in the Tag column in the ECS Inspector.
Clear Tag(s) on UDR(s)... If you select this option, the tags for the selected UDRs will be removed.
Bulk Edit... Several UDRs (selected or matched) may be edited simultaneously with the Bulk Editor.
The editor displayed from ECS differs slightly from the editor displayed from the UDR Editor
window, opened for example from the Explore UDR... dialog. The ECS Bulk Editor has a
Preview option, which makes it possible to preview the changes prior to approving and saving.

When changes have been made you can select to view only the modified entries, only the
untouched entries, or all entries in the ECS Inspector.


Figure 163. The ECS Bulk Editor

Note! If the number of matches exceeds the maximum number of entries that can be
displayed, it may be a good idea to set up a workflow for editing the entries using APL
instead of performing a bulk edit.

When clicking on the Apply Changes button, the Bulk Edit Result dialog will open, displaying
modified and untouched entries.


Figure 164. The Bulk Edit Result dialog

Now you can select if you want to view only modified entries, untouched entries, or all entries
in the ECS Inspector by using any of the options Entire Result Set, Modified Only, or
Untouched Only. Click on the View button when you have made your selection.

The selected type of entries will then be displayed in the ECS Inspector. In the top right corner,
above the ECS Inspector table, you will see information about what selection you have made,
e g "Modified Entries from Bulk Edit".

Hint! If you want to view the changes before applying them, you can click on the
Preview button instead. The Bulk Edit Result - Preview dialog will then open, giving
you a preview of the changes that are about to be made. If you are satisfied with the
preview, click on the Apply button, and the Bulk Edit Result dialog will open.

For further information about the Bulk Editor, see the MediationZone® Ultra Format Man-
agement User's Guide.

6.7.1.3.3. Batch specific menu options

Batch menu

Explore Error UDR... Displays the Error UDR Viewer presenting the content of the Error UDR (if any). The
viewer will also be displayed if you double-click on a cell in the Error UDR column in
the table of entries.


Figure 165. The Error UDR Viewer

It is not possible to change the content of an Error UDR.


View Batch... Displays the selected batch as raw data.

Figure 166. View Batch

6.7.2. Configuring Searchable Fields in the ECS


If you want to search for UDRs with specific values on certain fields, you can configure these fields
in the Searchable Fields dialog.

Figure 167. Searchable Fields dialog

Note! These configurations have to be made before UDRs are sent to the ECS by the ECS For-
warding Agent.


To configure searchable fields:

1. In the Labels tab, click on the Add button.

The Add Label dialog opens.

Figure 168. Add Label dialog

2. Enter a name for the label in the Label field and click on the Add button.

The new label is added in the Defined Field Labels list.

3. Repeat the previous step for all the labels you want to add, and then click on the Close button to
close the dialog.

Note! The maximum number of labels you can add is ten.

4. Click on the Mappings tab to map UDR fields to the different labels.

5. Click on the Add button to open the UDR Browser.

6. Select the UDR type you want to add and click OK.

The UDR type is added in the UDR Types field and the UDR Browser is closed.

7. Repeat the previous step for all the UDR types you want to add.

8. Select a UDR type in the UDR Types field, and double click on the UDR Field column for a label
you want to associate a UDR field with.

The Select UDR Field dialog opens.

9. Select the field in the UDR and click OK.

The selected field is listed in the UDR Field column for the label and the dialog is closed.

10. Repeat the previous step for all the UDR Types you want to map fields from.

Note! This enables you to map one field from each UDR type to a certain label.

11. Click on the Save button when you are finished with your configuration.

The configuration is saved, and the next time the ECS receives UDRs from an ECS Forwarding
Agent, the configured UDR fields are added as meta data and can later be used for making searches,
see Section 6.7.5, “Searching the ECS”.


6.7.3. Restricted Fields Configuration


The purpose of the restricted fields configuration is to make it possible to mark certain fields as not
editable in ECS Inspector, i.e. make them read only. For instance you might want to guard some fields
in the UDRs in ECS from being edited to avoid losing traceability.

The configuration is available for all users having access to ECS Inspector in MediationZone® .
However only users belonging to the administrator group are allowed to change the configuration, i.e.
for all other users the configuration is available in read only mode.

Restrictions can be set on any UDR type, also within sub-UDRs. The restrictions are applied recursively
so if you have restrictions on a UDR field of a certain type, all fields below this will be blocked from
editing as well.

The restrictions defined in this configuration are valid only in ECS, i.e. the UDRs are still possible to
modify outside ECS (unless they have been explicitly defined as read only in UDR definition).

It is possible to import and export the restricted fields configuration if needed. The configuration is
located in the configuration tree under System->ECS->Restricted Fields.

Note! The access rights described above also apply to import and export of the configuration.
This means that any user can export the restricted fields configuration (as long as they have ECS
Inspector access) but only members of the administrator group may import the configuration.

6.7.4. Configuring Restricted Fields in the ECS


If you want to restrict certain fields in certain UDR types from being edited in the ECS Inspector, you
can specify those fields in the Restricted Fields dialog.

Note! Only users belonging to the administrator group are allowed to configure restricted fields.
However, all users can view the configured restrictions.

Figure 169. Restricted Fields dialog

To configure restricted fields:

1. Click on the Add button beneath the Restricted UDR Types section.

The UDR Internal Format dialog opens.

Figure 170. UDR Internal Format

2. Select the UDR Type for which you want to restrict fields from being edited and click on the Apply
button.

The UDR type is added in the Restricted UDR Types section.

3. Repeat the previous step for all the UDR Types you want to add, and then click on the OK button
to close the dialog.

4. Select one of the UDR Types that you have added and click on the Add button beneath the Restricted
Fields section.

The UDR Internal Format Browser opens.

Figure 171. UDR Internal Format Browser

5. Select a field you want to restrict from being edited in the UDR and click on the Apply button.

The selected field is added in the Restricted Fields section.

6. Repeat the previous step for all the fields you want to restrict from being edited, and then click on
the OK button to close the dialog.

7. When you are finished, click on the Save button in the ECS Restricted Fields dialog to save your
settings.

The configured fields will now be blocked from editing in the ECS Inspector.

6.7.5. Searching the ECS


The Search ECS dialog allows the user to filter out and locate erroneous UDRs and batches in the
ECS. Any search settings you make can also be saved as filters that you can use for future searches.
The Search ECS dialog is displayed when the Search... option is selected in the ECS Inspector. Select
UDRs or Batches to display UDR or batch specific options.

Figure 172. Search ECS dialog

6.7.5.1. Search Options


The entries in ECS can be either UDRs or batches. Depending on the selected type, different search
options are available.

6.7.5.1.1. Common Search Options

The following search options are available for both UDRs and batches in ECS:

Saved filters This field contains any saved filters you may have created. For more informa-
tion about how to create a filter, see Section 6.7.5.2, “Saving Search Settings
as a Filter”
Workflow The name of the workflow that sent the entry to ECS.
Agent The name of the agent that sent the entry to ECS.
Error Code An Error Code that has been defined in ECS Inspector. See Section 6.7.7,
“Error Codes” for further information.
Error Case A list displaying the Error Cases associated with the selected Error Code. If
the entry is too long to fit in the field, the field can be expanded by enlarging
the ECS Inspector in order to display the entire error case text.

An Error Case is a string, associated with a defined Error Code. Error Cases
can only be appended via APL code:

udrAddError( <UDR>,
<Error Code>,
<Error Case> );

Note! When Batch is selected, the <UDR> parameter is the error UDR.

MIM It is possible to configure a workflow to send descriptive MIM values with
the actual data (in the Workflow Properties window). This could be used to
refine the search in ECS.

Insert Period Use this Search option to search for UDRs/batches that were inserted into ECS
during a specific time period, either by specifying a start and end time, or by
using any of the predefined intervals, e.g. today, this week, etc.
Reprocessing Group Contains a list of all reprocessing groups.

Unassigned (UDR/Batch) will list all entries not associated with any repro-
cessing group.
Reprocessing State The entry state, which can be New, or Reprocessed. Only entries in state New
may be collected.

6.7.5.1.2. Search Options for Batches

The following search options are only available when searching for Batches in ECS:

Cancel Agent The name of the agent that cancelled the batch.
Cancel Message The error message that was sent as an argument with the cancelBatch func-
tion.
Error UDR Type The type of Error UDR that can optionally be sent with a batch, containing im-
portant MIM information (or any other desired information when the UDR is
populated via APL).

6.7.5.1.3. Search Options for UDRs

The following search options are only available when searching for UDRs in ECS:

UDR Type The type of UDR you want to search for.


Tag If you have tagged any UDRs, this option can be used for displaying only UDRs with
the stated tag.
Reprocessing State Change Period Use this Search option to search for UDRs that last changed state
during a specific time period, either by specifying a start and end time, or by using any of the
predefined intervals, e.g. today, this week, etc.
Search Fields If you select this check box, you can enter specific values for different UDR labels that
you have configured, see Section 6.7.2, “Configuring Searchable Fields in the ECS”.
Only UDRs containing the specified values will then be displayed in the ECS Inspector.

Example 41.

For example, with the following setting:

Figure 173.

Only UDRs with IMSI 2446888776 will be displayed in the ECS Inspector,
provided that the label IMSI has been mapped to the IMSI field in the UDRs.

Wild cards and intervals can also be used when entering the values for the fields; "*"
can be used for matching any or no characters, and intervals can be set by using
brackets "[ ]".

When using the * or [ ], the following rules apply:

• Only one wild card and one interval can be used per value.

• An interval is defined with a start value and an end value.

• If the interval consists of the same number of digits in the start and end value, the
match will be made on that number of digits, e.g. (a[001-002]) will match a001 but
not a01.

• If the interval consists of a different number of digits in the start and end value, the
match will be made on an appropriate number of digits in the UDR, e.g. (a[1-20]) will
match a1 and a20 but not a01 or a020.

• Only one interval can be stated within the "[ ]".

• The start and end values have to consist of numbers.

• The start value cannot be larger than the end value.

• The start value cannot have a larger number of digits than the end value, e.g. [0001-3].

If a setting for a value is not correct, an error message will be displayed as a tooltip.

In order for a UDR to pass the filter, all the defined values have to match.

Example 42.

For example, with the following setting:

Figure 174.

Only UDRs with:

• A Numbers starting with 468 AND

• B Numbers starting with 47 AND

• Location Area Code 10 to 20

will be displayed.

6.7.5.2. Saving Search Settings as a Filter


When the search settings have been made, you can select to save these settings as a filter.

Warning! If you have included search criteria that refers to parameters that have been defined
in your system, such as error codes, tags, search fields, etc, these filters will not work properly
in case you delete any of the defined parameters.

To save a filter:

1. Set the search options you want to have and click on the Save... button.

A dialog opens asking you to enter a name for the filter.

Figure 175. Entering Filter Name

2. Enter a name in the ECS Filter Name field and click OK.

The dialog will close and the new filter will appear in the Saved Filters field.

The next time you want to use the same search settings, click on the filter name in the Saved Filters
field and the saved search settings will be displayed.

Hint! Any saved filters can be renamed or deleted by selecting the filter and then clicking on
the Rename or Delete buttons.

6.7.6. ECS Inspector Table


Once the search is performed and matches are found the table in the ECS Inspector will be populated.
Each row represents one UDR or one batch.

Note! The ECS Inspector caches the result when the user populates a list (for instance the Error
Codes). This is done to avoid unnecessary population of workflow names, agent names and error
codes since it is costly in terms of performance. You have to click on the Refresh button in order
to repopulate the search window.

Figure 176. ECS Inspector - UDRs

Figure 177. ECS Inspector - Batches

6.7.6.1. Columns
The following columns are available in the ECS Inspector Table:

# The table sequence number.


Db ID A sequence number, automatically assigned to an entry.
Date Date and time when the entry was inserted in the ECS.
Workflow Name of the workflow from which the data was sent.
Agent Name of the agent that sent the entry to ECS. For UDRs, this will be the ECS forward-
ing agent and for batches a collection agent.
UDR Type This column is only available when you have selected to view UDRs and displays the
UDR type.
Cancel Agent This column is only available when you have selected to view batches and displays
the name of the agent that issued the cancelBatch request.
Cancel Message This column is only available when you have selected to view batches and displays
the message sent with the cancel request. The following example shows a user defined
request, defined with APL using an Analysis agent:

Example 43.

cancelBatch("undefined_number_prefixes.");

Error Code The Error Code as defined in ECS.


Error UDR This column is only available when you have selected to view batches and displays
the type of the Error UDR associated with the batch. If you double-click on a table
cell in this column, information about the whole Error UDR will be shown. The Error
UDR is populated either from the Workflow Properties window (see Figure 178,
“Workflow Properties window, Error tab”), or from an agent using APL (see Fig-
ure 532, “ECS Collection Configuration Dialog”). It can contain useful information
which is needed in a workflow reprocessing a batch. The fields of the Error UDR will
automatically appear as MIM values in the reprocessing workflow.

Figure 178. Workflow Properties window, Error tab

Example 44.

myErrorUDR eUDR = udrCreate( myErrorUDR );
eUDR.noOfUDRs = (long)mimGet( "IN", "Outbound UDRs" );
udrAddError( eUDR,
             "nokSOURCE",
             "Switch not found." );
cancelBatch( "Incorrect source.", eUDR );

The error UDR format is defined as any other format from the Ultra Format
Editor.

internal myErrorUDR{
long noOfUDRs;
};

RP Group Shows the reprocessing group that the entry is assigned to, if any. Assignments can
be made both manually and automatically. In the latter case, an Error Code must be
mapped to a reprocessing group.
RP State Initially, an entry has the reprocessing state New, that is, the entry has not been
reprocessed. In order for it to be collectible, it has to be assigned to a reprocessing group.
When collected by an ECS Collection agent, the state is changed to Reprocessed.

Note! Only entries in state New may be collected by the ECS Collection agent.
The state can manually be changed back to New if this is necessary. Only entries
set to Reprocessed can be removed.

MIM Values Double-clicking this field will display a new window, listing the MIM values. MIM
values to be associated with the entry are configured differently for the two types of
entries:

• Batch - From the Workflow Properties window.

• UDR - From the ECS forwarding agent.

Tags This column is only available when viewing UDRs and will display any tags that have
been set on the UDRs.
Last RP State Change This column displays the timestamp for when the reprocessing state was last
changed. The first time a UDR is sent to ECS, it will be in reprocessing state New, and this
column will display the timestamp for when the UDR was inserted into ECS. When
the UDR is collected for reprocessing or if the state is changed manually in the ECS
Inspector, this column will be updated with the current timestamp.
<search field label(s)> The labels for any search fields you may have configured will be displayed as
individual columns. These will only be available when viewing UDRs.

See Section 6.7.2, “Configuring Searchable Fields in the ECS” for further information
about configuring searchable fields.

6.7.6.2. Tagging UDR


When you have made a search for UDRs, you can select specific UDRs that are of interest, tag them,
and then save a filter based on the tag, which will display only the tagged UDRs whenever the filter is
used.

Warning! If you remove the tag that you have stated in the filter, the filter will not work properly.

To tag UDRs and save as a filter:

1. After having populated the ECS Inspector, select the UDRs you want to tag, click on the UDR
menu and select the Set Tag on UDR(s)... option.

A dialog opens asking you to enter a name for the tag.

Figure 179. Setting Tags

2. Enter the tag name in the Tag Name field and click OK.

The selected UDRs are now tagged.

Hint! If you change your mind, the set tags can be removed by selecting the option Clear
Tag(s) on UDR(s) in the UDR menu.

3. Open the Search ECS dialog by clicking on the Search... button.

4. Select the Tag check box and enter the tag you want to search for in the field to the right of the
check box.

5. Click on the Save As... button beneath the Saved Filters field.

A dialog opens asking you to enter a name for the filter.

6. Enter a name in the Saved Filter Name field and click OK.

The dialog will close and the new filter will appear in the Saved Filters field.

Figure 180. Tag Filter

The next time you want to view the tagged UDRs, select the saved filter setting when making your
search and only the tagged UDRs will be displayed in the ECS Inspector.

6.7.7. Error Codes


With the Error Codes option, errors can be specified in the ECS and associated with both UDRs and
batches. A reprocessing group can also be assigned an Error Code, and when an entry is inserted in
the ECS it will thereby automatically be available for collection by the ECS Collection agent.

There are two predefined Error Codes within the system, AGGR_UNMATCHED_UDR and
DUPLICATE_UDR, which are automatically set by the Aggregation and Duplicate UDR Detection
agents when the corresponding error condition is detected. All other Error Codes are defined by the user.

Apart from being accessible in the ECS Inspector, the error codes will also be used in ECS Statistics,
see Section 6.8, “ECS Statistics” and Section 5.5.21, “ECS Statistics Event”.

Note! Several Error Codes can be attached to the same UDR. This will affect the ECS Statistics
output. For further information, see Section 6.8.1.2, “Error Code Search”.

To create an Error Code, select Error Codes... from the Edit menu. This will display the ECS Error
Code dialog.

Figure 181. ECS Error Code dialog

Selecting Add will open the Add ECS Error Code dialog. This is where assignments of new Error
Codes are made.

Figure 182. ECS Error Code dialog

Error Code The Error Code that will be attached to UDRs or batches.
Description A description of the error code.
RP Group The reprocessing group that the Error Code will be assigned to.

A user may send optional information to the ECS from an Analysis or an Aggregation agent, as long
as an Error Code has been defined. To this Error Code, any information may be appended using APL.
See the example below.

Example 45.

An Error Case can be appended using APL code.

udrAddError( input, "CALL ID ERROR",
             "The callId: "
             + input.callId
             + ", Calling number: "
             + input.anum );

In this example the "CALL ID ERROR" is defined in the ECS Error Code dialog, found in
the Edit menu in the ECS Inspector.

Note! To clear the errors for a UDR the udrClearErrors function should be used. For further
information, see Example 153, “Reassigning to a Different Reprocessing Group”.
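
A minimal, hypothetical APL sketch of this pattern is shown below. The exact argument list of
udrClearErrors is assumed here (a single UDR argument), and the error code name and text are
examples only; the error code must already be defined in the ECS Error Code dialog:

// Assumed sketch: clear previously attached errors, then attach a new error case
udrClearErrors( input );
udrAddError( input, "CALL ID ERROR",
             "Reclassified after correction of the call id." );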

6.7.8. Reprocessing Groups


In order for ECS data to be reprocessable, it has to be assigned to a Reprocessing Group and have
status New. This group is then selected in the ECS Collection agent of the reprocessing workflow.

To create a reprocessing group, select Reprocessing Groups... in the Edit menu of the ECS Inspector.

Figure 183. ECS reprocessing group dialog

Click on the Add button to display the Add ECS Reprocessing Group dialog. The reprocessing group
must have a unique name.

Figure 184. Add ECS Reprocessing Group dialog

The Error UDR Type is only applicable for Batch Data Type. If no Error UDR is to be used in re-
processing, this information is not required.

Note! UDRs with several Error Codes mapped to different reprocessing groups cannot be
automatically assigned to a reprocessing group. They must be assigned manually.

6.7.9. Changing State


You can change the state of a selected number of entries or (if no entries are selected) all entries.
Possible states are either New or Reprocessed (that is collected by an ECS agent and reprocessed with
errors). Already processed data can be reset to New to enable recollection.

Note! If the number of matches is larger than the maximum number of UDRs to be displayed,
see Section 6.7.1.1, “Maximum Number of Displayed UDR Entries”, the state change will still
be applied for all matching entries.

To change state of selected entries:

1. If you only want to change the state for a few of the entries, select the entries in the table, otherwise
leave all entries unselected to apply the state change for all matching entries, and select the Set
State... option in the UDR menu.

The Set State dialog opens where you can see the total number of entries that will be affected.

Note! If the number of matching entries exceeds the maximum number of entries that can
be displayed in the ECS Inspector, the dialog will only tell you that all matching entries will
be affected. If you proceed, another dialog will open up informing you about the total number
of entries that will be affected, asking you if you want to continue.

2. Select to which state you want to set the entries in the Select state: list and click OK.

If the number of matching entries exceeds the maximum number of entries that can be displayed
you will get a question if you want to continue.

3. Click Yes if you want to continue.

If the number of matching entries exceeds the maximum number of entries that can be displayed,
a progress bar will show the progress of the state change. This may be useful if you are changing
the state for a large number of entries.

Otherwise, the state will simply be changed in the table and the timestamp in the Last RP State
Change will be updated.

6.8. ECS Statistics


To open the ECS Statistics, click on the Inspection button in the upper left part of the MediationZone®
Desktop window, and then select ECS Statistics from the menu. The system gathers the current
number of new and reprocessed UDRs and batches per error code in ECS. The Error Correction
System Statistics window allows inspection of these calculated values, as well as more specific views
as the data may be inspected down to Error Code level. The data may also be exported and printed.

Note! The ECS Statistics data is gathered and calculated with a system task called
ECS_Maintenance, see Section 16.1.4, “ECS_Maintenance System Task” for more information. If you
want to change the scheduling of the task, this is changed in the configuration for
ECS_Maintenance_grp.

If the ECS_Maintenance system task is scheduled to be executed with a time interval of less
than an hour, the statistical data will be gathered every hour.

Initially the Error Correction System Statistics window is empty. It is populated by performing a
search.

File menu Export... Shows the Export dialog, allowing the statistics to be exported.
File menu Print... Shows the Print dialog, allowing the statistics to be printed.
Edit menu Search... Shows the Search ECS Statistics dialog. For further information, see Sec-
tion 6.8.1, “Searching the ECS Statistics”.

6.8.1. Searching the ECS Statistics


To populate the ECS Statistics window, select Search... from the Edit menu and specify the search
criteria. Limitations to the search are used to find data of interest.

If no limitations are entered in the search dialog, a basic search is performed. For further information,
see Section 6.8.1.1, “Basic Search”.

Figure 185. Search ECS Statistics dialog

Data Type Will determine if the search will be made for Batches or UDRs.
Error Code A list of available Error Codes, as defined in ECS. To search for several Error Codes
the Add button may be selected to append further fields.
Period Refines the search by setting a time period when the data was entered into ECS.

Note! Only 100 000 entries at a time can be browsed. If the search results in more than 100 000
entries, bulk operations must be repeated for each multiple of 100 000.

6.8.1.1. Basic Search


If no limitations are entered in the search dialog, the basic search table is shown.

When selecting one row in the table, the spread of Error Codes is displayed in a pie chart. If up to four
Error Code types for that date are named, these will all be shown in the graph. If five or more Error Code
types are present, the three most common Error Code types will be shown, and the rest will be grouped
in the category "Other".

Figure 186. ECS Statistics window - Search without limitations

Date The date and time when the values were calculated.
New The number of new errors present in ECS on the given date.
Reprocessed The number of reprocessed errors present in ECS on the given date.
Value Type Enables the possibility to display graphical statistics for either New or Reprocessed
UDRs or batches separately.

6.8.1.2. Error Code Search


If a specific Error Code search is done, three new columns related to the Error Codes will be added to
the table.

Figure 187. ECS Statistics window - Search of Error Codes

Error Code This column is only visible when the search is made on Error Codes.
Newest The last time the error occurred.
Oldest The first time the error occurred.
Error Code Report One line in the graph shows the number of UDRs attached with the selected
Error Code.

7. Tools
MediationZone® provides different Tools to, for example, view logs, statistics, and pico instance in-
formation, and to import and export configurations.

This section describes all MediationZone® Tools, except for the Ultra Format Converter and the
UDR File Editor. For further information about the Ultra specific tools, see the MediationZone®
Ultra Management user's guide.

To open a MediationZone® Tool, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select the tool you want to open from the menu.

7.1. Menus and Buttons


The contents of the Desktop menus and button panel may change depending on which Tool has
been opened in the currently displayed tab. The buttons and menus are described in the following
sections or in the respective user's guide.

The Desktop standard menus and buttons are described in Section 2.3.2.1, “Desktop Standard Menus”
and Section 2.3.2.2, “Desktop Standard Buttons”.

7.2. Access Controller


To be able to operate MediationZone® , you need to be defined as a user in the system. Your access
to the various MediationZone® applications is defined by the access group that you are assigned to.
The Execute permission means that members of an access group can use a certain application. For
applications that include configurable parameters, you also apply the Write permission.

Note! Only members of the Administrator group have access to the Access Controller, hence
only administrators may add users to the system. Only one user may use the Access Controller
at a time.

For further information about permissions, see Section 7.3.5, “Properties”.

To open the Access Controller, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select Access Controller from the menu.

7.2.1. Users Tab


In MediationZone® the default user, mzadmin, will always have full permissions for any activity.

It is recommended that the password for mzadmin is changed and kept in a safe place. Personal
accounts should instead be created and used for handling the system, in order to track changes.

To Add a User:
1. Open the Users tab.

2. From the Access Controller main menu select File and then Add.

3. Fill in the details according to the description below.

Figure 188. Access Controller Window - Users Tab

Enable User Check to enable the user's predefined access rights


Username Enter the name of the user.

Valid characters are: A-Z, a-z, 0-9, '-' and '_'


Full Name Enter a descriptive name of the user.
Email Enter the user's e-mail address. This address will be automatically applied to ap-
plications from which e-mails may be sent.
Password Enter a password for the user account.
Verify Password Re-enter the password.
Group Enter a comma delimited list of all the access groups that the user is a member
of.
Member If enabled, the user is registered as a member of the specific group.
Default If enabled, this group is set as default group for the user. By default, this group
will have read, write and execute permissions for new configurations created by
the user.

For details of how to change your password see Section 2.3.2.1.1, “The File Menu”.

7.2.2. Access Groups Tab


Administrator is the default access group. The Administrator group has full access to all the activities
and functions in the system.

To add a new group to the system, select the Access Groups tab and then select Add from the File
menu or from the toolbar.

Figure 189. Access Controller Window - Access Groups Tab

Name Enter the name of the group.

Valid characters are: A-Z, a-z, 0-9, '-' and '_'


Description Descriptive information about the group.
Application This column is a list of all the applications in the system.
Execute Check to enable the members of the access group to start an instance of the rel-
evant application. Clear to prohibit the access group members from using it.
Write Check to enable the members of the access group to edit and save a configuration
within the relevant application. Clear to prohibit the user from doing so.

Note! The main Desktop menu is divided into Configuration, Inspection,
and Tools. Configuration enables you to create configurations. Inspection
enables you to view data that is produced by workflows. Tools enables
you to view data that is generated by the system. When you define an
Access Group in the Access Controller, you can only check Write for
Inspection and Tools applications, so that users are able to manipulate
data that is either generated by a workflow, or by the system. Configuration
Write access is set per configuration from the Set Permissions view.
For further information see Section 7.3.5, “Properties”.

Application Category A drop down menu that allows the user to filter on application type. Options
are All, Configuration, Inspection, Tools, or Web interface.
Select All Enables Write (if applicable) and Execute for all permissions in the chosen
category.
Deselect All Disables Write and Execute for all permissions in the chosen category.

For information about how to modify configuration permissions, see Section 7.3, “Configuration
Browser”.

7.2.3. Authentication Method Tab


User authentication is by default performed in MediationZone® . As an alternative, you can connect
MediationZone® to an external LDAP directory for delegated authentication. This facilitates automation
of administrative tasks such as creation of users and assigning access groups.

If the external authentication server returns an error or cannot be accessed, MediationZone® will perform
the authentication internally as a fallback method.

The Authentication Method tab is only available if LDAP Authentication is installed.

Note! Configuration performed from the Users Tab has no impact on external authentication
servers.

7.2.3.1. Preparations
This section can be ignored if authentication is to be performed by MediationZone® .

7.2.3.1.1. Directory Structure

The LDAP directory that is used for authentication must conform to the following requirements:

1. The cn attribute of group entries must match an access group defined in MediationZone® .

Note! MediationZone® performs case sensitive comparisons of the cn attributes and access
groups.

2. For each user in a group entry, the memberUid attribute must be set.

3. All group entries must belong to the objectclass posixGroup.

4. All user entries must belong to the objectclass posixAccount.
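
As an illustration only, a simplified directory fragment that satisfies these requirements could look
as follows. The base DNs reuse the example values shown in the configuration examples below, the
user name jdoe is hypothetical, the group name must match an existing access group (here the built-in
Administrator group), and mandatory attributes required by your LDAP schema (such as gidNumber
and uidNumber) are omitted for brevity:

dn: cn=Administrator,ou=groups,dc=digitalroute,dc=com
objectClass: posixGroup
cn: Administrator
memberUid: jdoe

dn: uid=jdoe,ou=users,dc=digitalroute,dc=com
objectClass: posixAccount
uid: jdoe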

7.2.3.1.2. Secure Access

The following steps are required before configuration of authentication with LDAPS or LDAP over
TLS:

1. Obtain the server certificate for the authentication server from your LDAP administrator.

2. Start a command shell and copy the server certificate to the platform host.

3. Change directory to $JAVA_HOME/lib/security on the platform host.

4. Install the server certificate using the Java keytool command.

keytool -import -file <certificate> -keystore cacerts
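
For example, assuming the server certificate has been copied to the platform host as
ldapserver.crt (the file name is an example only), the command could be run as follows; you will
typically be prompted for the keystore password:

$ keytool -import -file ldapserver.crt -keystore cacerts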

7.2.3.2. Configuration
To setup the authentication method, open the Authentication Method tab and fill in the details accord-
ing to the description below.

Figure 190. Access Controller Window - Authentication Method Tab

Authentication Method The authentication method to be used. The following settings are available:

• Default

• LDAP

The default setting is authentication performed by MediationZone® .

When using LDAP, you may connect via LDAPS by entering ldaps:// in the
URL.
URL The URL for the external authentication server. The default ports, 389 for LDAP
and 636 for LDAPS, are used unless other ports are specified in the URL.

Example 46. Example of LDAP URL

ldap://ldap.example.com:389

Example 47. Example of LDAPS URL

ldaps://ldap.example.com:636

Try Connection Tests the connection to the authentication server. LDAP attributes and settings
other than the URL are not used when testing the connection.
User Base DN The LDAP attributes for user lookups in the external authentication server. The
substring %s in this value will be replaced with the Username entered at login to
produce an identifier that is passed to the LDAP server.

Example 48. Example of User Base DN

uid=%s,ou=users,dc=digitalroute,dc=com

Group Base DN The LDAP attributes for group lookups in the external authentication server.

Example 49. Example of Group Base DN

ou=groups,dc=digitalroute,dc=com

The name of the groups must be identical to the names configured in Access
Groups.
TLS Enables Transport Layer Security.

Note!

The following must be considered when using TLS:

• LDAPS and TLS is not a valid combination.

• The URL must contain a fully qualified DNS name or the authentication
will fail.

• The default LDAP port, 389, should be used.

The selected authentication method becomes effective when the configuration is saved.

Note! Authentication for the user mzadmin is always performed by MediationZone® regardless
of the selected authentication method.

7.2.4. Enhanced User Security


The security user control can be enhanced by changing the property
mz.security.user.control.enabled in the platform.xml file. By default this property is set to
value="false". If it is set to true, a number of rules regarding passwords apply as soon as the platform
is restarted.

The default value for allowed password age is set to 30 days for administrators and 90 days for users.
These limits can be modified in the file platform.xml.

mz.security.max.password.age.admin

mz.security.max.password.age.user

The values are set in days and are only valid in combination with the user.control property.
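
For illustration only — the exact element syntax in platform.xml may differ in your installation —
the properties could be set along the following lines, assuming the <property name="..." value="..."/>
form suggested by the value="false" default mentioned above:

<!-- Hypothetical sketch: enable enhanced user security and set password age limits (in days) -->
<property name="mz.security.user.control.enabled" value="true"/>
<property name="mz.security.max.password.age.admin" value="30"/>
<property name="mz.security.max.password.age.user" value="90"/>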

Note! The enhanced user security settings are not applicable when using LDAP as authentication
method.

7.2.5. Enhanced Security Password Rules


If the enhanced user security property is set to True, the password rules are:

The password must:

• Be at least eight characters long

• Include at least one special character and one that is either a number or capital letter

The password should not:

• Contain more than three identical characters in an uninterrupted sequence

• Be identical to the user name

• Include the user name

• Be identical to any of the recent twelve (minimum) passwords used for the user ID

The default value of a password age is 30 days for administrator, and 90 days for user.

When an administrator creates a new user, a password should be assigned to the user. When the account
is used for the first time the user is prompted to change the password.

Note! Three failed login attempts will disable the user account. If this happens contact your
system administrator.

Note! The enhanced security password rules are not applicable when using LDAP as authentic-
ation method.

7.3. Configuration Browser


The Configuration Browser gives a view of all configurations in MediationZone® . The user can
easily filter which configurations are shown, by selecting configurations of a specific type or configurations
owned by a specific user. By default all configurations are displayed. The browser supports a set of operations
that can be performed for the configurations; cut, copy, paste, delete, rename, encrypt, decrypt and
validate. For each configuration you can also open a Properties dialog where permissions can be set
and where you can view history, references and basic information, see Section 2.2.6, “Properties used
for Desktop”. The browser also supports drag and drop for moving and copying configurations between
folders. Holding down the CTRL-key while dragging a configuration will result in a copy operation.

From the Configuration Browser you can also open the Configuration Tracer. In Configuration
Tracer you see both active configurations, and historical ones.

To open the Configuration Browser, click the Tools button in the upper left part of the Medi-
ationZone® Desktop window, and then select Configuration Browser from the menu.

Note! When using the default authentication method, configurations created by LDAP authen-
ticated users may not appear in the Configuration Browser . In order to make these configura-
tions visible, change the owner in Properties under the right-click menu of the Configuration
Navigator. The new owner must be listed in the Users tab of the Access Controller.

Figure 191. The Configuration Browser

7.3.1. Menus
In this section the different menus in the Configuration Browser, and their respective options are
described.

7.3.1.1. File menu

New Folder... Select this option to create a new folder.

7.3.1.2. Edit menu

View/Edit Configura- Available when at least one configuration is selected in the browser.
tion(s)...
Select this option to open the selected configuration.

Export Configura- Available when at least one configuration is selected in the browser.
tion(s)
Select this option to export the selected configurations. The System Exporter
window will open with the configurations pre-selected.

Note! When exporting from the Configuration Browser, configuration
dependencies are not automatically selected. This can be achieved by
selecting the "Select Dependencies" check box in the System Exporter
window. For further information see Section 7.9, “System Exporter”.

Cut Select this option to put one or more configurations on the clipboard for
moving the configuration to another location. Select the menu option Paste
in the folder where the configurations should be stored.

This option is not applicable if the configuration is locked. For further in-
formation see Section 2.1.2, “Locks”.
Copy Select this option to put one or more configurations on the clipboard for
copying the configurations to another location. Select the menu option Paste
in the folder where the copied configurations should be stored.
Paste Select this option to store configurations that have been cut or copied to the
clipboard into a folder.

Delete... Select this option to delete the selected configuration(s). If the configuration
is referenced by another configuration, a warning message will be displayed,
informing you that you cannot remove the configuration. For further information
see Section 7.3.5.3, “The References Tab”.
Rename... Select this option to change the name of the selected configuration. Take
special precaution when renaming a configuration. If, for example, an APL
script is renamed, workflows that are using this script will become invalid.
This is especially important to know when renaming folders containing many
ultra format configurations or APL. Renaming a folder with ultra formats
or APL configurations will make all referring configurations invalid.
Encrypt... Select this option to encrypt the selected configurations.
Decrypt... Select this option to decrypt the selected configurations.
Validate... Select this option to validate the configuration. A validation message will
be shown to the user.

Properties Select this option to launch the Properties dialog for the selected configur-
ation. For further information, see Section 7.3.5, “Properties”.

Configuration Tracer Select this option to launch the Configuration Tracer. For further informa-
tion, see Section 7.3.4, “Configuration Tracer”.

7.3.1.3. View menu

Filter Configurations Select this option to open the Filter Configurations dialog:

• From the Types tab you select the configurations that you want to see in
Configuration Browser

• From the Owners tab you select the owners whose configurations you want
to see in Configuration Browser

Configuration Types This menu option contains a sub menu with all the MediationZone® configur-
ation types and allows the user to filter the current view in the Configuration
Browser to only display configurations of certain types.
Owners This menu option contains a sub menu with all the MediationZone® users
and allows the user to only display configurations that are owned by certain
users.

7.3.2. The Folder Pane


On the left side of the Configuration Browser all folders in the system are displayed. The content of
the folder is shown in the configuration table. If you right click on a folder, a pop-up menu will be
displayed, allowing the user to rename or delete the folder as well as create a new folder. The built-in
Default folder cannot be modified.

Each folder listed in the folder pane has a number attached to its name. This number indicates how many
configurations are stored in that folder. The number changes when a filter is used, which makes
it easy to see which folders contain configurations of a specific type.

7.3.3. Configuration Browser Table


Displays all configurations for the selected folder. If you right click on a configuration, a pop-up menu
will be displayed from which you can perform most of the actions that are listed in the Edit menu.

The columns in the Configuration Browser table are:

Column Description
Type Contains an icon representing the application type.
Name Displays the name of the configuration.
Lock Indicates whether the configuration is locked or not.
Perm Displays the permissions granted to the current user of the configuration. Permissions
are shown as R (Read), W (Write) and X (eXecute). If the configuration is encrypted,
an E will also be added. For further information about permissions, see Sec-
tion 7.3.5.2, “The Permissions Tab”.
Owner Displays the username of the user that created the configuration. The owner can:

• Read, modify (write), and execute the configuration

• Modify the permissions of user groups to read, modify, and execute the configur-
ation.

Modified By Displays the username of the user that made the last modifications to the configur-
ation.
Modified Date Displays the date when the configuration was last modified.

7.3.4. Configuration Tracer


The Configuration Tracer provides you with two View Modes to choose from; the Active view mode
lists configurations that are saved in the system, and the Historic view mode lists historical
configurations, that is, configurations that have been deleted.

The Configuration Tracer also provides you with the unique identification key that the system gives
every configuration.

Figure 192. The Configuration Tracer

7.3.4.1. Menus
Edit Menu

View/Edit Configuration... This option is enabled when in Active mode, and when a Configuration
is selected. When you select this option, the Configuration opens in a tab.

Restore Applicable only in the Historic view mode.

Click to restore the selected configuration. Once you click Restore
and confirm its validity, the configuration is active and available for
use.

View Menu

Active Active mode will display the same configurations as those displayed in the Configuration
Browser.
Historic Historic mode will display configurations that have been removed from the system. The
user may select to restore such a configuration.
Refresh Select this option to refresh the information in the table.

7.3.4.2. Table
The table in the Configuration Tracer contains the following columns:

Type Contains an icon representing the application type.


Folder Displays the name of the folder that contains the configuration.
Name Displays the name of the configuration.
Enc Indicates if the configuration is encrypted or not. If it is encrypted, this column will
display an 'E', otherwise a dash '-'.
Ver Displays the configuration version.
Key Displays the internal key used by MediationZone® to identify the configuration.
Modified By Displays the user name of the user that made the last modifications to the configur-
ation.
Modified Date Displays the date when the configuration was last modified.

7.3.5. Properties
To open the Properties dialog, either right click on a configuration, or select a configuration, click on
the Edit menu and select the Properties option.

Figure 193. The Properties Dialog Box

This dialog contains four different tabs; Basic, which contains basic information about the
configuration, Permissions, where you set permissions for different user groups, References, where you can see
which other configurations are referenced by the selected configuration, or refer to the selected
configuration, and History, which displays the revision history for the configuration. The Basic tab is
displayed by default.

7.3.5.1. The Basic Tab


The Basic tab is the default tab in the Properties dialog and contains the following information:

Name Displays the name of the configuration.


Type Displays the type of configuration.
Key Displays the internal key used by MediationZone® to identify the configuration.
Folder Displays the name of the folder in which the configuration is located.
Version Displays the version number of the configuration, see the History tab for further
information about the different versions.
Permissions Displays the permissions granted to the current user of the configuration. Permissions
are shown as R (Read), W (Write) and X (eXecute). If the configuration is encrypted,
an E will also be added. For further information about permissions, see Section 7.3.5.2,
“The Permissions Tab”.
Owner Displays the username of the user that created the configuration. The owner can:

• Read, modify (write), and execute the configuration

• Modify the permissions of user groups to read, modify, and execute the configuration.

Modified by Displays the user name of the user that made the last modifications to the configuration.
Modified Displays the date when the configuration was last modified.

If you want to use the information somewhere else you can highlight the information and press CTRL-
C to copy the information to the clipboard.

7.3.5.2. The Permissions Tab


The Permissions tab contains settings for what different user groups are allowed to do with the con-
figuration:

Figure 194. The Permissions Folder

As access permissions are assigned to user groups, and not individual users, it is important to make
sure that the users are included in the correct user groups to allow access to different configurations.

R W X E Permission Description
R - - - Allowed only to view the configuration, given that the
user is granted access to the application.
- W - - Allowed to edit and delete the configuration.
- - X - Allowed only to execute the configuration.
R W - - Allowed to view, edit and delete the configuration, given
that the user is granted access to the application.
- W X - Allowed to edit, delete and execute the configuration.
R - X - Allowed to view and execute the configuration, given
that the user is granted access to the application.
R W X - Full access.
- - - E Encrypted.

7.3.5.3. The References Tab


The References tab contains information about which other configurations the current configuration
refers to, and which other configurations the current configuration is referenced by:

Figure 195. The References Folder

The References tab contains two sub folders; Used By, which displays all the configurations that use
the current configuration, and Uses, which displays all the configurations that the current configuration
uses.

If you want to edit any of the configurations, you can double click on it and it will be opened in a tab.

7.3.5.4. The History Tab


The History tab contains version information for the configuration:

Figure 196. The History Folder

In the version table, the following columns are included:

Version Displays the version number.


Modified Date Displays the date and time when the version was saved.
Modified By Displays the user name of the user that saved the version.
Comment Displays any comments for the version.

If you want to clear the history for the configuration, click on the Clear Configuration History button.
The version number will not be affected by this.

7.4. Configuration Monitor


In the Configuration Monitor, the status of configuration operations, such as save and update, can be
monitored. By default, only the user's own operations are shown, but all operations can be shown by
selecting the Show all Operations check box. In the table, all active operations and their progress are shown.

When an operation results in an exception during recompilation of configurations, or a dependent
configuration switches state, a warning is raised and the operation row will stay in the list until it is
manually deleted. A warning sign is also placed on top of the icon in the status bar.

To open the Configuration Monitor, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select Configuration Monitor from the menu.

Figure 197. The Configuration Monitor View

7.4.1. Menus and Buttons


In this section the different menus and buttons in the Configuration Monitor, and their respective
options are described.

7.4.1.1. Edit menu

Delete Select this option to delete the selected configuration(s) from the list.

7.4.1.2. View menu

Details Select this option to display the details about warnings that have occurred. For more information
regarding the details, see Section 7.4.3, “Details”.

7.4.2. Configuration Monitor Table


Displays all operations that are active or have failed. If you right-click on an operation, a pop-up menu
will be displayed from which you can perform the actions that are listed in the Edit and the View
menus.

If Show all Operations is selected, operations from all users will be shown.

The columns in the Configuration Monitor table are:

Columns Description
User Name Specifies which user and Desktop host initiated the operation.
Operation Name Specifies the operation that is executing.
Progress The progress of the steps for each operation is shown in this column. For example,
if an Ultra format has been saved and three workflows are dependent on this format,
the save operation would consist of four steps.

7.4.3. Details
To display the details for an operation, select the operation in the Configuration Monitor table and
click on the Details button.

Figure 198. Operation warnings

The displayed details are divided into two parts. The first section lists the dependent configurations that
have changed their states between invalid and valid, and the second part contains a selectable list of
exceptions that occurred during compilation. The exceptions and their stack traces can be viewed by
selecting the exception and clicking on the View Trace button.

7.5. Documentation Generator


Using the Documentation Generator, you can create documentation for the configurations that you
have created.

7.5.1. To Generate Automated Documentation


To open the Documentation Generator, click the Tools button in the upper left part of the Medi-
ationZone® Desktop window, and then select Documentation Generator from the menu.

Figure 199. The Documentation Generator View

To generate documentation on the configurations in the system, you must select the Output Target
directory in which you want to generate the documentation. To select a directory click the Browse...
button, select the target directory, and click the Save button. Click the Generate button. You can then
open the generated HTML file (index.html) in your web browser from the selected target directory.

Figure 200. Documentation displayed in web browser

7.5.2. Content of Automated Documentation


The documentation generated includes up-to-date information on the saved configurations. The sections
included vary according to the type of configuration documented and how it is used. The possible
sections included are the following:

Section Description
Workflow An image of the configuration. This section is only included for workflow configurations.
Globals The variables and constants that are declared globally. This section is only included for
APL Code configurations.
Functions The APL functions. This section is only included for APL Code configurations.
Description The content provided by the user in the configuration profile, using the Documentation
dialog. For example, you can provide a description and the purpose of the configuration
in the dialog.

For further information on how to populate this section, see Section 2.3.3.3, “Document-
ation”.
Uses A list of all the configurations that the configuration uses, for example, APL code or
Ultra format.
Used By A list of all the configurations that use the configuration.

Access A list of the users who have access to the configuration.

7.6. Execution Manager


With Execution Manager you enable, activate, and monitor multiple workflow groups.

To open the Execution Manager, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select Execution Manager from the menu.

Figure 201. The Execution Manager View

The Execution Manager view is made up of three tabs: Overview, Running Workflows, and Detail Views.
You open a Detail View tab for every workflow group, or groups, that you want to monitor.

7.6.1. The Overview Tab


The Overview tab is comprised of the Status box and the workflow groups table.

7.6.1.1. The Status Box


In the Status box you see a log of workflow and workflow group events, which are also displayed
in the System Log tool. See Figure 201, “The Execution Manager View”.

You can clear the Status box of messages by right-clicking in it and then clicking Clear Area.

7.6.1.2. The Workflow Groups Table


The workflow groups table provides you with information about all the workflow groups that are
configured in the system:

Column Description
Name The workflow group name
Mode If the workflow group is Enabled it can be activated by its Scheduling Criteria.
If it is Disabled, the workflow group can only be started manually.
State The workflow group current state
Runtime Info Specific information about the workflow group state
Started By The workflow or workflow group could have been started by either one of
the following:

• Schedule: The workflow group scheduling criteria

• A user

• The parent workflow group

Start Time The last time an execution was started


Next Run The next scheduled execution time.

Note: If the workflow is not enabled this space is empty.


Next Suspend Action The suspend action is either the scheduled execution suspension of a config-
uration (workflow or workflow group), or the removal of such a suspension
(activation enabling).

7.6.1.3. The Right-Click Menu


Right-clicking a row in the table opens the following menu:

Entry Description
Open in Detail View Opens the selection in a separate tab.
View Abort Message Opens an error dialog box that specifies the reason for aborting execution of
the particular workflow group or its workflow member.
Start Triggers execution of your selection.
Stop Stops the execution of your selection.
Enable/Disable Select if the workflow group should be enabled. This value overrides the one
in the workflow groups table which is described in Section 7.6.1.2, “The
Workflow Groups Table”
Open in Editor Opens a workflow configuration or a workflow group in a new tab in Desktop.
Search Opens a Search and Filter bar at the bottom of the Execution Manager. Enables
you to search and filter the list of workflows in the workflow groups table.

Using all lower case letters in the search and filter text field will result in case
insensitive search and filtering. If upper case letters are used anywhere in the
text field the search will be case sensitive.

Note: You can open the Search Bar from the View menu as well.

7.6.2. The Running Workflows Tab


This tab displays the workflows that are currently running as well as the ones that are unreachable.

Note! On this view you stop a workflow by right-clicking it and then selecting Stop.

The Running Workflow tab table contains the columns that you find on the Overview tab, as well as
the following:

Column Description
EC The IP address of the computer on which the workflow is running
Debug On or Off. See Toggle Debug Mode in Section 4.1.11.3, “Viewing Agent Events”.
Backlog The number of files that are yet to be processed.

The value on Backlog is identical to the Source Files Left MIM value.

Throughput This column displays the throughput for the workflow's collecting agent. The value
shows either number of UDRs or bytes, depending on what the collecting agent produces,
and is updated every five seconds as long as the workflow is being executed.

7.6.3. The Detail Views Tab


The Detail View tab displays the workflow group, or groups, that you select from the Overview tab.

Note!

• Detail views are saved as part of the user preferences, and therefore enable you to export and
import them along with user information.

• A workflow group on a Detail View that is marked with a yellow warning icon, is invalid.

The table on this tab contains the columns that you see on the Running Workflows tab as well as the
following:

Column Description
Prereq A comma delimited list of the workflow group prerequisites settings. See
Section 4.2.2.5.1, “Members Execution Order”
Next Suspend Action The suspend action is either the scheduled execution suspension of a config-
uration (workflow or workflow group), or the removal of such a suspension
(activation enabling).

7.6.3.1. The Right-Click Menu


Right-clicking a row in the table opens a menu. See a description of all entries but the following in
Section 7.6.1.3, “ The Right-Click Menu”:

Entry Description
Debug On/Off Turns on/off debug information for the selected workflows. See Toggle Debug
Mode in Section 4.1.11.3, “Viewing Agent Events” or Section 4.1.8.4, “Execution
Tab” for more information.
Open in Monitor Applies only to workflows. Opens the selected workflow in the workflow monitor.

Note: Not applicable for workflow groups.

To Open a Detail View:


1. Right-click the workflow group, or a selection of multiple workflow groups, that you want to see
in a Detail View.

2. Select Open in Detail View; the Input dialog box appears.

3. Enter a unique name for the view and click OK; a new tab opens and displays the Workflow groups
and their members in a separate table.

7.6.4. Managing the Detail Views Tabs


You can choose between hiding and removing the Detail Views Tabs from the View Manager dialog
box.

To Manage Detail View Tabs:


1. From the View menu select Manage Detail Views; the View Manager dialog box opens.

2. Check to select the tabs that you want on display, and clear the tabs that you want hidden.

3. To remove a tab:

• Use the Remove button for your checked entries

• OR, from the View menu select Close Current View

• OR, right-click the tab that you want to close and select Close

• OR, close the tab by clicking the x button

7.7. Pico Manager


The Pico Manager configuration contains all the available hosts that run pico instances, including
their current permissions. See Pico Instance in Terminology section.

From the Pico Manager it is possible to deny hosts of pico instances (pico hosts) access to the system.
By default, access is granted to added hosts, until the privilege is manually removed.

Execution Contexts must be registered on a pico host before use. The registration is done from the
Pico Manager. For further information, see Section 7.7.1.1, “Adding Pico Hosts”.

You can also register groups of ECs/ECSAs, which can make configuration easier as the workflows,
or workflow groups, will then be able to address the configured groups instead of specific ECs/ECSAs.

The registration is done from the Pico Manager. For further information, see Section 7.7.2.1, “Adding
EC/ECSA Groups”.

By default, instances of Desktop, mzsh, and Service Contexts always have access to the system and
you do not need to register these on pico hosts. You can change this behavior by setting the property
mz.dynamicconnections in the platform.xml file:

• true - Instances always have access.

• false - Instances must be registered on pico hosts for access.

The Platform must be restarted for the change to become effective.
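
Assuming the same <property name="..." value="..."/> element form that is used for other properties in this guide (a sketch only; verify against your own platform.xml), disabling dynamic connections could be expressed as:

<property name="mz.dynamicconnections" value="false"/>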

To open the Pico Manager, click the Tools button in the upper left part of the MediationZone® Desktop
window, and then select Pico Manager from the menu.

Figure 202. The Pico Manager - Pico Tab


The Pico Manager configuration contains two different tabs; Pico, which is used for registering pico
clients, and Groups, which is used for registering groups of ECs/ECSAs. The Pico tab is displayed
by default.

7.7.1. The Pico tab


The Pico tab contains the following settings:

IP Address The IP address of the pico client. IPv6 addresses will be displayed with long notation
even if they have been entered with short notation.
Access Shows if the host is authorized to connect to the MediationZone® system.

7.7.1.1. Adding Pico Hosts


As described in the previous section, access may be denied/allowed for specified hosts. In case a host
is disabled while connected to the system, the change will take effect the next time a pico client on the
host tries to connect to the system.

Figure 203. Add Pico Host

IP Address Enter the IP address of the pico host. If using an IPv6 address, you can enter
the address with either long or short notation; the system will then display the
address with long notation.

For further information about IPv6 addresses, see the System Administrator’s Guide.
Deny Access Indicates whether the pico host is denied access to connect to MediationZone® .
Instances A list of the Pico instances that you add. See Pico Instance in Terminology document.

Note! You can add more than one instance to a specific host.

7.7.1.1.1. Adding an Execution Context to the System

1. Make sure the new Execution Context is properly installed in the local area network. It has to be
assigned a host name and have the prerequisite software installed according to the MediationZone®
Installation Instructions - User Guide.

2. Make an Execution Context-only installation of MediationZone® on the host. Make sure to give
the new Execution Context a unique name.


3. Register the new Execution Context in the Pico Manager. Make sure to enter the name exactly as
entered in the previous step.

4. Start the Execution Context on the new host by entering the command:

$ mzsh startup ec2

where ec2 is the name given to the Execution Context.

7.7.1.1.2. Adding a Desktop to the System

1. Make sure the Desktop host is properly installed in the local area network and has the prerequisite
software installed according to the MediationZone® Installation Instructions - User Guide.

2. Make a Desktop-only installation of MediationZone® on the new host.

3. If the property mz.dynamicconnections is set to false in the platform.xml file,
register the new Desktop in the Pico Manager.

4. Start the Desktop on the new host:

i. In Microsoft Windows, on the desktop, click the MediationZone® icon.


OR

From the Start menu select Programs and then select MediationZone® Desktop.

ii. In Unix, enter the command

$ mzsh desktop

7.7.2. The Groups Tab


The Groups tab is used for registering groups of ECs/ECSAs.

Figure 204. The Pico Manager - Groups Tab

The Groups tab contains the following columns:

Group Displays the name of any registered groups. This name will be selectable when configuring
Execution Contexts in the Execution tab in the workflow properties, or in the workflow
group configuration. See Section 4.1.8.4, “Execution Tab” for further information.
Members Displays the names of the ECs/ECSAs that have been included in the group.

7.7.2.1. Adding EC/ECSA Groups


To register a group of ECs/ECSAs:


1. In the Groups tab, click on the Add button.

The Add EC Groups dialog opens.

This dialog contains the following settings:

Group Name The name of the group.


EC/ECSA These radio buttons determine if the group will contain ECs or ECSAs.
Execution Context The ECs or ECSAs that are included in the group.

2. Select if the group should contain ECs or ECSAs by clicking on the corresponding radio button.

3. Click on the Add button.

The Add Execution Context dialog opens.

4. Click on the Execution Context drop-down list, select one of the ECs/ECSAs you want to add,
and click on the Add button.

The selected EC/ECSA is now added in the Execution Context section in the Add EC Groups
dialog.

5. Repeat step 4 for all the ECs/ECSAs you want to add and then close the dialog by clicking on the
Close button.

6. When you are satisfied with your group configuration, click on the Add button in the Add EC
Groups dialog.

The group is now added in the Groups tab in the Pico Manager configuration.

7. If you want to add additional groups, alter the configuration in the Add EC Groups dialog and click on
the Add button for each group.

8. When you have created all the groups you want to have, click on the Close button.

9. Click on the Save button in the Pico Manager configuration.

The configured groups are now available when configuring Execution settings in Workflow Properties.
See Section 4.1.8.4, “Execution Tab” for further information.

7.8. Pico Viewer


The Pico Viewer window displays a list of all pico clients currently online in the system. Pico clients
are grouped in pico-started instances. For instance, EC1 in the following figure is a pico-started instance,
while its contents, Batch Storage and Execution Context, are referred to as pico clients.

To open the Pico Viewer, click the Tools button in the upper left part of the MediationZone® Desktop
window, and then select Pico Viewer from the menu.


Figure 205. Pico Viewer Window

Pico Instance Name of the MediationZone® pico instance/client. For each and every one of the
pico instances - Platform, Desktop, EC, etc - a JVM (Java Virtual Machine) is
started.

Allows the user to remove a stand-alone Execution Context from the
system in case it is unreachable. The Platform will never automatically unregister
such an instance, since it is accepted that it can reside on an unreliable network.
Secure Indicates if the Pico instance is SSL secured or not.
Start Time The time the pico instance was started.
Memory Used, available, and maximum memory on the hosting JVM.
Response [ms] The time it took in milliseconds for the local Desktop to invoke a ping on the pico
instance.

7.8.1. Tool-Tip Information


Resting the mouse pointer on any of the objects in the Pico Instance column will display information
on the OS and JVM on which it is running.

Figure 206. The Tool-Tip of a Pico Instance

Resting the mouse pointer on any of the objects in the Memory column will display detailed information
on the memory usage on the hosting JVM.

Figure 207. The Tool-Tip of the memory usage

7.9. System Exporter


System Exporter enables you to export data from your system either into a ZIP file or to a specific
directory. The export contains data about your system, its configurations, and run-time information.


You can send this export data to another MediationZone® system, where you can use the System
Importer to import it and embed its contents locally.

Example 50.

A MediationZone® system can import a tested export ZIP file of configurations from a test
system and use it safely in production.

In System Exporter you can select data from the following folder types:

• Configuration: workflow configurations, agent profiles, workflow groups, Ultra formats, or alarm
detectors.

• Run-Time: Data that is generated by the system during workflow run-time.

• System: Other customized parts of MediationZone® such as: ECS, event category, folder (structure),
pico host, Ultra, user, or workflow alarm value.

Before using System Exporter, consider the following:

• No historical configuration of any application is included in the exported data.

• Avoid exporting excessive amounts of data. For information about data clean-up see
Section 4.1.1.4, “System Task Workflows”.

• When exporting Event Notifications, these will be disabled on import by default, see
Section 7.10.1.1, “To Import Data:” for further information.

7.9.1. Exporting
To open the System Exporter, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select System Exporter from the menu.


Figure 208. The System Exporter View

The Edit Menu


Abort On Error Select this option to cancel the creation of an export file if an error occurs
during the process.

By default, an export file is created regardless of errors. If errors occur, the
export file might be missing components.
Select Dependencies Select this option to enable an automatic selection of dependent entries.

When you select an entry from the Available Entries table, all the dependent
entries are automatically selected as well.

Encryption The export file is a ZIP file that contains a collection of XML files. Select the
Encryption option to make these files password encrypted.

UDR data, such as archived data, will be encrypted as well.


Directory Output Select this option to prevent your selections from being packed into a ZIP file.
Your selections, XML files, and their tree view layout that you see under
Available Entries will be exported to the Output Target that you specify instead.
Exclude Runtime Data Select this option to exclude runtime data such as ECS or archive data. You
would typically want to exclude the runtime data when you export a large
amount of data and are only interested in the configurations.
The View Menu


View Log Select this option to open a log of the export file production.

Figure 209. Export Log

Buttons and Fields


Output Target Click on the Browse button to select the path and enter the name of the file
where the export data is saved.
Available Entries Contains a tree layout view of the data you can select to export.
Export Click on this button to copy the selections to either an export ZIP file or directly
to the Output Target address. The Export button will change into Abort, which
enables you to cancel the export process.

7.9.1.1. To Export Data:

Note! Since no runtime configuration change is included in the exported data and only the initial
value is exported, you need to make a note of information such as file sequence numbers in
Collector agents.

1. In the System Exporter, select options according to your preferences in the Edit menu.

2. Click on the Browse button to select the directory path where you want to save your selections,
either ZIP-packed or not.

Hint!

By adding the following property:

<property name="mz.gui.systemexport.default.dir" value=""/>

in the $MZ_HOME/etc/desktop.xml file, you can configure what default directory you
want to have when clicking on the Browse button.

The value must be the full path to an existing directory, e g /home/mz.

The Exporter will also remember the last directory to which an export was made, and will
open the file browser in this directory the next time you click on the Browse button. This
directory will be kept in memory until the Desktop is closed.
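
For example, assuming /home/mz is an existing directory on the Desktop host (the path is only an illustration), the property would be entered as:

<property name="mz.gui.systemexport.default.dir" value="/home/mz"/>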


3. In the Available Entries field, expand the folders and select the check boxes for the entries you
want to export in the Include column.

4. Click on the Save as... button if you want to save data about your export.

Note! After you have selected the entries that you want included in the export ZIP file, you
can save your selection combination before you click Export. This is particularly useful if
you export a certain set of selections regularly.

The saved selection combination is a *.criteria file that contains data only about your
selections. It is not an export ZIP file. The *.criteria file is stored on your local disk
and not in the MediationZone® system.

5. Click on the Export button to start the export process.

Either an export ZIP file will be created at the Output Target, or the selected structure will be exported
to the specified directory.

Note! In the export material you will also find three directories: one that includes the Ultra
code that your export involves, one that includes profile-relevant APL code, and another one
that contains workflow-related data. You can use the files that are included in these directories
to compare the export material with the data on the system to which you export.

7.9.2. Export File Structure


In MediationZone® , when you save a profile or a workflow configuration, you create a database entry.
When you then export tree-structured data of your profiles and workflows, this database entry is
included in the exported material as a file.

If the profile or workflow data is password encrypted, it is exported as it is. Otherwise, a directory
named after that export data file is created. In this directory, the contents of the export data file are
divided into files, as follows:

• The Ultra directory contains the files:

• Internal: Ultra profile related meta information

• Ultra_Format: Your Ultra code embedded in XML code

• The APL directory contains the files:

• Internal: APL profile related meta information

• APL_Source_Code: Your APL code embedded in XML code

• The workflow directory contains the files:

• Internal: Workflow related meta information

• Template: Workflow data, such as Agent configurations

• Workflow_Table: Data related to workflow table workflows


The tree structure of the exported material is identical to the structure that is displayed on the
System Exporter view. See Figure 208, “The System Exporter View”.

7.10. System Importer


System Importer enables you to import data to your system, either from a ZIP file, or from a specific
directory. The import contains data about your system, its configurations, and run-time information.

System Importer imports data that has been exported by the System Exporter. Every time you import
data, System Importer will save a backup file that contains all the imported data. This file is stored
on the Platform computer, under $MZ_HOME/backup/yyyy_MM, by the name
import_<date>_<filename>.zip.

The file exported by the System Exporter can contain data from the following folder types:

• Configuration: Workflow configurations, agent profiles, workflow groups, Ultra formats, or alarm
detectors.

• Run-Time: Data that is generated by the system during workflow run-time.

• System: Other customized parts of MediationZone® such as: ECS, Event Category, Folder (structure),
pico host, Ultra, user, or workflow alarm value.

Before using System Importer consider the following:

• No historical configuration of any application is included in the exported data.

• Avoid importing excessive amounts of data. For information about data clean-up see
Section 4.1.1.4, “System Task Workflows”.

• When importing Event Notifications, these will be disabled by default, see Section 7.10.1.1,
“To Import Data:” for further information.

7.10.1. Importing
To open the System Importer, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select System Importer from the menu.


Figure 210. The System Importer View

The Edit Menu


Abort On Error Select this option to abort the import process if an error occurs. If an error
occurs and you do not select this option, the import will be completed, but
the imported data might contain erroneous components.

Invalid Ultra and APL definitions are considered erroneous, and result
in aborting the import.

Select Dependencies Select this option to have dependencies follow along with the entries that
you actively select.
Preserve Permissions Select this option to preserve user permissions in the current system when
importing a configuration. Clear this option if it is okay to overwrite user
permissions in the current system when importing a configuration.
Directory Input Select this option to enable the import of unpacked data that has been exported
to a directory, see Section 7.9.1, “Exporting” for further information.
Clear this option to import a ZIP file.
Hold Execution Select this option to prevent scheduled workflow groups from being executed
while importing configurations.
Restart For information, see systemimport in the MediationZone® Command Line
Tool user's manual.
Stop and Restart For information, see systemimport in the MediationZone® Command Line
Tool user's manual.
Stop Immediately and For information, see systemimport in the MediationZone® Command Line
Restart Tool user's manual.
Wait for Completion For information, see systemimport in the MediationZone® Command Line
and Restart Tool user's manual.
The View Menu


View Log Select this option to open a log of the import process.

Figure 211. Import Log

Buttons and Fields


Input Source Click on the Browse button to select the path and enter the name of the file
where the data that you want to import has been saved.
Available Entries Contains a tree layout view of the data you can select to import.
Import Click on this button to import the selections from the Input Source address
to your system. The Import button will change into Abort, which enables
you to cancel the Import process.

Note! If the configuration directory structure is not identical to that of
the exported material, the import will fail.

7.10.1.1. To Import Data:


1. In the System Importer, select options according to your preferences in the Edit menu.

2. Click on the Browse button to select the directory where the exported data is located.

Hint!

By adding the following property:

<property name="mz.gui.systemimport.default.dir" value=""/>

in the $MZ_HOME/etc/desktop.xml file, you can configure what default directory you
want to have when clicking on the Browse button.

The value must be the full path to an existing directory, e g /home/mz.

The Importer will also remember the last directory from which an import was made, and
will open the file browser in this directory the next time you click on the Browse button.
This directory will be kept in memory until the Desktop is closed.
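
For example, using the same illustrative /home/mz path as for the System Exporter, the property would be entered as:

<property name="mz.gui.systemimport.default.dir" value="/home/mz"/>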

3. In the Available Entries field, expand the folders and select the check boxes for the entries you
want to import in the Include column.


4. Click on the Import button to start the import process.

5. Update the dynamic configuration data in the collectors with the file sequence numbers that you
noted down before performing the export, see Section 7.9.1.1, “To Export Data:” for further
information.

6. Enable all the workflows that are configured with Scheduling.

• Prior to importing Inter Workflow and Aggregation profiles, empty the Workflow data stream.
Otherwise, these agent profiles will be overwritten by the profiles that are included in the
imported bundle, and might not recognize or reprocess data.

• Imported workflow groups are disabled by default. You need to activate all the members,
their respective sub-members, and the workflow group itself.

• When you import a User it is disabled by default. A User with Administrator permissions
must enable the user and revise which Access groups the user should be assigned to.

• Imported Alarms are disabled by default. You enable an Alarm from the Alarm Detection.

• Imported Event Notifications are disabled by default. You enable an Event notification from
the Event Notification Configuration.

7.11. System Log


By default, events and errors encountered in the MediationZone® system are saved in the System Log.
The System Log handles duplicate events within a time frame. Therefore every event and error has a
first and last occurred date, as well as information of how many times it was repeated. The System
Log allows the user to browse and purge this log.

There are three types of categories:

• System related log entries, generated by the core platform servers.

• Workflow related log entries, generated by the agents.

• User related log entries, originating from user actions.

If, for instance, a workflow aborts, the reason for the abort may be tracked through this utility.

To open the System Log, click the Tools button in the upper left part of the MediationZone® Desktop
window, and then select System Log from the menu.


Figure 212. The System Log View

Initially, the window is empty and must be populated with data using the Search System Log dialog.
For further information, see Section 7.11.1, “Searching the System Log”.

Edit menu Select All Selects all entries in the group currently displayed. A group is selected from the
Show Entries list.
Edit menu Search... Displays the Search System Log dialog where search criteria may be defined to
limit the entries in the list. For further information, see Section 7.11.1, “Searching
the System Log”.

View menu Show Trace... Displays the Stack Trace Viewer window. This information must always be included
when contacting DigitalRoute® Support in cases involving error messages.
Show Entries Entries matching the search criteria are displayed in groups of 500. Show Entries
contains a list of all groups of 500, from which one is selected. Note that the full
content of the log messages for a group is fetched from the database only once the
group is selected, in order to have as little impact as possible on the overall performance of the
system.
Severity The severity of the message, which can be any of the following:

• I (information) - An informative message which is logged, for instance, when
a user logs in or a workflow is activated.

• W (warning) - A warning message is also informative, but is considered
slightly more serious than an information message. A warning message is
logged, for instance, when a workflow sends data to ECS.

• E (Error) - An error is logged when any part of the system fails, for instance,
when a workflow aborts. Double-clicking an error message will display the
Stack Trace Viewer. This information must always be included when contacting
DigitalRoute® Support, in cases involving error messages.


• D (Disaster) - Is normally not used, other than possibly for user-defined agents.

Last Occurred The date when this message last occurred.


Repeated Shows how many times this message has occurred.
Area Indicates which part of the system the message originates from; user, system or
workflow.
Workflow/Agent The name of the workflow/agent from which the message originates.
Message The message. Note that selecting an entry from the list will display its full contents.
First Occurred The date when this message first occurred.
Message Area If an entry is selected from the list, further details about it are displayed in this
area.

7.11.1. Searching the System Log


In order to more easily track down specific entries, the Search System Log dialog offers the possibility
to constrain the entries that are displayed in the list.

The dialog is displayed by selecting Search... from the Edit menu.


Figure 213. Search System Log

For an entry to be displayed in the list, it has to pass all of the following filters.

Log Area Which part of the system reported the entry. At least one must be enabled.
Severity Type Type of severity. At least one type must be enabled.
Period Specifies between which dates entries will be viewed. A few predefined options are
available. If none is selected, all are considered.

From If User Defined is selected in the Period list, all entries reported after
the selected date will match. If not, all entries before the To date will
match.
To If User Defined is selected in the Period list, all entries reported before
the selected date will match. If not, all entries after the From date will
match.

If neither From nor To is selected, all entries will match.

Workflow Group Check to include log messages of the workflow group that you select from the
drop-down list.
Workflow Contains options to filter out specific workflows and/or agent names. If disabled,
all workflows/agents will match.
Agent Check to include log messages of the agent that you select from the drop-down
list.

Note! System log presents log messages according to your Log Area selection: User, System,
and/or Workflow. To have System Log present agent-related messages you need to configure
an agent event from the Event Notification Configuration. For further information see: Add
Event in Section 5.3.2, “The Event Setup Tab”.


Username If enabled, all activities performed by the selected user will match. If disabled, all
user activities will match.

This only applies to events invoked by users, such as inserting, updating
and deleting data in the system. Thus, it does not apply to workflow ownership
and can therefore not be utilized to filter out a specific user's workflows.

Log Message Log entries may be scanned for occurrences of specific messages. Using all lower
case letters in the text field will result in a case insensitive search. If upper case
letters are used anywhere in the text field the search will be case sensitive. For
example, searching for "aborted" also matches "Aborted" and "ABORTED", while
searching for "Aborted" matches only "Aborted".

7.11.2. Printing the System Log


Detailed information, or a brief description, of one or many System Log entries may be printed from
the Print System Log dialog. Select the entries of interest before selecting Print. Selections are made
in one of the following ways:

• Single select - An individual row is selected by clicking on it.

• Browse select - A continuous range of rows is selected by clicking the first row, and then, while holding
down the <Shift> key, clicking the last row.

• Extended select - Individual rows are selected by clicking them while the <Ctrl> key is held down.

Selecting Print... from the File menu displays the dialog.

Figure 214. Print System Log

Headers Only Will print only a short summary of each selected entry. The printed information
is the same as displayed on each row in the browser.
Full Details Will print detailed information about each selected entry, one page for each.
The information printed is the same as displayed in the Message Area of the
System Log Inspector.
Include Stack Trace Will include the stack trace for the log entries where available (that is, Error
type messages).

7.12. System Statistics


MediationZone® constantly collects information from the different sub-systems and hosts within the
system. Among other things, this information is used for load balancing. By using the System Statistics
window, you may view, export and import the statistical information.

There are three different types of statistics: host, pico instance and workflow.

7.12.1. Host Statistics


MediationZone® collects statistics from the different machines hosting a Platform or Execution Context,
e g the load of the CPU or the number of context switches. This is called host statistics.


MediationZone® uses a standard UNIX command to collect the information. This binary must be installed
for statistics to be collected and for load-balancing work to be performed for workflows. The following list holds
all values collected from each host. On newer operating systems, some of these may not be available
for collection due to changes in the operating system kernel.

• CPU User Time - This value shows how much time was spent in non-kernel specific code. This
value is displayed as a percentage; 100% means that all processing power is used. See CPU System
Time as well.

• CPU System Time - This value shows how much time was spent in kernel specific code, such as
scheduling of different processes or network transfer. This value is displayed as a percentage; 100%
means that all processing power is used.

• Context Switches - The number of context switches per second. A context switch occurs when one
process hands over information to another process. The more context switches, the less effective
and scalable the system will be.

• Swapped To Disk - The amount of data that was swapped out. A large value indicates that the
system does not have enough RAM to manage the memory requirements of the different processes.

• Swapped In From Disk - The amount of data that was read from swap.

• Processes Waiting For Run - Shows how many processes are waiting to run. A high number
indicates that the machine is not fast enough to manage the load.

• Processes Swapped Out - Processes that have been persisted in swap due to insufficient available
memory, or due to aggressive management of the memory layer.

• Processes In Sleep - The number of processes that are presently not doing anything.

7.12.2. Pico Instance


Every minute, MediationZone® collects memory information from the different Java processes defining
the Platform and the Execution Context. This information shows how much memory is used and how
much memory is available for the running process:

• Used Memory - Shows the amount of memory currently allocated by the running process. As Java
is a language utilizing garbage collection, this number may very well get close to the maximum
memory limit without being a problem for the running process. However, if the amount of used
memory is close to the maximum limit for a long time, the process needs more memory. This value
is displayed in bytes. See the -Xmx and -Xms properties defined in the XML file defining the process,
and the sketch after this list.

• Maximum Memory - Shows the amount of memory that the process can use. This value is displayed
in bytes.

• Process CPU Time - Shows the percentage of CPU time that has been used.

• Open File Descriptors - This is a Unix measurement that enables you to create a statistical diagram
over the number of open files during the last minute, hour, or day.

• Garbage Collection Count - Shows the number of times the garbage collector has run since the
last time statistics was collected.

• Garbage Collection Time - Shows the amount of time the garbage collector has run since the last
time statistics were collected. This value is displayed in milliseconds.

• Thread Count - Shows the number of allocated threads.
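
As a minimal sketch of how the memory limits referenced in the Used Memory description are typically defined, -Xms and -Xmx are passed as jdkarg elements in the XML file defining the process, using the same element form as in Example 51 in Section 8.1. The values below are illustrative assumptions only and must be adapted to the host:

<jdkarg value="-Xms256M"/>
<jdkarg value="-Xmx1024M"/>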


7.12.3. Workflow Statistics


MediationZone® collects statistical data that is sampled every 5 seconds as long as a workflow is being
executed. This information includes:

• Throughput - Displays workflow throughput. As long as a workflow is being executed,
MediationZone® continuously samples the amount of processed UDRs, or raw data, per second.

• Queue Throughput - Displays queue throughput per second for real-time queues. Statistics for
real-time queues are only available when routing UDRs, not raw data.

Note! To enable its convenient delegation to external systems, or to generate an alarm if the
throughput falls too low, throughput is also defined as a MIM value for the workflow. For
further information, see Throughput Calculation.

• Simultaneous - Displays the number of simultaneously running workflows.

• Queue Size - The size of the queue space that is being used at the time of the sample for each
individual queue.

MediationZone® uses Java Management Extensions (JMX) to monitor MIM tree attributes in running
workflows. For more information, refer to Section 8.3, “Workflow Monitoring”.

7.12.4. Viewing the System Statistics


To open the System Statistics, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select System Statistics from the menu.

To display statistics in the System Statistics window, you have to either use the Search function, see
Section 7.12.4.1, “Searching System Statistics”, or import statistics, see Section 7.12.6, “Importing
Statistics”.

7.12.4.1. Searching System Statistics


To perform a search, either click on the Search button, or click on the Edit menu and select the
Search... option to open the Search System Statistics dialog.

In the Search System Statistics dialog, search criteria may be defined in order to single out the
statistics of interest.


Figure 215. Search System Statistics Dialog

View Mode Specifies the type of statistics you want to view; Host, Pico Instance or Workflow.
Resolution Specifies the time resolution to be used.

There are three different time resolutions on which statistics are collected.

Minute This is the most precise value but requires the most from the server when
locating the statistics. It is saved every minute.
Hour These values are calculated every hour and are a sum of the minute values
for that hour.
Day Day values are calculated by the corresponding statistics task and are a sum of
the minute values for that day.

Criteria The Criteria settings are used for selecting which search criteria you want the displayed
statistics to meet. The following criteria are available:

Host - If this option is selected, the statistics originating from the host selected in the
drop-down list will be displayed.

Pico Instance - If this option is selected, the statistics originating from the pico instance
selected in the drop-down list will be displayed.

Workflow - If this option is selected, the statistics originating from the workflow selected
in the drop-down list will be displayed.

Period - If this option is selected, the statistics from the chosen time interval will be
displayed. Either you can select one of the predefined time intervals; Today, Yesterday,
This Week, Previous Week, Last 7 Days, This Month or Previous Month, or you can
select the option User Defined and enter the start and end date and time of your choice
in the From: and To: fields.

Note! If several criteria are enabled, an absolute match will be displayed. For instance,
if Host and Workflow are specified as well as Period, only the time for
which there are both workflow measures and host measures is displayed.


7.12.4.2. Options in the System Statistics window


When you have performed your first Search, see Section 7.12.4.1, “Searching System Statistics”, or
imported statistics, see Section 7.12.6, “Importing Statistics”, the System Statistics window will display
the statistical information.

Figure 216. The System Statistics Window

For each type of statistics you have selected to view, you will see a Value drop-down list displaying
the statistical value selected for the statistical type, and one drop-down list displaying the selected
criteria. If you select another statistical value or another criterion in one of these drop-down lists, the
statistical data in the System Statistics window will be updated instantly.

For each statistics type you can see:

Host View The statistics for each host will be displayed in a separate color. If there are
several matching hosts, all may be displayed at the same time.
Pico Instance View The statistics for each pico instance will be displayed in a separate color. If
there are several matching pico instances, all may be displayed at the same
time.
Workflow View The statistics for each workflow will be displayed in a separate color, and if
you have selected to view queue statistics, each queue will have its own color.
If there are several matching workflows/queues, all may be displayed at the
same time.

The menus in the menu list contain options for printing, exporting, importing, searching and refreshing
the statistics. Searching, printing and refreshing can also be performed by using the buttons at the top of
the window. To the right of the buttons you can see the current date. For further information about the


Export... and Import... options, see Section 7.12.5, “Exporting Statistics” and Section 7.12.6, “Importing
Statistics”.

At the bottom of the window, there are a scroll bar and two buttons for zooming in and out. With these,
you can focus on a particular time window within the search result. The scroll bar enables you to scroll
back and forth in time to see the value changes.

In the bottom right corner of the window, you have a drop-down list called Value. This list contains
three different types of values:

Minimum Will display the lowest value that was sampled.


Average Will display the average value that was sampled.
Maximum Will display the highest value that was sampled.

7.12.5. Exporting Statistics


Exporting statistics may be useful for several purposes, for example if you want to share the statistical
information with someone who does not have access to your MediationZone® system.

To export the statistics:

1. In the System Statistics window, click on the File menu and select the Export... option.

The Save dialog will open.

2. Browse to the directory where you want to save the file, enter a file name and click on the Save
button.

The statistical information for the selected time period will be saved in *.zip format.

Hint! The export functionality can also be used for saving statistics on a regular basis, e g every
month or every year, to use for comparison with current statistics.

7.12.6. Importing Statistics


The Import functionality is used for importing statistics that have previously been exported from a
MediationZone® system. When using the Import functionality you do not have to perform a search in order
to display the statistics.

Note! An import of statistics will not affect the data in the database; it will just display a snapshot
of the statistics at the time when it was exported.

To import the statistics:

1. In the System Statistics window, click on the File menu and select the Import... option.

The Open dialog will open.

2. Browse to the directory where the *.zip file you want to import is located, select the file and click
on the Open button.

The statistical information will now be displayed in the System Statistics window. The same search
criteria that were set in the Search System Statistics dialog when the statistics were exported will
be displayed.


The date information at the top of the window will now display the time interval for the imported
statistics, and the text "Imported Statistics" will also appear in red beside the date information.

7.13. UDR File Editor


For information about the UDR File Editor, see the MediationZone® Ultra user's guide.

7.14. Ultra Format Converter


For information about the Ultra Format Converter, see the MediationZone® Ultra user's guide.


8. Monitoring
MediationZone® uses Java Management Extensions (JMX Beans) to enable external monitoring. A
connector is used to connect a JMX agent to a JMX enabled management application.

The Java Monitoring and Management Console (jconsole) is a JMX client that allows you to monitor
a local or remote JVM process. Currently you can monitor:

• Events

• Workflows

• RCP Latency

• Aggregation

• Couchbase Monitoring

For further information about jconsole, see:

http://docs.oracle.com/javase/8/docs/technotes/guides/management/jconsole.html

8.1. Starting the Jconsole Client


Jconsole is included in your JDK installation. To start it:

1. Use the command jconsole in any directory.

The JConsole: New Connection dialog opens.

Figure 217. The JConsole: New Connection dialog

2. If you want to monitor a local JVM process, select the Local Process option. Select the process you want
to view and then click on the Connect button.


Note! Which process you should select depends on what you want to monitor. If you want
to monitor the Event Server, select the codeserver process. For other monitoring, e g Event
Sender or workflow, select the picostart process for the Execution Context that the Event
Sender or workflow is running on.

3. If you want to be able to monitor a JVM process remotely, you have to add a few JDK properties
in the platform.xml and executioncontext.xml files.

Example 51. Example of how to set the JDK properties

If you enter the following properties in the platform.xml or executioncontext.xml
files (or both):

<jdkarg value="-Dcom.sun.management.jmxremote.port=9999"/>
<jdkarg value="-Dcom.sun.management.jmxremote.authenticate=false"/>
<jdkarg value="-Dcom.sun.management.jmxremote.ssl=false"/>

you will be able to connect to port 9999 without having to enter any user name or password,
and without using SSL.

Note! Use different ports if you set the remote port in both platform.xml and
executioncontext.xml.

For further information about which ports you are recommended to use, how to set up
user names and passwords, how to set up SSL, and remote monitoring and management in
general, see the JDK product documentation regarding JConsole Management.

In the New Connection dialog, you can then select the option Remote Process:, enter the hostname
and port along with any username and password that may apply, and click on the Connect button.

The Java Monitoring & Management Console will open.
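
As an alternative to filling in the New Connection dialog, jconsole also accepts the connection as a command line argument. Assuming the remote port 9999 from Example 51 (substitute your own hostname and port), a sketch:

$ jconsole <hostname>:9999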

8.2. Event Monitoring


Event Monitoring includes three different sections:

• EventServerQueue - which shows information about all the events in the system

• EventListenerQueue - which shows information about the different listeners in the system

• ECEventSenderQueue - which shows information about the events that the Execution Context
will try to send to the Platform. If the connection with the Platform is broken, the EC/ECSA will
cache the events and then try to send them again once the connection is back up.

8.2.1. Monitoring the EventServerQueue


If you want to monitor the EventServerQueue:

1. Select the codeserver process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.

The Java Monitoring & Management Console will open and display the Overview tab.

2. Click on the MBeans tab to display the different beans.


3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.event, then on
the plus sign for EventServerQueue.

4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.

Figure 218. JConsole displaying the attributes for the EventServerQueue

For the EventServerQueue you can see the following information:

EventLoad This attribute value shows the current load of the EventServerQueue, i e the amount
of the queue's maximum size that is occupied with events, where 0.75 equals 75
%, 0.5 equals 50 %, etc.
NoOfListeners This attribute value shows the total number of listeners within the system. If you
want to view information about a specific listener, see Section 8.2.2, “Monitoring
the EventListenerQueue”.
QueueSize This attribute value shows the total number of events in the queue.
TotalEvents This attribute value shows the total number of events logged since the Platform was
started.

8.2.2. Monitoring the EventListenerQueue


The EventListenerQueue contains a list of all the listeners to which the Event Server is about to dispatch
events.

To view the different listeners, follow the same procedure as for the EventServerQueue, see
Section 8.2.1, “Monitoring the EventServerQueue”, but click on the plus sign for EventListenerQueue
instead.


Expand the tree for the listener that you want to view attributes for by clicking on the plus signs for
the EventListener you want to view and for Listener. Click on Attributes to display the different
attribute values in the right section of the JConsole window.

Figure 219. JConsole displaying the attributes for the EventListenerQueue

For each listener in the EventListenerQueue you can see the following information:

EventLoad This attribute value shows the current load of the EventListenerQueue, i e the amount
of the queue's maximum size that is occupied with events, where 0.75 equals 75 %,
0.5 equals 50 %, etc.
QueueSize This attribute value shows the total number of events in the listener's queue.
TotalEvents This attribute value shows the total number of events logged for the listener since the
Platform was started.

8.2.3. Monitoring the ECEventSenderQueue


If you want to monitor the ECEventSenderQueue:

1. Select the picostart process for the EC you want to monitor the EventSender for when starting the
JMX client, see Section 8.1, “Starting the Jconsole Client” for further information.

The Java Monitoring & Management Console will open and display the Overview tab.

2. Click on the MBeans tab to display the different beans.

3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.event, then on
the plus sign for ECEventSenderQueue.

4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.


Figure 220. JConsole displaying the attributes for the ECEventSenderQueue

For the ECEventSenderQueue you can see the following information:

ConnectedToPlatform This attribute value shows whether the Execution Context (ECSA) is connected
to the Platform (true) or not (false).
ConnectionDownTime This attribute value shows for how long the connection with the Platform
has been down. This value is displayed in seconds.
EventLoad This attribute value shows the current event load of the EventSender queue,
i e the amount of the queue's maximum size that is occupied with events,
where 0.75 equals 75 %, 0.5 equals 50 %, etc.
PersistentQueueSize This attribute value shows the number of events that the Execution Context
has not been able to send to the Platform due to broken connection.
QueueSize This attribute value shows the total number of events in the queue.
TotalEvents This attribute value shows the total number of events logged since the Ex-
ecution Context was started.

8.3. Workflow Monitoring


All MediationZone® workflows and MIM trees are published as JMX beans so that attribute values
for all currently running workflows, as well as their agents, can be viewed in real-time.

Note! Currently, the MIM monitoring is limited to global MIMs (real-time workflows).

If you want to monitor a workflow:


1. Select the picostart process for the Execution Context on which the workflow is running when
starting the JMX client, see Section 8.1, “Starting the Jconsole Client” for further information.

The Java Monitoring & Management Console will open and display the Overview tab.

2. Click on the MBeans tab to display the different beans.

3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.wf, then on the
plus sign for Workflow and then on the plus sign for the workflow you want to monitor.

4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.

Figure 221. Workflow Monitoring using Jconsole

Beneath the workflow in the tree to the left, the MIMTree and Attributes can be expanded to display
more details of the different MIM tree attributes as shown in Figure 221, “Workflow Monitoring using
Jconsole”.

WorkflowManager shows information on the number of running workflows.

When the Latency Statistics agent is used in the workflow, additional information becomes available
in the LatencyInfo structure. For information about the Latency Statistics agent, see Section 10.18,
“Latency Statistics”.


Figure 222. Latency Monitoring using Jconsole

8.4. RCP Latency Monitoring


The DigitalRoute® Remote Communication Protocol (RCP) is the protocol used for the
MediationZone® internal communication. RCP latency monitoring can be performed on all of the available
Pico Instances:

• Platform

• Execution Context

• Execution Context Stand-Alone

• Desktop

• Command Line

Each time a new instance is started, for example, when starting an mzsh shell from the Command
Line, it will be added to the list of monitored Pico Instances.

The latency is the time it takes for a ping request to be sent to another party, for example, from the
Platform to an Execution Context, and for the corresponding ping response to be received.

Note! When starting the Platform, the latency values might become high. To get more realistic
values, do a reset using resetAllValues, as described in Section 8.4.2, “Operations”.

To monitor the latency between the Platform and the Pico Instance communicating with it, do the
following:


1. Select the codeserver process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.

The Java Monitoring & Management Console will open and display the Overview tab.

2. Click on the MBeans tab to display the different beans.

3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.rcp, then on
the plus sign for Ping and then on the plus sign for the connection you want to monitor.

8.4.1. Attributes
Click on Attributes in the tree to display the different attribute values for a certain Pico Instance in
the right section of the JConsole window.

Figure 223. JConsole displaying the attributes for RCP

Counter This attribute value shows the total number of ping requests sent from a Pico Instance.
This value shows "0" when the Platform has been started, or after resetting the RCP
latency values using resetAllValues, as described in Section 8.4.2, “Operations”.
Latency Shows the current latency.
MinLatency This attribute value shows the lowest latency since the Platform was started, or after
resetting the RCP latency values using resetAllValues, as described in Section 8.4.2,
“Operations”.
MaxLatency This attribute value shows the highest latency since the Platform was started, or after
resetting the RCP latency values using resetAllValues, as described in Section 8.4.2,
“Operations”.


8.4.2. Operations
Click on Operations in the tree to display the operation alternatives for a certain Pico Instance in the
right section of the JConsole window.

Figure 224. JConsole displaying the operations for RCP

resetAllValues Click on the resetAllValues operation in the tree to the left and then click this button
to reset all RCP latency values.

8.5. Aggregation Monitoring


There are two MBeans available to gather counter statistics about Aggregation, one for file storage
and one for Couchbase storage.

8.5.1. File Storage


When file storage is selected in an Aggregation profile, you can use JMX beans to view attributes
matching similar MIM parameters published by the Aggregation agent. There is also an operation that
you can use to reset all counters.

The MBean is registered under the com.digitalroute.profile domain, with the key type set to
"Aggregation" and the key name set to the Aggregation profile's name.

For information about MIM values published by the Aggregation agent, see Section 11.1.3.9, “Meta
Information Model”.

To monitor the Aggregation MIM parameter values, do the following:


1. Select the picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.

The Java Monitoring & Management Console will open and display the Overview tab.

2. Click on the MBeans tab to display the different beans.

3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.profile, then
on the plus sign for Aggregation and then on the plus sign for the Aggregation Profile you want
to monitor.

8.5.1.1. Attributes
Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.

Figure 225. JConsole displaying the attributes for Aggregation with file storage

AggregationTime This attribute value shows the time (in milliseconds) that has been spent on
aggregation on the last batch.
CacheHits This attribute value shows the number of cache hits counted by the Aggregation
profile each time session information is read from the cache.

CacheHits is reset each time the Execution Context is started, or after using
resetCounters, as described in Section 8.5.1.2, “Operations”.
CacheMisses This attribute value shows the number of cache misses counted by the
Aggregation Profile each time session information cannot be read from the
cache and is instead read from disk. Note that if a non-existing session is
requested, this will not be counted as a cache miss.


CacheMisses is reset each time the Execution Context is started, or after
using resetCounters, as described in Section 8.5.1.2, “Operations”.
CommitTime This attribute value shows the time (in milliseconds) that has been spent on
the last commit.
CreatedSessions This attribute value shows the total number of sessions created by the
Aggregation profile since the Execution Context was started, or after using
resetCounters, as described in Section 8.5.1.2, “Operations”.
DefragmentationTime This attribute value shows the time (in milliseconds) that has been spent on
the last defragmentation.
OnlineSessionCount This attribute value shows the number of Aggregation sessions cached in
memory.
SessionCount This attribute value shows the number of Aggregation sessions in storage
on the file system.

8.5.1.2. Operations
Click on Operations in the tree to display the operation alternatives for the Aggregation Profile in the
right section of the JConsole window.

Figure 226. JConsole displaying the operations for Aggregation with file storage

resetCounters Click on the resetCounters operation in the tree to the left and then click this button
to reset the values for CacheHits, CacheMisses and CreatedSessions.

8.5.2. Couchbase Storage


When Couchbase storage is selected in an Aggregation profile, you can use JMX beans to view attributes
matching similar MIM parameters published by the Aggregation agent.


The MBean is registered under the com.digitalroute.workflow domain, with the key type set to
"Workflow" and the key workflow set to the name of the Aggregation workflow.

For information about MIM values published by the Aggregation agent, see Section 11.1.3.9, “Meta
Information Model”.

To monitor the Aggregation MIM parameter values, do the following:

1. Select the picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.

The Java Monitoring & Management Console will open and display the Overview tab.

2. Click on the MBeans tab to display the different beans.

3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.workflow, then
on the plus sign for Workflow and then on the plus sign for the workflow you want to monitor.

8.5.2.1. Attributes
Click on MIM Tree and then Attributes in the tree to display the different attribute values in the right
section of the JConsole window.

Figure 227. JConsole displaying the attributes for Aggregation with Couchbase storage

<agent name>.Agent Name This attribute value shows the name of the Aggregation agent.
<agent name>.Created Session Count This attribute value shows the number of created aggregation sessions.

The value of <agent name>.Created Session Count is reset when the workflow is started.


<agent name>.Inbound UDRs This attribute value shows the number of UDRs routed to the agent.

The value of <agent name>.Inbound UDRs is reset when the workflow is started.
<agent name>.Mirror Attempt Count This attribute value shows the total number of attempts to retrieve a stored mirror session.

The value of <agent name>.Mirror Attempt Count is reset when the workflow is started.
<agent name>.Mirror Error Count This attribute value shows the number of failed attempts to retrieve a stored mirror session, where the failure was caused by one or more errors.

The value of <agent name>.Mirror Error Count is reset when the workflow is started.
<agent name>.Mirror Found Count This attribute value shows the number of successful attempts to retrieve a stored mirror session.

The value of <agent name>.Mirror Found Count is reset when the workflow is started.
<agent name>.Mirror Latency This attribute value shows comma-separated counters that each contain the number of mirror session retrieval attempts for a specific latency interval. Attempts that failed due to errors are not counted.

The attribute contains 20 counters for a series of 100 ms intervals. The first interval is from 0 to 99 ms and the last interval is from 1900 ms and up.

Example 52.

The value 1000,100,0,0,0,0,0,0,0,0,0,0,0,0,1 should be interpreted as follows:

• There are 1000 mirror session retrieval attempts with a latency of 99 ms or less.

• There are 100 mirror session retrieval attempts with a latency of 100 ms to 199 ms.

• There is one mirror session retrieval attempt with a latency of 1999 ms or more.

The value of <agent name>.Mirror Latency is reset when the workflow is started.
<agent name>.Mirror Not Found Count This attribute value shows the number of attempts to retrieve a stored mirror session that did not exist.

The value of <agent name>.Mirror Not Found Count is reset when the workflow is started.
<agent name>.Outbound UDRs This attribute value shows the number of UDRs routed from the agent.

The value of <agent name>.Outbound UDRs is reset when the workflow is started.
<agent name>.Session Remove Count This attribute value shows the number of sessions removed.

287
Desktop 7.1

The value of <agent name>.Session Remove Count is reset when the


workflow is started.
<agent name>.Session Multiple timeout threads may read the same session data from Couchbase
Timeout Attempt but only one of them will perform an update. If a thread reads a session
that has already been updated, it will be counted as an attempt. This attrib-
ute shows the number of attempts.

The value of <agent name>.Session Timeout Attempt is reset when the


workflow is started.
<agent name>.Session This attribute value shows the number sessions that has timed out.
Timeout Count
The value of <agent name>.Session Timeout Count is reset when the
workflow is started.
<agent name>.Session This attribute value shows comma separated counters that each contains
Timeout Latency the number of sessions for a specific timeout latency interval i e the dif-
ference between the actual timeout time and the expected timeout time.

The attribute contains 15 counters for a series of one-minute intervals.


The first interval is from 0 to 1 minutes and the last interval is from 14
minutes and up.

Example 53.

The value 1000,100,0,0,0,0,0,0,0,0,0,0,0,0,1 should


be interpreted as follows:

• There are 1000 sessions with a timeout latency that is less than
one minute.

• There are 100 sessions with a timeout latency of one to two


minutes.

• There is one session with a timeout latency of 14 minutes or


more.

The value of <agent name>.Session Timeout Latency is reset when the


workflow is started.

8.6. Couchbase Monitoring


When Monitoring is enabled in one or more Couchbase profiles, you can use JMX beans to view the
status of the monitored cluster.

Couchbase Monitoring includes three different sections:

• ConfigCordinator - which shows general information about the Couchbase Cluster.

• MonitorCoordinator - which shows information about the number of monitored Couchbase Nodes.

• Monitor_<cluster id> - which shows detailed information about the monitored Couchbase cluster.


8.6.1. Monitoring the ConfigCordinator


The ConfigCordinator MBean is published at startup by all Execution Contexts on which Couchbase
Monitoring is installed.

If you want to monitor the ConfigCordinator:

1. Select a picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.

The Java Monitoring & Management Console will open and display the Overview tab.

2. Click on the MBeans tab to display the different beans.

3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.couchbase.mon-
itor, then on the plus sign for ConfigCordinator.

4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.

Figure 228. JConsole displaying the attributes for ConfigCordinator

For the ConfigCordinator you can see the following information:

Unmanaged This attribute value shows the names of the Couchbase profiles of Couchbase Clusters
that are unmanaged, i.e. that do not respond to management requests.
Monitored This attribute value shows the IP addresses and ports of the configured Couchbase
Cluster nodes, and the Couchbase cluster id to which they belong.
Coordinator This attribute value shows if the Execution Context actively performs Couchbase
Monitoring (true) or not (false).


8.6.2. Monitoring the MonitorCoordinator


The MonitorCoordinator MBean is published at startup by all Execution Contexts on which Couchbase
Monitoring is installed.

If you want to monitor the MonitorCoordinator:

1. Select a picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.

The Java Monitoring & Management Console will open and display the Overview tab.

2. Click on the MBeans tab to display the different beans.

3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.couchbase.mon-
itor, then on the plus sign for MonitorCoordinator.

4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.

Figure 229. JConsole displaying the attributes for MonitorCoordinator

For the MonitorCoordinator you can see the following information:

LocalMonitored This attribute value shows the number of Couchbase Nodes that are monitored
by the Execution Context.
AllMonitored This attribute value shows the total number of monitored Couchbase clusters.


8.6.3. Monitoring the Monitor_<cluster id>


The Monitor_<cluster id> MBean is published by a single Execution Context. This Execution Context
is associated with the elected leader in the ZooKeeper cluster. The MBean is published when Monit-
oring is checked in a Couchbase profile (Couchbase Monitoring must be installed).

If you want to monitor the Monitor_<cluster id> MBean:

1. Select a picostart process when starting the JMX client, see Section 8.1, “Starting the Jconsole
Client” for further information.

Note! It is not possible to know which Execution Context that is associated with the leader
in the ZooKeeper cluster. For this reason you will need to connect to all available picostart
processes until you find the one that contains the bean.

The Java Monitoring & Management Console will open and display the Overview tab.

2. Click on the MBeans tab to display the different beans.

3. Expand the tree in the left section by clicking on the plus sign for com.digitalroute.couchbase.mon-
itor, then on the plus sign for Monitor_<cluster id>.

4. Click on Attributes in the tree to display the different attribute values in the right section of the
JConsole window.

Figure 230. JConsole displaying the attributes for Monitor_<cluster id>

For the Monitor_<cluster id> you can see the following information:


ClusterAvailable This attribute value shows if the Couchbase cluster is currently available
(true) or not (false).
ClusterDetails This attribute value shows the IP address and id of the monitored Couchbase
cluster.
ClusterUnavailableDuration This attribute value shows for how long the cluster has been
unavailable, in seconds. When the cluster is available, this value is set to 0.
Failovers This attribute value shows the number of failovers since monitoring
started.
FailureCountThreshold This attribute value shows the maximum number of failed health checks
before a Couchbase node is automatically failed over.
Frequency This attribute value shows the frequency of health checks in milliseconds.
HealthChecks This attribute value shows the total number of cluster health checks that
have been performed since the Execution Context started.
LastHealthCheckDetails This attribute value shows the result of the last health check, including
node health and cluster membership. The IP addresses in the attribute
value may not be the same as the ones specified in the Couchbase profile.
For example, the IP address in this value can be 127.0.0.1 for a Couchbase
node running on the local host machine, even though an external IP address
is specified in the profile.
StartTime This attribute value shows the date and time when monitoring started.


9. Appendix I - Profiles
This appendix contains descriptions for the profiles that are not related to specific agents. All the agent
specific profiles are described in connection with each agent in the following appendixes.

9.1. Audit Profile


MediationZone® offers the possibility to output information to user defined database tables. This
means that several workflows may output information about the same batch to the same table, which
makes it possible to trace batches/UDRs between workflows. To increase this traceability, it is highly
recommended to add fields to the UDRs, to make it possible to identify their origin. Useful values
may be:

• Name of the switch.

• The original file name.

• Time stamp of the original file.
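
As a minimal illustration of this recommendation, the following APL sketch (written in the style of
Example 54 later in this section) sets such origin fields in an Analysis agent. The UDR field names
origSwitch and origFilename, the route name OUT and the switch name are hypothetical and must be
adapted to your own UDR format and workflow; mimGet and the TTFILES agent name are used as in
Example 54.

consume {
    // Hypothetical UDR fields - add matching fields to your UDR format first.
    input.origSwitch   = "SWITCH_01";
    input.origFilename = mimGet( "TTFILES", "Source Filename" );
    // A time stamp of the original file can be set in the same way, using the
    // corresponding MIM value published by your collection agent.
    udrRoute( input, "OUT" );
}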

The audit table column types are defined in an Audit profile configuration.

The Audit profile is loaded when you start a workflow that depends on it. Changes to the profile become
effective when you restart the workflow.

To create a new Audit profile configuration, click the New Configuration button in the upper left part
of the MediationZone® Desktop window, and then select Audit Profile from the menu.

To open an existing Audit profile configuration, double-click the Configuration in the Configuration
Navigator, or right-click a Configuration and then select Open Configuration(s)....

Figure 231. The Audit Profile Editor

Database This is the database the agent will connect and send data to.

Click the Browse... button to get a list of all the database profiles that are available.
For further information see Section 9.3, “Database Profile”.


Note! For performance reasons, Audit information is logged directly from
an Execution Context to the database. If an external Execution Context is
unable to connect to the database, a "Workflow performance warning" will
be logged in System Log. If this warning appears, the firewall might need
to be reconfigured to allow the Execution Context to communicate directly
with the database.

The Audit functionality in MediationZone® is supported for use with the following
databases:

• Oracle

• TimesTen

• Derby

• SQL Server

• SAP HANA

Refresh Select Refresh to reload the meta data for the tables residing in the selected data-
base.

Use Default Database Schema Check this to use the default database schema that was added in the
Username field of the Default Connection Setup in the Database profile configuration. For
more details on how to add a default database schema, see Section 9.3, “Database
Profile”.

Note! This is not applicable for all database types. Use Default Database
Schema is available for selection only when accessing Oracle or TimesTen
databases.

Tables within the default schema will be listed without schema prefix.

Table A list of selected audit tables. For further information about adding and editing
tables, see Section 9.1.3, “Adding and Editing a Table Mapping”.

9.1.1. Audit Profile Menus


The contents of the menus in the menu bar may change depending on which Configuration type that
has been opened in the currently displayed tab. The Audit profile uses the standard menu items that
are visible for all Configurations, and these are described in Section 3.1.1, “Configuration Menus”.

9.1.2. Audit Profile Buttons


The contents of the button panel may change depending on which Configuration type that has been
opened in the currently displayed tab. The Audit profile uses the standard buttons that are visible for
all Configurations, and these are described in Section 3.1.2, “Configuration Buttons”.

9.1.3. Adding and Editing a Table Mapping


From the Add and Edit Audit Table Attributes dialogs, the existing table columns are mapped to
MediationZone® valid types.


Figure 232. Add Audit Table Attributes

Table A list from which the audit table is selected.


Column Name The name of the columns in the selected table.
Type Clicking the cell displays a list of valid MediationZone® types. Each column must be
mapped against a type. Valid types are:

• Counter - A built-in sequence which is incremented with the value passed on with
the auditAdd APL function.

• Key - Used to differ between several audit inserts. It is possible to use several keys,
where a unique combination of keys will result in one new row in the database.

If the same key combination is used several times within a batch, the existing row will
be overwritten with new audit data. However, if a later batch uses the same key com-
bination, a new row will be created.

If using more than one key, the Key Sequence must be entered in the same order when
calling the auditAdd or auditSet APL functions. The Audit functions are further
described in the APL Reference Guide.

Note that this is not a database key and it must be kept as small as possible. A value
that is static during the whole batch must never be used as a key value.

• Value - A column holding any type of value to be set, except for Counter values. This
is used in combination with the auditSet APL function. Another use is mapping
against existing MIM values in the Workflow Properties window.

• Transaction Id - To make sure entries are transaction safe, each table must contain a
column of type NUMBER with a length of at least twelve (or with no size declared
at all). Do not enter or alter any values in this column; it is handled automatically by
the MediationZone® system. The value -1 indicates that the entry is committed and
safe (see the query sketch after this table).

Note! The Transaction Id should be indexed for best performance. The contents
will be of low cardinality and could therefore be compressed if supported.

• Unused - Used in case a column must not be populated, that is, set to null.

Key Sequence A key sequence is a defined way to assign a Key value, to identify in which order you
need to send along key values when you use the auditAdd or auditSet APL functions.

Each key in a table must have a sequence number in order to be identified when passed
on as parameters to the APL audit functions. The first key is identified as 1, the second
as 2, and so on.

The key sequence will uniquely identify all audit log entries to be inserted per batch.
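
As a hedged illustration of the Transaction Id type, the following SQL lists only committed audit
rows. The table name ADMIN.PARTIALS_AUDIT is taken from Example 54 later in this appendix, while
the column name TRANSACTION_ID is hypothetical and must match the column that you mapped to the
Transaction Id type:

-- List only committed (safe) audit rows; the value -1 marks a committed entry.
SELECT *
FROM   ADMIN.PARTIALS_AUDIT
WHERE  TRANSACTION_ID = -1;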


9.1.4. An Example
To illustrate how Audit may be used, consider a workflow with an Analysis agent that validates and
routes UDRs. Most of the UDRs will be sent on the "COMPLETE" route. The rest, the incomplete
UDRs, will be sent on the "PARTIALS" route. If a considerable amount of UDRs is routed
to the latter, the batch is canceled.

Figure 233. A Workflow Example

The output on each route is to be logged in a concealed audit table, including information on canceled
batches. An entry in the table will be made for each batch, and for each route. Hence two entries per
batch.

Figure 234. Audit Information May Be Concealed

In this example only the destination key is needed, which will uniquely identify all rows to be inserted
per batch. The name of the destination agent is therefore selected. Note that it is not possible to update an
existing row in the table, only to add new rows. This is to assure the traceability of data. In order to output
other information than MIM values (which may be mapped in the Workflow Properties window),
the workflow must contain an Analysis or Aggregation agent.

Setting up an Audit profile involves the following steps:

1. Design the tables:

• One column (of type NUMBER) must be reserved for the MediationZone® transaction handling.
This column should be indexed in order to achieve best performance. The contents will be of
low cardinality and could therefore be compressed if supported.

• Consider which column or columns contain the tag information, that is, the key. A key may consist
of one or several columns. A hedged table sketch is shown after this list.

2. Create an Audit profile. For further information, see Section 9.1.4.1.1, “Adding the Table Mapping”.


3. Map parameters in the Workflow Preferences Audit tab to the Audit profile. For further information,
see Section 9.1.4.2, “Workflow Properties - Audit tab”.

4. Design APL code to populate the tables. For further information, see Section 9.1.4.3, “Populating
Audit Tables”.
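
As a hedged sketch of step 1, the following Oracle-style DDL shows one way the audit table used in
this example could be designed. The table name ADMIN.PARTIALS_AUDIT and the column names DESTINATION,
UDRS and CANCELED are taken from the example below, while TRANSACTION_ID and all column types and
sizes are assumptions that must be adapted to your database and to the mappings you make in the
Audit profile:

-- Hedged sketch of an audit table matching the example in this section.
CREATE TABLE ADMIN.PARTIALS_AUDIT (
    DESTINATION     VARCHAR2(64),   -- mapped as Key (the destination agent name)
    UDRS            NUMBER,         -- mapped as Counter (or Value)
    CANCELED        VARCHAR2(16),   -- mapped as Value, for example from a MIM value
    TRANSACTION_ID  NUMBER(12)      -- reserved for MediationZone transaction handling
);

-- The Transaction Id column should be indexed for best performance.
CREATE INDEX PARTIALS_AUDIT_TID ON ADMIN.PARTIALS_AUDIT (TRANSACTION_ID);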

9.1.4.1. Audit Profile


In the Audit profile configuration, the column types are configured. To create a new Audit profile
Configuration, click the New Configuration button in the upper left part of the MediationZone®
Desktop window, and then select Audit Profile from the menu. Select the database in which the table(s)
reside, then select Add.

9.1.4.1.1. Adding the Table Mapping

From the Add and Edit Audit Table Attributes dialogs, the existing table columns are mapped to
MediationZone® valid types.

Figure 235. The Audit Profile

The data to insert will be put in the UDRs column. Setting it to type Counter makes it possible to
use the auditAdd function to increment the corresponding column value. If Value is used, the
auditSet function can be used in order to assign a value.

9.1.4.2. Workflow Properties - Audit tab


The Audit tab in the Workflow Properties window defines the type of data entered in the table by
the workflow. This is either MIM types or anything sent on with the APL audit functions.

Figure 236. Workflow Properties - The Audit Tab

The DESTINATION and UDRs columns in Figure 235, “The Audit Profile” are populated by using
the APL audit functions. The CANCELED column name might be mapped directly to an existing
MIM value.

9.1.4.3. Populating Audit Tables


There are two ways of populating audit tables; either by using the auditAdd function, which auto-
matically increments the value of Counter columns, or by setting fixed values to columns of type


Value with the auditSet function. Note that Counter columns are automatically set to 0 (zero)
when a batch is canceled. This is not the case for Value columns.

In the following subsections, variations of the case exemplified in Figure 234, “Audit Information
May Be Concealed”, are discussed.

Note! In terms of performance, it does not matter how many times an audit function is called.
Each call is saved in memory and a summary for each key is committed at End Batch.

9.1.4.3.1. Counter Increment

By using the auditAdd function, the user does not have to keep track of the number to increment a
counter column with. At Cancel Batch, the value is set to 0 (zero).

In the current example, each UDR is validated with respect to the contents of the causeForOutput
field. The statistics table is updated to hold information on the numbers of UDRs sent on the different
routes.

Example 54.

int noPART;

beginBatch {
    noPART = 0;
}

consume {
    if ( input.causeForOutput == "0" ) {
        udrRoute( input, "COMPLETE" );
        auditAdd( "myFolder.count_PARTIALS",
                  "ADMIN.PARTIALS_AUDIT",
                  "UDRS", 1,
                  mimGet( "TTFILES", "Source Filename" ),
                  mimGet( "COMPLETE", "Agent Name" ) );
    } else {
        noPART = noPART + 1;
        if ( noPART < 300 ) {
            udrRoute( input, "PARTIALS" );
            auditAdd( "myFolder.count_PARTIALS",
                      "ADMIN.PARTIALS_AUDIT",
                      "UDRS", 1,
                      mimGet( "TTFILES", "Source Filename" ),
                      mimGet( "PARTIALS", "Agent Name" ) );
        } else {
            cancelBatch( "Too many partials found." );
        }
    }
}

9.1.4.3.2. Fixed Values

Using the auditSet function for the same example as discussed in the previous section, means the
user has to keep track of the number of records in the APL code. Note that the Profile must be updated;
the Counter column must be redefined to Value.


Value columns are not reset when a batch is canceled. Hence there will be entries made in the table
for the UDRs column for all batches.
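
A minimal sketch of the auditSet variant is shown below. It assumes that auditSet takes the same
profile, table, column and key arguments as auditAdd in Example 54, with the passed value being
assigned instead of added; see the APL Reference Guide for the exact signature.

int noCOMPLETE;

beginBatch {
    noCOMPLETE = 0;
}

consume {
    if ( input.causeForOutput == "0" ) {
        udrRoute( input, "COMPLETE" );
        noCOMPLETE = noCOMPLETE + 1;
        // Assign the current count instead of letting a Counter column be incremented.
        auditSet( "myFolder.count_PARTIALS",
                  "ADMIN.PARTIALS_AUDIT",
                  "UDRS", noCOMPLETE,
                  mimGet( "TTFILES", "Source Filename" ),
                  mimGet( "COMPLETE", "Agent Name" ) );
    }
}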

9.2. Couchbase Profile


A Couchbase profile is used to read and write bucket data in a Couchbase database. This profile can
be accessed by workflows using Aggregation, Distributed Storage or PCC.

As a client to Couchbase, the profile operates in synchronous mode. When sending a request to
Couchbase, the profile expects a server response, indicating success or failure, before proceeding to
send the next one in queue.

When using the Couchbase profile for Aggregation, it is possible to enable asynchronous mode. In
this mode the Couchbase profile does not wait for a response from Couchbase before sending the next
request in queue. For more information about using the Couchbase profile in Aggregation, see Sec-
tion 11.1.3.13, “Performance Tuning with Couchbase Storage”.

The Couchbase profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow.

Note! Created or updated Couchbase profiles that are used for PCC do not become effective until you
restart the Execution Contexts.

To create a new Couchbase profile, click on the New Configuration button in the upper left part of
the MediationZone® Desktop window, and then select Couchbase Profile from the menu.

To open an existing Couchbase profile, double-click on the configuration in the Configuration Nav-
igator, or right-click on the configuration, and then select Open Configuration(s)....

In a Couchbase profile, there are three tabs; Connectivity, Management, and Advanced.

9.2.1. Connectivity settings


The Connectivity tab is displayed by default.

Figure 237. The Couchbase Profile - Connectivity Tab

The following settings are available in the Connectivity tab:


Bucket Name Enter the bucket that you want to access in Couchbase in this field.
Bucket Password Enter an optional password for the bucket in this field.
Connections Enter the number of connections between the nodes you want to have in
your cluster in this field.

Note! Usually 1 connection is sufficient, but for high throughput
situations (> 15,000 operations/s), increasing this number will allow
for further vertical scalability in accessing Couchbase.

Operation Timeout Enter the number of milliseconds after which database operations should
(ms) time out.
Operation Queue Max Enter the maximum time interval, in milliseconds, a client will wait to add
Block Time (ms) a new item to a queue.
Retry Interval Time In this field, enter the time interval, in milliseconds, that you want to wait
(ms) before trying to read the cluster configuration again after a failed attempt.
Max Number Of Re- Enter the maximum number of retries in this field.
tries
Cluster Nodes In this section, add IP addresses/hostnames and ports of at least one of the
nodes in the cluster. This address information is used by the Couchbase
profile to connect to the cluster at workflow start, and to retrieve the IP ad-
dresses and ports of the other nodes in the cluster.

If the first node in the list cannot be accessed, the Couchbase profile will
attempt to connect to the next one in order. This is repeated until a successful
connection can be established. Hence it is not necessary to add all the nodes,
but it is good practice to do so for a small cluster. For example, if there are
just three nodes, you should add all of them.

You should also add all nodes if Monitoring in the Management Settings
is Active. The specified nodes are used by the Couchbase Monitoring Service
to check the health of the cluster. If none of these nodes are available, the
monitoring will stop.

9.2.2. Management settings


The Management tab contains the user name, password, size, and monitoring settings.

Figure 238. The Couchbase Profile - Management Tab

The following settings are available in the Management tab:


Admin User Name If you want to create a new bucket that does not exist in your Couchbase
cluster, enter the user name that you stated when installing Couchbase in this
field.
Admin Password If you want to create a new bucket that does not exist in your Couchbase
cluster, enter the password that you stated when installing Couchbase in this
field.
Bucket Size (MB) Enter the size of the bucket you want to create, in MB in this field. Once the
bucket is created, you cannot change the size by updating this field.
Number of Replicas Enter the number of replicas you want to have in this field.
Monitoring - Active Select this check box if you want to activate the Couchbase Monitoring Ser-
vice. This service is suitable for High Availability installations, since it will
allow you to detect failing nodes earlier than the monitoring built into
Couchbase itself, and perform automatic failover of nodes.

Note! You must install and configure the Couchbase Monitoring
Service to use this functionality. For more information, see the Install-
ation Instructions.

For information about how to access the current status of a cluster, see Sec-
tion 4.1.8.5.1, “Couchbase Monitor Service”.
Frequency (ms) Enter the frequency, in milliseconds, with which you want to perform monit-
oring.
Failure Count Enter the number of failures before performing a failover of a node.

Note! If you have several Couchbase profiles that have Monitoring activated, it is important
that the monitoring configurations for Frequency and Failure Count are the same in all the profiles,
as there is no guarantee which profile these settings are read from.

If the bucket that you specify in the Couchbase profile does not exist, it is created at runtime, i.e. when
it is accessed in a workflow. This is provided that Admin User Name and Admin Password have been
stated in the Management tab. If the bucket you want to access already exists in your cluster, these
two fields do not have to be filled in.

9.2.3. Advanced Configurations


In the Advanced tab you can configure additional properties.

It is recommended to change these properties when using the Couchbase profile in Aggregation. For
more information about using the Couchbase profile in Aggregation, see Section 11.1.3.13, “Performance
Tuning with Couchbase Storage”.


Figure 239. The Couchbase Profile - Advanced Tab

Use the property mz.couchbase.transcoder.compressionThreshold to set the lower
size limit for compression of stored data. Any document that exceeds this limit will be compressed
when stored in Couchbase.
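
As a hedged example, assuming that the Properties field takes one property per line in key=value
format and that the threshold is given in bytes, compression of all documents larger than 1024 bytes
could be requested with:

mz.couchbase.transcoder.compressionThreshold=1024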

See the text in the Properties field for further information about the other properties that you can set,
or see the official Couchbase documentation at http://docs.couchbase.com for more detailed descriptions
of the different parameters.

9.2.4. Couchbase Profile Menus


The contents of the menus in the menu bar may change depending on which configuration type that
has been opened in the currently displayed tab. The Couchbase profile uses the standard menu items
that are visible for all configurations, and these are described in Section 3.1.1, “Configuration Menus”.

9.2.5. Couchbase Profile Buttons


The contents of the button panel may change depending on which configuration type that has been
opened in the currently displayed tab. The Couchbase profile uses the standard buttons that are visible
for all configurations, and these are described in Section 3.1.2, “Configuration Buttons”.

9.3. Database Profile


In a Database profile configuration, you can create database profiles for use in various MediationZone®
agents, profiles and APL functions. These include:

• Audit Profile


• Callable Statements (APL)

• Database Bulk Lookup Functions (APL)

• Database Table Related Functions (APL)

• Database Collection/Forwarding Agents

• Event Notification

• Prepared Statements (APL)

• Shared Table Profile

• SQL Collection/Forwarding Agents

• SQL Loader Agent

• Task Workflow Agents (SQL)

What a profile can be used for depends on the selected database type. The supported usage for each
database type is described in Section 9.3.5, “Database Types”.

The Database profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow.

To create a new Database profile configuration, click the New Configuration button in the upper left
part of the MediationZone® Desktop window, and then select Database Profile from the menu.

To open an existing Database profile configuration, double-click the Configuration in the Configuration
Navigator, or right-click a Configuration and then select Open Configuration(s)....

9.3.1. Database Profile Menus


The contents of the menus in the menu bar may change depending on which Configuration type that
has been opened in the currently displayed tab. The Database profile uses the standard menu items
that are visible for all Configurations, and these are described in Section 3.1.1, “Configuration Menus”.

There is one menu item that is specific to Database profile configurations, and it is described in the
following section:

9.3.1.1. The Edit Menu

Item Description
External References Select this menu item to Enable External References in an Agent Profile Field.
Please refer to Section 9.5.3, “Enabling External References in an Agent
Profile Field” for further information.

9.3.2. Database Profile Buttons


The contents of the button panel may change depending on which Configuration type that has been
opened in the currently displayed tab. The Database profile uses the standard buttons that are visible
for all Configurations, and these are described in Section 3.1.2, “Configuration Buttons”.

9.3.3. Database Connection Setup


The two radio buttons Default Connection Setup and Advanced Connection Setup make it possible
to display different connection options.


9.3.3.1. Default Connection Setup


Select the Default Connection Setup radio button to use a preconfigured connection string.

Figure 240. The Database Profile Configuration

Default Connection Setup Select to configure a default connection.


Advanced Connection Setup Select to configure the data source connection using a connection string.
For further information, see Section 9.3.3.2, “Advanced Connection
Setup”.
Database Type Select any of the available database types. You may need to
perform some preparations before attempting to connect to the database
for the first time. For information about required preparations, see
Section 9.3.5, “Database Types”.
Database Name Enter a name that identifies the database instance. For example, when
you configure the profile for an Oracle database, this field should con-
tain the SID.
Database Host Enter the host name or IP address of the host on which the database is
running. Type it exactly as when accessing it from any other application
within the network.
Port Number Enter the database network port.
Username Enter the database user name.
Password Enter the database password.
Try Connection Click to try the connection to the database, using the configured values.

9.3.3.2. Advanced Connection Setup


Select the Advanced Connection Setup radio button when you need to use a customized connection
string that contains additional properties.


Figure 241. Database Profile Configuration - Advanced Connection Setup

Default Connection Setup Select to configure a default connection. For further information, see
Section 9.3.3.1, “Default Connection Setup”.
Advanced Connection Setup Select to configure the data source connection using a connection
string.
Database Type Select any of the available database types. You may need
to perform some preparations before attempting to connect to the
database for the first time. For information about required preparations,
see Section 9.3.5, “Database Types”.
Connection String Enter a connection string containing information about the database
and the means of connecting to it.
Notification Service This field is used when the selected Database Type is Oracle. For
more information, see Section 9.3.5.4, “Oracle”
Username Enter the database user name.
Password Enter the database password.
Try Connection Click to try the connection to the database, using the configured values.

9.3.4. Enabling External Referencing


For information, see Section 9.5.3, “Enabling External References in an Agent Profile Field”.

9.3.5. Database Types


This section contains information that is specific to each of the supported database types:

• Derby

• MySQL

• Netezza

• Oracle

• PostgreSQL

• SAP HANA

• SQL Server

• Sybase IQ


• TimesTen

9.3.5.1. Derby
This section contains information that is specific to the database type Derby.

9.3.5.1.1. Supported Functions

The Derby database can be used with:

• Audit Profile

• Database Table Related Functions (APL)

• Database Collection/Forwarding Agents

• Event Notification

• SQL Collection/Forwarding Agents

• Task Workflow Agents (SQL)

9.3.5.1.2. Preparations

The drivers that are required to use the Derby database are bundled with the MediationZone® software
and no additional preparations are required.

9.3.5.2. MySQL
This section contains information that is specific to the database type MySQL.

9.3.5.2.1. Supported Functions

The MySQL database can be used with:

• Database Bulk Lookup Functions (APL)

• Database Table Related Functions (APL)

• Event Notification

• Prepared Statements (APL)

• SQL Collection/Forwarding Agents

• SQL Loader Agent

• Task Workflow Agents (SQL)

9.3.5.2.2. Preparations

This section describes preparations that you must perform before attempting to connect to a MySQL
database.

When performing table lookups to a MySQL database, the result may not be updated unless the Exe-
cution Context is restarted. Use the following statement to avoid this issue:

SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;

To decrease re-connection overhead, database connections are saved in a connection pool. To set the
connection pool size, open the executioncontext.xml file and edit its value:


<property name="mysql.connectionpool.maxlimit" value="45"/>

9.3.5.3. Netezza
This section contains information that is specific to the database type Netezza.

9.3.5.3.1. Supported Functions

The Netezza database can be used with:

• sqlExec (APL)

• SQL Loader Agent

For data loading, it is recommended that you use the SQL Loader Agent. You can also use the APL
function sqlExec for both data loading and unloading. For information on how to use sqlExec with
Netezza see Section 9.3.5.3.3, “APL Examples”. For further details, see also the IBM Netezza Data
Loading Guide.

9.3.5.3.2. Preparations

This section describes preparations that you must perform before attempting to connect to a Netezza
database.

A driver, provided by your IBM contact, is required to access the Netezza database. This driver must
be stored on each host (Platform or Execution Context) that will connect to a Netezza database.

For MediationZone® to access the Netezza database, the classpath must be specified. Edit the classpath
in the files platform.xml and executioncontext.xml for each Execution Context. For ex-
ample:

<classpath path="/opt/netezza/nzjdbc.jar"/>

After the classpath has been set, copy the jar file to the specified path.

The Platform and the Execution Contexts must be restarted for the changes in platform.xml and
executioncontext.xml to become effective.

9.3.5.3.3. APL Examples

This section gives examples of how to use the APL function sqlExec with a Netezza Database profile.

Example 55. Load data from an external table on the Netezza database host

initialize {
int rowcount = sqlExec("NETEZZA.NetezzaProfile",
"INSERT INTO mytable
SELECT * FROM EXTERNAL '/tmp/test.csv' USING (delim ',')");
}


Example 56. Load data from an external table on the Execution Context host

initialize {
int rowcount = sqlExec("NETEZZA.NetezzaProfile",
"INSERT INTO mytable SELECT * FROM EXTERNAL '/tmp/test.csv'
USING (delim ',' REMOTESOURCE 'JDBC')");
}

Example 57. Unload data to an external table on the Execution Context host

initialize {
int rowcount = sqlExec("NETEZZA.NetezzaProfile",
"CREATE EXTERNAL TABLE '/tmp/test.csv'
USING (DELIM ',' REMOTESOURCE 'JDBC')
AS SELECT * FROM mytable");
}

9.3.5.4. Oracle
This section contains information that is specific to the database type Oracle.

9.3.5.4.1. Supported Functions

The Oracle database can be used with:

• Audit Profile

• Callable Statements (APL)

• Database Bulk Lookup Functions (APL)

• Database Table Related Functions (APL)

• Database Collection/Forwarding Agents

• Event Notification

• Prepared Statements (APL)

• Shared Table Profile

• SQL Collection/Forwarding Agents

• Task Workflow Agents (SQL)

9.3.5.4.2. Preparations

If Oracle was not setup during installation of MediationZone® , you must perform additional installation
before attempting to connect to an Oracle database. For information about enabling client access to
Oracle, see the Installation Instructions.


9.3.5.4.3. Advanced Connection Configuration for Oracle RAC

The Advanced Connection Setup is used for Oracle RAC Configurations.

To make the Connection String text area and the Notification Service text field appear, select the
Advanced Connection Setup radio button. The Username, Password and Database Type fields will
remain.

If MediationZone® is installed with the Oracle database, the Oracle RAC functionality Fast Connection
Failover (FCF) is available. MediationZone® supports FCF; from the MediationZone® perspective this
means that there will normally be some exceptions generated during RAC instance failover.
When FCF is configured, MediationZone® detects a lost connection, clears the database connection
pool and reinitializes the connection pool.

During a RAC instance failover you might experience exceptions for example when database transac-
tions such as updates and inserts are done. Database exceptions are logged in the MediationZone®
system.

The Platform and Execution Contexts support the failover behavior. However, note that neither
database collection nor forwarding agents support FCF. These agents have a different type of database
connection pool implementation.

Figure 242. Database Profile Configuration - Advanced Connection Setup

Connection String In the text field a connection string can be entered. The connection string can
contain a SID or a service name. The string added will not be modified by the
underlying system.

If a connection string is longer than the text area space, a vertical scroll bar will be
displayed to enable viewing and editing of the connection string.

Notification Service Enter the Configuration that enables the Oracle Notification Service daemon (ONS)
to establish a Fast Connection Failover (FCF).

The ONS string that you enter should at least specify the agent ONS configuration
attribute, that is made up of a comma separated list of host:port pairs. The hosts
and ports represent the remote ONS daemons that are available on the RAC nodes.
For further information, see the installation guidelines for MediationZone® Oracle
RAC in the MediationZone® Installation User Guide.
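
As a hedged example, an ONS string listing two remote ONS daemons could look as follows. The nodes
attribute name is the standard ONS configuration attribute; the host names and the port 6200 are
placeholders that must be replaced with the values of your own RAC installation:

nodes=rac-node1.example.com:6200,rac-node2.example.com:6200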

9.3.5.5. PostgreSQL
This section contains information that is specific to the database type PostgreSQL.

9.3.5.5.1. Supported Functions

The PostgreSQL database can be used with:


• Database Bulk Lookup Functions (APL)

• Database Table Related Functions (APL)

• Event Notification

• Prepared Statements (APL)

• SQL Collection/Forwarding Agents

• Task Workflow Agents (SQL)

9.3.5.5.2. Preparations

The drivers that are required to use the PostgreSQL database are bundled with the MediationZone®
software and no additional preparations are required.

9.3.5.6. SAP HANA


This section contains information that is specific to the database type SAP HANA.

9.3.5.6.1. Supported Functions

The SAP HANA database can be used with the following functionality:

• Audit Profile

• Database Bulk Lookup Functions (APL)

• Database Table Related Functions (APL)

• Event Notification

• Prepared Statements (APL)

• SQL Collection/Forwarding Agents

• SQL Loader Agent

• Task Workflow Agents (SQL)

Note! The SAP HANA database does not guarantee 99.999% availability. Therefore, it is not
recommended to use MediationZone® with the JDBC connection to SAP HANA for real-time
applications that require 99.999% availability.

9.3.5.6.2. Preparations

A driver, provided by your SAP contact, is required to connect to a SAP HANA database from
MediationZone®. This driver must be stored on each host (Platform or Execution Context) that will
connect to a SAP HANA database.

The classpath must also be specified. Edit the classpath in the files platform.xml and
executioncontext.xml for each Execution Context. For example:

<classpath path="/opt/sapHana/ngdbc/ngdbc.jar"/>

After the classpath has been set, copy the jar file to the specified path.

The Platform and the Execution Contexts must be restarted for the changes in platform.xml and
executioncontext.xml to become effective.


9.3.5.7. SQL Server


This section contains information that is specific to the database type SQL Server.

9.3.5.7.1. Supported Functions

The SQL Server database can be used with:

• Audit Profile

• Database Bulk Lookup Functions (APL)

• Database Table Related Functions (APL)

• Database Collection/Forwarding Agents

• Event Notification

• Prepared Statements (APL)

• SQL Collection/Forwarding Agents

• Task Workflow Agents (SQL)

Note! For SQL Server, the column type timestamp is not supported in tables accessed by
MediationZone® . Use column type datetime instead. See also the System Administration
Guide for information about time zone settings.
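
As a hedged sketch of this recommendation, a table accessed by MediationZone® could declare its time
column as datetime (the table and column names below are hypothetical):

CREATE TABLE dbo.AUDIT_LOG (
    ENTRY_TIME datetime,       -- use datetime instead of timestamp
    MESSAGE    varchar(255)
);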

9.3.5.7.2. Preparations

The drivers that are required to use SQL Server database are bundled with the MediationZone® software
and no additional preparations are required.

9.3.5.8. Sybase IQ
This section contains information that is specific to the database type Sybase IQ.

9.3.5.8.1. Supported Functions

The Sybase IQ database can be used with:

• APL function sqlExec

• Event Notification

• SQL Collection/Forwarding Agents

• SQL Loader Agent

• Task Workflow Agents (SQL)

9.3.5.8.2. Preparations

The Sybase JDBC driver has to be downloaded to the Platform in order to connect to a Sybase IQ
database from MediationZone® .

You must proceed as follows:

1. Go to the Sybase web page and download jConnect for JDBC from the "Product Download Center":
http://www.sybase.com/download


2. Place the downloaded jar file in the $MZ_HOME/3pp directory.

3. Add this classpath to platform.xml:

<classpath path="3pp/jconn4.jar"/>

4. Restart the Platform and the Execution Contexts.

9.3.5.8.3. Close Pooled Connections

The APL function closePooledConnections enables you to close a pooled connection with the
Sybase IQ server. This feature helps you eliminate invalid connections.

Note! This function only closes inactive connections, regardless of how long the connections
have been idle.

int closePooledConnections (string dbProfile)

Parameters:

dbProfile        The name of the database where the table is stored, preceded by the folder name.

Returned Value   Void

Example 58.

persistent int profileUsageCnt;


....
if ( profileUsageCnt > 100 ) {
closePooledConnections("sybase_iq.mydb");
profileUsageCnt = 0;
}

9.3.5.8.4. Performance Tuning

The default maximum number of connections on an Execution Context is five. You can tune this
number by setting the property sybase.iq.pool.maxlimit in the executioncontext.xml file.

Example 59.

<property name="sybase.iq.pool.maxlimit" value="20"/>

By default there is no timeout value defined for the socket tied to a database connection. This means
that a running query could get stuck, in case the database suddenly becomes unreachable. To specify
a time out value, in milliseconds, set the property sybase.jdbc.socketread.timeout in the
executioncontext.xml file.

Example 60.

<property name="sybase.jdbc.socketread.timeout" value="600000"/>


Note! When using the timeout property, you must ensure that you set a limit that exceeds your longest
running query; otherwise you might terminate a connection while it is executing a query.

9.3.5.9. TimesTen
This section contains information that is specific to the database type TimesTen.

9.3.5.9.1. Supported Functions

The TimesTen database can be used with:

• Audit Profile

• Database Bulk Lookup Functions (APL)

• Database Table Related Functions (APL)

• Event Notification

• Prepared Statements (APL)

• Shared Table Profile

• SQL Collection/Forwarding Agents

• Task Workflow Agents (SQL)

When storing a date MIM value in TimesTen, do not use the DATE column type. Instead, use
the TIMESTAMP type.

9.3.5.9.2. Preparations

The TimesTen Client must be installed on every host (Platform or Execution Context) that is connected
to a TimesTen data source through MediationZone® .

Edit the files platform.xml and executioncontext.xml on the hosts that run TimesTen.
Assuming that TimesTen is installed at /opt/TimesTen, add the following line:
<classpath path="/opt/TimesTen/tt70/lib/ttjdbc6.jar"/>

Additionally, the LD_LIBRARY_PATH variable in the shell from which you launch the Platform or
Execution Context, should include the path to the TimesTen Client native library.

Example 61. $TT_HOME is set to the path where TimesTen is installed

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TT_HOME/tt70/lib

The Platform and the Execution Contexts must be restarted for the changes in platform.xml and
executioncontext.xml to become effective.

9.3.5.9.3. Performance Tuning

The direct driver requires that TimesTen is installed on the Execution Context hosts. Using the direct
driver improves performance.

To decrease re-connection overhead, database connections are saved in a connection pool. To configure
the connection pool size, set the property timesten.connectionpool.maxlimit in the
executioncontext.xml file.


Example 62.

<property name="timesten.connectionpool.maxlimit" value="45"/>

9.4. Distributed Storage Profile


9.4.1. Overview
The Distributed Storage profile enables you to access a distributed storage solution from APL without
having to provide details about its type or implementation.

The use of the Distributed Storage profile and profiles for specific distributed storage types, like
Couchbase and Redis, makes it easy to change the database setup with a minimum of impact on the
configured business logic. This simplifies the process of creating flexible real-time solutions with high
availability and performance.

Figure 243. Distributed Storage Profile Concept

APL provides functions to read, store and remove data in one or multiple distributed storage instances
within the same workflow. It also provides functions for transaction management and bulk processing.

For information about which APL functions are applicable for the Distributed Storage profile, see
the APL Reference Guide.

Note! When using Redis for the Distributed Storage, the following functions cannot be used:

beginTransaction

commitTransaction

rollbackTransaction

dsCreateKeyIterator

destroyKeyIterator

getNextKey


In the current version of MediationZone® , the Couchbase profile and the Redis profile are available
for use with the Distributed Storage profile. For further information about these profiles, see Section 9.2,
“Couchbase Profile” and Section 9.6, “Redis Profile”.

The Distributed Storage profile is loaded when you start a workflow that depends on it. Changes to
the profile become effective when you restart the workflow.

9.4.2. Configuration
1. To open the Distributed Storage profile configuration, click the New Configuration button in the
upper left part of the MediationZone® Desktop window, and then select Distributed Storage
Profile from the menu.

Figure 244. The Distributed Storage Profile Editor

2. Select a Storage Type from the drop-down list.

3. Click Browse... and select the storage profile you want to apply.

9.5. External Reference Profile


The External Reference profile enables you to load MediationZone® with configuration values that originate
from a properties file that is external to a workflow configuration.

The External Reference values are read during runtime when needed by the workflow or a profile.

The External Reference profile is loaded when you start a workflow that depends on it. Changes to
the profile become effective when you restart the workflow.

In this section you will find information about:

• The External Reference profile

• Configuration of a workflow with an external reference

• Enabling an agent profile field with external referencing

• Using passwords in External References

9.5.1. Profile Management


An External Reference profile configuration enables you to create and edit a profile that defines the
file from which the references originate and the names of the references within the MediationZone®
workflow.

9.5.1.1. External Reference Profile Menus


The contents of the menus in the menu bar may change depending on which Configuration type that
has been opened in the currently displayed tab. The External Reference profile uses the standard menu
items that are visible for all configurations, and these are described in Section 3.1.1, “Configuration
Menus”.


9.5.1.2. External Reference Profile Buttons


The contents of the button panel may change depending on which Configuration type that has been
opened in the currently displayed tab. The External Reference profile uses the standard buttons that
are visible for all Configurations, and these are described in Section 3.1.2, “Configuration Buttons”.

9.5.1.3. Creating an External Reference Profile


In the External Reference profile you specify a properties file that contains the External Reference
values.

Note! The properties file should reside on the MediationZone® platform host.

A properties file contains Key-Value pairs. The typical format of a properties file is:

Key1=Value1
Key2=Value2
Key3=Value3

The Value data type can be: a string, a boolean, a password or a numeric value.

Boolean values can be represented by true, false, yes, or no, and are not case sensitive.

Password values must be represented by a string that has been encrypted by the encryptpassword
command in mzsh.
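
For example, as shown in Example 63 later in this section, a password can be encrypted with the
default key as follows, and the returned string can then be pasted into the properties file:

mzsh mzadmin/<password> encryptpassword <passwordToEncrypt>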

Note! If you are using characters encoded with something other than iso-8859-1 in your property
file for External References, the property file has to be converted to ASCII by using the Java
tool native2ascii. See the JDK product documentation for further information about using nat-
ive2ascii.

In Figure 245, “The External Reference Profile Configuration”, the extRef.prop file contains the
following data:

cd1=/mnt/storage/col1
cd2=/mnt/storage/col2
cd3=/mnt/storage/col3

Note! If the file contains two or more identical keys with different values, the last value is the
one that is applied.

Add a backslash ("\") to continue the value on the next line. If the value is a multi-line string use
("\n\") to separate the rows.

key1=PrettyLongValueThat\
ContinuesOneTheSecondLine
key2=north\n\
center\n\
south

To create a new External Reference profile configuration, click the New Configuration button in the
upper left part of the MediationZone® Desktop window, and then select External Reference Profile
from the menu.


To open an existing External Reference Profile Configuration, double-click the configuration in the
Configuration Navigator, or right-click a configuration and then select Open Configuration(s)....

Figure 245. The External Reference Profile Configuration

External Reference Type From the drop-down list select the External Reference source type.
Properties File Enter the path and the name of the Properties file.
Local Key The name of the External Reference in MediationZone® .
Properties File Key The name of the External Reference in the Properties file.

9.5.1.3.1. To create an External Reference profile:

1. To create an External Reference profile configuration, click the New Configuration button in the
upper left part of the MediationZone® Desktop window, and then select External Reference
Profile from the menu.

2. Configure your External Reference profile and save it.

9.5.2. Configuration
You enable external referencing in MediationZone® by:

• Enabling external referencing of workflow table fields

• Assigning a workflow table field with an external reference

9.5.2.1. To Enable the Display of Fields in the Workflow Table:


1. To create a new workflow configuration, click the New Configuration button in the upper left part
of the MediationZone® Desktop window, and then select Workflow from the menu.

2. From the New Workflow dialog, select workflow type Batch.

3. From the button panel, click Workflow Properties to open the Workflow Properties dialog.

4. On the Workflow Table tab, check the Per Workflow or Default check-boxes for the fields that
you want displayed as columns in the Workflow Table.

5. Check Enable External Reference, and click Browse to enter your External Reference profile.
See Section 9.5.1.3.1, “To create an External Reference profile:”.

6. Click OK. You have now enabled the access to external references for selected workflow table
fields.


9.5.2.2. To Assign a Workflow Table Field with an External Reference:


1. At the bottom of the workflow configuration, in the workflow table, right-click a field and select
Enable External Reference. The field is marked with the external reference icon.

2. To enter a value either double-click the field, or right-click it and then select Edit Cell.

3. Enter the name of the reference (the Local Key) whose value you want applied to the field during the
workflow run-time, and then press Enter.

9.5.2.3. To Disable an External Reference Assigned Field:


At the bottom of the workflow configuration, in the workflow table, right-click an external reference
field and select Disable External Reference. The field is cleared of its value and the External Reference
icon.

9.5.3. Enabling External References in an Agent Profile Field


You enable external referencing of an agent profile field from the agent's profile configuration.

Note! External referencing is applicable only from the following agent profiles:

• Inter Workflow

• Database

• Archiving

• Aggregation

• Duplicate UDR

• Workflow Bridge

For further information on the agents listed above, see the relevant appendix sections.

9.5.3.1. To Enable an Agent Profile Field with External Referencing:


1. Open an agent's profile configuration.

2. Select External References from the Edit menu to open the external references view.

Figure 246. The External References View

Enable External Reference Check to enable external referencing of the agent profile fields.


Clear to disable external referencing of any agent profile field.


Profile Click Browse and select the External Reference profile you want to apply.
Enable Check to select the agent profile fields that you want enabled with
External Referencing.
Field Name The names of the agent profile fields.
External Reference Key The key, defined in the External Reference profile, whose value is applied to the field.

3. Check Enable External Reference, and click Browse to select your External Reference profile.

4. In the Selected Agent profile fields table, select the external reference keys to use by checking
Enable and filling in the External Reference Key field.

5. Click OK; you have now enabled external references for the selected profile field.

9.5.4. Using passwords in External References


In the Database profile and for several different agents, you can use passwords in the External Refer-
ences.

The password values must be represented by a string that has been encrypted with the mzsh encrypt-
password command.

When using the mzsh encryptpassword command you can select to use keys that have been
generated using the Java standard tool keytool. The keys to be used are determined by using aliases,
and if no alias is used, the default key will be used for the encryption. See the JDK product document-
ation for further information about using keytool in different scenarios.

Note! You have to use storetype JCEKS.

If aliases are to be used, the full path and password to the keystore have to be indicated by including
the mz.cryptoservice.keystore.path and mz.cryptoservice.keystore.password
properties in the platform.xml file. See the section describing System Properties in the System
Administration Guide for further information about these properties. The keystore must also contain
keys for all the aliases you want to use.

Note! The same keytool can be used for generating keys for RCP encryption. However, these
keys are of a different type and cannot be used for External References.


Example 63. Encrypting passwords with crypto service keystore keys

This is an example of how passwords can be encrypted with crypto service keystore keys:

1. Create a security key with the keytool:

keytool -genseckey -alias myAlias -keyalg AES -keystore myKeystore.jks \
  -keysize 128 -storepass myKeystorePassword -storetype JCEKS

Note!

• If you enter a -keysize that is larger than 128, you may get a message saying
that the JCE Unlimited Strength Jurisdiction Policy Files need to be installed. See the
Oracle product documentation for further information about this.

• The -storepass flag is optional. If you do not enter a -storepass you will
be prompted for a password.

• -storetype JCEKS is mandatory.

• You will be asked whether you want to use the same password for the key as for the
keystore; MediationZone® requires that the same password is used.

2. Place the keystore in a suitable directory.

3. Encrypt the password to the keystore using the mzsh encryptpassword command with
the default key:

mzsh mzadmin/<password> encryptpassword myKeystorePassword

The encrypted password is returned.

4. Add the following properties in platform.xml:

<property name="mz.cryptoservice.keystore.path"
          value="<suitable directory>/myKeystore.jks"/>
<property name="mz.cryptoservice.keystore.password"
          value="<the encrypted password>"/>

5. Encrypt the passwords with aliases that you want to use in your external references:

mzsh mzadmin/<password> encryptpassword -a myAlias <passwordToEncrypt>

The returned password string can now be pasted into your External References properties file
and then be used by either the Database profile, or any of the agents where passwords are
available via External References.
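
For illustration only, the following is a minimal sketch of how such an entry might look in an
External References properties file; the key name DB_PASSWORD is hypothetical and a standard
key=value properties format is assumed:

# Hypothetical External References properties entry
DB_PASSWORD=<encrypted string returned by mzsh encryptpassword>

The profile field or workflow field that is mapped to the key DB_PASSWORD then receives the
decrypted password at run-time.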

9.6. Redis Profile


A Redis profile configuration is used to create logical units of Redis instances, master and slave(s),
when Redis is used as the database. These profiles can then be used by various MediationZone® applications
such as workflows, audit profiles, etc.


The Redis profile is loaded when you start a workflow that depends on it. Changes to the profile become
effective when you restart the workflow.

To create a new Redis profile configuration, click the New Configuration button in the upper left part
of the MediationZone® Desktop window, and then select Redis Profile from the menu.

To open an existing Redis Profile Configuration, double-click the Configuration in the Configuration
Navigator, or right-click a Configuration and then select Open Configuration(s)....

In a Redis Profile Configuration, there are two tabs; General and Advanced.

9.6.1. General Configurations


The General tab is displayed by default.

Figure 247. The Redis Profile

In order to edit any Configurations in the selected profile, the Active check box has to be cleared and
the profile saved.

The following Configurations are available in the General tab:

Identity The Identity of the Redis profile is used as a lookup key when referencing the profile from a workflow, or other context, and has to be unique.

Type In this list you select the type of Redis profile you want to use; HA or Simple.

Active When this check box is selected and the profile is saved, the monitoring function will be activated.

Redis Instances In this table you add all the Redis instances you want to include in the profile. Each instance is configured with:

• Name - the name of the instance

• Host - the IP address of the instance host

• Port - the port number of the instance

• Priority - determines which slave instance should be promoted to master in case the master goes down. The available instance with the lowest value will be promoted to master.

Note! The priority will only be considered in case of a failover. In ordinary circumstances, the instance configured as master in the Configuration files will be master regardless of the priority.

Use password authentication If you want to use password authentication, select the Use password authentication check box and enter the password you want to use to log in to the Redis database. The password has to match the value set for the requirepass property in the config files described in the System Administration Guide; see the example after this table.

Pico Hosts By default the Redis profile is used for all pico instances, but if you want to restrict usage to specific picos, you can configure this section with:

• Restrict usage to selected pico instances - select this check box in order to select which picos you want to be able to use this Redis profile.

• Pico - add all the picos that you want to be able to use this Redis profile in this table.
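
The following is an illustrative sketch of the matching server-side setting, a requirepass directive
in a Redis configuration file; the password value is an example only and must match the password
entered in the profile:

# Example redis.conf entry (illustrative only)
requirepass mySecretRedisPassword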

9.6.2. Advanced Configurations


In the Advanced tab you can configure additional properties if you want. These can typically be left
unchanged in the standard Redis Configuration.

Figure 248. The Redis Profile - Advanced Tab

See the text in the Properties field for further information about the properties you can set.


9.6.3. Redis Profile Menus


The contents of the menus in the menu bar may change depending on which Configuration type
has been opened in the currently displayed tab. The Redis profile uses the standard menu items that
are visible for all Configurations, and these are described in Section 3.1.1, “Configuration Menus”.

9.6.4. Redis Profile Buttons


The contents of the button panel may change depending on which Configuration type has been
opened in the currently displayed tab. The Redis profile uses the standard buttons that are visible for
all Configurations, and these are described in Section 3.1.2, “Configuration Buttons”.

9.7. Shared Table Profile


This section describes the Shared Table profile. This profile enables workflow instances to share tables
for lookups.

Figure 249. Shared Table

Using the Table Lookup Service instead of adding tableCreate in each workflow instance will
increase the throughput, with fewer duplicated tables, fewer lookups, and reduced memory consumption.

The Table Lookup Service comprises a profile in which SQL queries are defined, and two APL func-
tions; one that references the profile and creates a shared table, and one that can be used for refreshing
the table data from the APL code.

The Shared Table profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow and each time you save the profile.

9.7.1. Memory Allocation


There are three different ways to allocate memory for the created tables. By default, the tables are kept
as Java objects in memory. The shared tables can also be configured to keep the tables as raw data
either on or off the heap. By using raw data, the overhead of java objects is removed and less memory
is required.

The type of memory allocation chosen for the shared tables is configured in the Shared Table profile
by selecting a Table Storage parameter and, if relevant, an Index Storage parameter, with the option
to select variable width varchar columns. For further information, see Section 9.7.2, “The Shared Table
Profile Configuration”.

For more information regarding memory allocation, see the System Administrator's guide.


9.7.2. The Shared Table Profile Configuration


The Shared Table profile configuration is opened by clicking on the New Configuration button in
Desktop and selecting the Shared Table Profile option.

Figure 250. The Shared Table Profile Configuration

9.7.2.1. Menus and Buttons


The contents of the menus in the menu bar may change depending on which configuration type
has been opened in the currently active tab. The Shared Table profile uses the standard menu items
and buttons that are visible for all Configurations, and these are described in Section 3.1, “Menus and
Buttons”.

9.7.2.2. Configuration
The Shared Table profile configuration contains the following settings:

Database Click on the Browse... button and select the Database profile you want to use.
Any type of database that has been configured in a database profile can be used.
See Section 9.3, “Database Profile” for further information.
Release Timeout (seconds) If this check box is selected, the table will be released when the entered number of seconds has passed since the workflows accessing the table were stopped. The entered number of seconds must be larger than 0.

If this check box is not selected, the table will stay available until the execution context is restarted.

Refresh Interval (seconds) Select this check box in order to refresh the data in the table with the interval entered. The entered number of seconds must be larger than 0.

If this check box is not selected, the table will only be refreshed if the APL function tableRefreshShared is used. For more information regarding the function, see Section 9.7.3.2, “tableRefreshShared”.


Note! The interval for checking if the table needs to be refreshed is 10 seconds, which is the
minimum time before a new refresh is performed.

In case a refresh fails, a new refresh is initiated every 10th second, until the refresh has finished
successfully.

Object Select this option to set the Table Storage to Object. If you select this option, the shared tables are stored as Java objects on the JVM heap.

On Heap Select this option to set the Table Storage to On Heap. If you select this option, the shared tables are stored in a compact format on the JVM heap. If you select On Heap, you must select an option for the Index Storage.

Off Heap Select this option to set the Table Storage to Off Heap. If you select this option, the shared tables are stored in a compact format outside the JVM heap.

Note! You are required to set the jdk parameter in the executioncontext.xml, for example:

<jdkarg value="-XX:MaxDirectMemorySize=4096M"/>

If you select Off Heap, you must select an option for the Index Storage.

Unsafe Select this option to set the Table Storage to Unsafe. If you select this option, the shared tables are stored in a compact format. If you select Unsafe, you must select an option for the Index Storage.

Primitive Lookup Select this option to set the Table Storage to Primitive Lookup. This provides simple lookup tables with a fast lookup function, but they are limited to two columns of type Int/Long for the key (column 1) and type Short/Int/Long for the value (column 2). Lookup operations on Primitive Lookup tables are limited to the equals operation on column 1.

Object Select this option to set the Index Storage to Object. If you select this option, the index is stored as Java objects on the JVM heap. This option is only available if you have selected On Heap, Off Heap or Unsafe for Table Storage.

Pointer Select this option to set the Index Storage to Pointer. If you select this option, the index is stored as pointers to the table data. This option is only available if you have selected On Heap, Off Heap or Unsafe for Table Storage.

Cached Long/Int Pointer Select this option to set the Index Storage to Cached Long/Int Pointer. This option is only available if you have selected On Heap, Off Heap or Unsafe for Table Storage. For numeric index columns, the Cached Long/Int Pointer can be used for faster lookups, but at the cost of slightly higher memory consumption.

Variable Width Varchar Columns Select this check box to enable variable width storage of varchar columns. This reduces memory usage for columns that are wide and of varying width.

SQL Load Statement In this field, an SQL SELECT statement should be entered in order to create the contents of the table returned by the tableCreateShared APL function.


Example 64.

For example,

SELECT key,value FROM MyTable

will return a table named MyTable with the columns key and value when
the tableCreateShared function is used together with this profile.

If no data has been fetched from the database, SQL errors in the table lookup will
cause runtime errors (workflow aborts). However, if data has already been fetched
from the database then this data will be used. This will also be logged in the System
Log.
Table Indices If you want to create an index for one or several columns of the shared table, these
columns can be added in this field by clicking on the Add... button and adding
the columns for which you want to create an index. The index will start with 0
for the first column.

Note! An index will not be created unless there are at least five rows in the
table.

9.7.3. APL
The following functions are included for the Table Lookup Service:

• tableCreateShared

• tableRefreshShared

9.7.3.1. tableCreateShared
Returns a shared table that holds the result of the database query entered in the Shared Table profile.

table tableCreateShared
( string profileName )

Parameters:

profileName Name of the Shared Table profile you want to use.


Returns A table containing the columns returned by the SQL query in the specified Shared
Table profile. The table can be shared by several workflow instances.

Example 65.

initialize {
table myTable = tableCreateShared("Folder.mySharedProfile");
}

will create a shared table called myTable with the columns returned by the SQL query in the
mySharedProfile Shared Table profile.


9.7.3.2. tableRefreshShared
This function can be used for refreshing the data for a shared table configured with a Shared Table
profile. The table will be updated for all workflow instances that are using the table and are running
on the same EC.

table tableRefreshShared
( string profileName )

Parameters:

profileName Name of the Shared Table profile you want to refresh data for.
Returns A refreshed shared table.

Example 66.

table myTable = tableRefreshShared("Folder.mySharedProfile");

will return the shared table called myTable, which uses the mySharedProfile, with refreshed
data.
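
The following is a minimal sketch of how the two functions can be combined in an Analysis agent.
The profile name Folder.mySharedProfile is hypothetical, and the refresh is triggered on the
first consumed UDR purely to keep the example short:

table sharedLookup;
boolean refreshed = false;

initialize {
    // Create, or attach to, the shared table defined by the profile
    sharedLookup = tableCreateShared("Folder.mySharedProfile");
}

consume {
    // Refresh the shared table data once; any APL code path could trigger this
    if (refreshed == false) {
        sharedLookup = tableRefreshShared("Folder.mySharedProfile");
        refreshed = true;
    }
    udrRoute(input);
}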


10. Appendix II - Collection agents

10.1. AFT/TCP Agent


10.1.1. Introduction
This section describes the AFT/TCP agent. This is an extension Batch agent on the DigitalRoute®
MediationZone® Platform.

10.1.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• The Nortel DMS-GSP with EIU and AFT functionality

• Standard TCP/IP

10.1.2. AFT/TCP Agent


The AFT/TCP Agent allows files from Nortel DMS-GSP switches with EIU and AFT functionality
to be collected and inserted into a MediationZone® workflow, using the TCP/IP protocol. An AFT/TCP
agent does not communicate directly with a Nortel switch. Instead there is an EIU service in between,
responsible for keeping track of all transactions.

When the workflow is activated, the agent connects to the EIU service and waits for data packets from
the DMS-GSP switch to arrive on a predefined port. The AFT/TCP agent may not be combined with
other collectors in the same workflow.

10.1.2.1. Configuration
The AFT/TCP agent configuration window is displayed when you double-click the agent in a workflow,
or right-click it and select Configuration...

Figure 251. AFT/TCP agent configuration window.

EIU Host Host name or IP-address of the Ethernet Interface Unit.


EIU Port TCP/IP port number, corresponding to the EIU Host. Default port is 7530.
MTP Trace If enabled, the agent will trace all MTP messages in hex. The messages will be traced
as ordinary events.


Keep Alive If enabled, the agent tells the system to perform a continuous test that the remote host
is up and running. The keep-alive functionality will make sure that bad connections
are discovered.

The keep-alive interval is system dependent, and can be displayed with the following
command on Sun Solaris:

# /usr/sbin/ndd /dev/tcp tcp_keepalive_interval

Default is 7200000 (milliseconds).


Timeout (sec) The value defines the maximum time, in seconds, to wait for replies from commands.
0 (zero) means wait forever.

10.1.2.2. Transaction Behavior


This section includes information about the AFT/TCP agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

10.1.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.

10.1.2.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS. The batch is
then handled through the ECS Inspector or ECS Collection agent.

If the Cancel Batch behavior, defined on workflow level, is configured to abort the workflow, the
agent will never receive the last Cancel Batch message. In this situation ECS will not be involved,
and the file will not be affected but may be marked for PTF at the DMS.

10.1.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The AFT/TCP agent produces bytearray types.

10.1.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”


10.1.2.4.1. Publishes

MIM Parameter Description

File Retrieval Timestamp This parameter contains a timestamp that indicates when the file transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source Filename This parameter contains the name of the currently processed file, as defined at the source.

Source Filename is of the string type and is defined as a header MIM context type.

Source Host This parameter contains the Host as defined in the configuration window of the agent.

Source Host is of the string type and is defined as a header MIM context type.

Block sequence no This parameter contains the block sequence number of the block currently consumed.

Block sequence no is of the int type and is defined as a batch MIM context type.

10.1.2.4.2. Accesses

The agent does not itself access any MIM resources.

10.1.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: filename

Reported along with the name of the source file that has been collected and inserted into the workflow.

• File cancelled: filename

Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; refer to Section 10.1.2.2, “Transaction Behavior” for
further information.

10.1.2.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Connection established to EIU: hostname portnumber

Reported along with the number set in EIU Port, when a connection to the EIU Host has been
established. The message will only be displayed if debug is activated on workflow level.


• PFT for: filename rejected

Reported when the sequence number of the last block, acknowledged by the downstream collector
in the switch start file output request, is not equal to 0 (zero). The switch will then try to send the
whole file again. The message will only be displayed if debug is activated on workflow level.

10.2. FTAM/5ESS Agent


10.2.1. Introduction
This section describes the FTAM/5ESS agent. This is an extension agent of the DigitalRoute® Medi-
ationZone® platform.

10.2.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• The Lucent Data Link

• FTAM

10.2.1.2. Documentation
• Lucent Data Link Interface Specification

10.2.2. FTAM/5ESS Agent


The FTAM/5ESS agent allows files from Lucent 5ESS switches to be collected and inserted into a
MediationZone® workflow using the FTAM protocol.

The agent does not communicate directly with the 5ESS switch. Instead it connects via the FTAM
Interface service that must be running on a host in the MediationZone® network. The advantage with
this implementation is that only one host has to be equipped with FTAM software. For further inform-
ation about how the FTAM Interface service is maneuvered, see Section 10.2.3, “FTAM Interface
Service”.

When activated, the agent connects to the FTAM Interface service and requests to retrieve the next
file, based on the file prefix name and the file generation number from the named host.

The FTAM/5ESS agent may not be combined with other collectors in the same workflow.

10.2.2.1. Configuration
The FTAM/5ESS agent configuration window is displayed when you double-click the agent in a
workflow, or right-click it and select Configuration...


10.2.2.1.1. Switch Tab

Figure 252. FTAM/5ESS agent configuration window, Switch tab.

Host Name The host name of the 5ESS switch.


User Name User name as defined in the 5ESS switch.
Password Password related to the User Name.
Path and File Prefix Name The beginning of the filename to retrieve. This can be a string pattern
for filename matching according to Lucent specification.
Stop at Number If specified, the agent will collect all sub-files between File Generation
Number and Stop at Number and then stop the workflow. If not
specified, the agent will continue collecting sub-files forever according
to the values set in Wrap On/Wrap To.
Wrap On/Wrap To If Stop at Number is not set, the agent will continue collecting sub-
files forever, wrapping to the Wrap To number after reaching the
Wrap On number. Default is wrapping to 0000 after reaching 9999.
Selection Order To specify the logical state of the files to be collected. Possible values
are specified in the switch documentation.
Use File Generation Number Determines if file generation number and selection order will be part
of the filename.
Remove After Collection Determines if the file will be removed from the 5ESS switch after
collection.


Example 67.

This example aims to demonstrate filename generation.

The configuration in Figure 252, “FTAM/5ESS agent configuration window, Switch tab.” can
be used to retrieve the following switch files in SAFE state:

06091C.2216
06092C.2217
06093C.2218
06094C.2219
06094D.2220
...

When the agent is asking for the first file (06091C.2216) the corresponding file name pattern
will be *[A-Z].2216;STATE=SAFE.

10.2.2.1.2. Interface Tab

Figure 253. FTAM/5ESS agent configuration window, Interface tab.

Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.

10.2.2.1.3. Advanced Tab

Figure 254. FTAM/5ESS agent configuration window, Advanced tab.

Account Optional account name, as defined in the 5ESS switch, if utilized.


File Store Password Optional password for file access, as defined in the 5ESS switch, if utilized.
Document Type A list from which the data format is selected; unstructured text, record-oriented
text, or unstructured binary.

10.2.2.2. Transaction Behavior


This section includes information about the FTAM/5ESS agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.


10.2.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.

10.2.2.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior, defined on workflow level, is configured to abort the workflow, the
agent will never receive the last Cancel Batch message. In this situation ECS will not be involved,
and the file will not be deleted even if Remove After Collection is enabled.

10.2.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

10.2.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see the
MediationZone® Desktop user's guide.

10.2.2.4.1. Publishes

MIM Parameter Description

File Retrieval Timestamp This MIM parameter contains a timestamp that indicates when the file transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source Filename This MIM parameter contains the name of the currently processed file, as defined at the source.

Source Filename is of the string type and is defined as a header MIM context type.

Source Host This MIM parameter contains host name/IP address as defined in the Switch tab.

Source Host is of the string type and is defined as a header MIM context type.

Source User Name This MIM parameter contains user name as defined in the Switch tab.

Source User Name is of the string type and is defined as a header MIM context type.


10.2.2.4.2. Accesses

The agent does not itself access any MIM resources.

10.2.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see the MediationZone® Desktop user's
guide.

• Ready with file: filename

Reported along with the name of the source file that has been collected and inserted into the workflow.

• File cancelled: filename

Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted. For further information, see Section 10.2.2.2, “Transaction
Behavior”.

10.2.2.6. Debug Events


There are no debug events for this agent.

10.2.3. FTAM Interface Service


The interface is a stand-alone program for Linux that communicates with MediationZone® through
the specified TCP port.

The interface consists of the following binaries:

• start_ftaminterface - Starts the interface.

• status_ftaminterface - Reports status of the interface.

• stop_ftaminterface - Terminates the interface.

• ftamagent - Internal communication agent.

Figure 255. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.


It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:

$ /opt/mz/ftam/bin/start_ftaminterface ROOT_DIR=/var/ftamroot PORT=16702


The port name mzftam not found in /etc/services, the default value 16702
will be used.
Binary directory: /opt/ftam_isode_if
FTAM root directory: /var/ftam_root
FTAM interface port: 16702
INFO: The interface has been started.
$

The root directory contains internal state information for recovery and log files.

10.3. FTAM/EWSD Agent


10.3.1. Introduction
This section describes the FTAM/EWSD agent. This is an extension agent on the DigitalRoute® Me-
diationZone® platform.

10.3.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• FTAM

10.3.2. FTAM/EWSD Agent


The FTAM/EWSD agent allows files from Siemens EWSD switches to be collected and inserted into
a MediationZone® workflow using the FTAM protocol. The agent does not communicate directly
with the EWSD switch. Instead it goes via the FTAM Interface service that must be running on a host
in the MediationZone® network. The advantage with this implementation is that only one host has to
be equipped with FTAM software. For further information about how the FTAM Interface service is
maneuvered, see Section 10.3.3, “FTAM Interface Service”.

The FTAM/EWSD agent usually collects a cyclic file, since that is how Siemens EWSD switches generate
traffic data. When activated, the agent connects to the FTAM Interface service and requests the new
data. The switch keeps track of the data to be collected by using two parameters: begin and end
copy area pointer.

After the FTAM/EWSD agent has safely collected the data, a delete request is issued, resulting in
the begin copy area pointer being moved to the end. The collected data is saved in files,
each containing one copy area (all data from one(1) activation).

The FTAM/EWSD agent may not be combined with other collectors in the same workflow.

Since the FTAM/EWSD agent is the active part, it has to be scheduled to be invoked periodically.


10.3.2.1. Configuration
The FTAM/EWSD agent configuration window is displayed when you double-click the agent in a
workflow, or right-click it and select Configuration...

10.3.2.1.1. Switch Tab

Figure 256. FTAM/EWSD agent configuration window, Switch tab.

Host Name The host name of the EWSD switch.


User Name User name as defined at the EWSD switch.
Password Password related to the User Name.
Filename Name of the cyclic file into which the EWSD switch stores data, and from
which the agent will be collecting.
Remote File is Cyclic If disabled, static files will be collected, that is, the switch does not produce
a cyclic file.

10.3.2.1.2. Interface Tab

Figure 257. FTAM/EWSD agent configuration window, Interface tab.

Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.


10.3.2.1.3. Advanced Tab

Figure 258. FTAM/EWSD agent configuration window, Advanced tab.

Account Optional account name, as defined at the EWSD switch, if utilized.


File Store Password Optional password for file access, as defined at the EWSD switch, if utilized.
Document Type A list from which the data format is selected; unstructured text, record-oriented
text, or unstructured binary.
Number of Retries The number of times to retry upon failing connection.
Delay (sec) The number of seconds between each retry attempt.

10.3.2.2. Transaction Behavior


This section includes information about the FTAM/EWSD agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

10.3.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.

10.3.2.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior, defined on workflow level, is configured to abort the workflow, the
agent will never receive the last Cancel Batch message. In this situation ECS will not be involved,
and the established copy area will not be deleted.

10.3.2.3. Introspection
The introspection is the type of data an agent expects and delivers.


The agent produces bytearray types.

10.3.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see the
MediationZone® Desktop user's guide.

10.3.2.4.1. Publishes

MIM Parameter Description

File Retrieval Timestamp This MIM parameter contains a timestamp indicating when the file transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source Filename This MIM parameter contains the Filename, merged with an automatically generated file sequence number of five (5) digits (00001-99999).

Source Filename is of the string type and is defined as a header MIM context type.

Source Host This MIM parameter contains the Host Name.

Source Host is of the string type and is defined as a header MIM context type.

Source User Name This MIM parameter contains the User Name.

Source User Name is of the string type and is defined as a header MIM context type.

10.3.2.4.2. Accesses

The agent does not itself access any MIM resources.

10.3.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see the MediationZone® Desktop user's
guide.

• Ready with file: filename

Reported along with the name of the source file, when the file given in Filename has been collected
and inserted into the workflow.

• File cancelled: filename

Reported along with the name of the source file, each time a Cancel Batch message is received. This
assumes the workflow is not aborted. For further information, see Section 10.3.2.2, “Transaction
Behavior”.

10.3.2.6. Debug Events


There are no debug events for this agent.


10.3.3. FTAM Interface Service


The interface is a stand-alone program for Linux that communicates with MediationZone® through
the specified TCP port.

The interface consists of the following binaries:

• start_ftaminterface - Starts the interface.

• status_ftaminterface - Reports status of the interface.

• stop_ftaminterface - Terminates the interface.

• ftamagent - Internal communication agent.

Figure 259. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.

It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:

$ /opt/mz/ftam/bin/start_ftaminterface ROOT_DIR=/var/ftamroot PORT=16702


The port name mzftam not found in /etc/services, the default value 16702
will be used.
Binary directory: /opt/ftam_isode_if
FTAM root directory: /var/ftam_root
FTAM interface port: 16702
INFO: The interface has been started.
$

The root directory contains internal state information for recovery and log files.

10.4. FTAM/IOG Agent


10.4.1. Introduction
This section describes the FTAM/IOG agent. This is an extension agent on the DigitalRoute® Medi-
ationZone® platform.


10.4.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• FTAM

10.4.1.2. Documentation
• FTAM Responder Application, 56/155 17-ANZ 216 01 Uen, Ericsson

10.4.2. FTAM/IOG Agent


The FTAM/IOG agent allows files from Ericsson IOG to be collected and inserted into a Medi-
ationZone® workflow using the FTAM protocol. The agent does not communicate directly with the
IOG. Instead it goes via the FTAM Interface service that must be running on a host in the Medi-
ationZone® network. The advantage with this implementation is that only one host has to be equipped
with FTAM software. For further information about how the FTAM Interface service is maneuvered,
see Section 10.4.3, “FTAM Interface Service”.

The FTAM/IOG agent collects subfiles that are part of a composite main file on the IOG. When activ-
ated, the agent connects to the FTAM Interface service and requests the new data. The FTAM/IOG
agent keeps track of the data to be collected by reading the contents of one or more directory control
files, which maintain the status of the subfiles.

The FTAM/IOG agent may not be combined with other collectors in the same workflow.

10.4.2.1. Configuration
The FTAM/IOG agent configuration window is displayed when you double-click the agent in a
workflow, or right-click it and select Configuration...

10.4.2.1.1. Switch Tab

Figure 260. FTAM/IOG agent configuration window, Switch tab.

Host Name The host name of the IOG.


User Name User name as defined in the IOG.
Password Password related to the User Name.


Directory Control File 1 The name of the first directory control file as specified in the IOG. Ad-
ditional directory control files can be specified in the Advanced tab.
Main Filename The name of the main file as defined in the IOG.
Stop at Subfile A subfile sequence number in the range of 0001-9999.
Regular Expression A regular expression according to Java syntax, using the subfilename as
input. The result is the names of the subfiles to be collected.
Remove After Collection If enabled, the source files will be removed from the IOG after the col-
lection.

Note! For further information about regular expressions in Java, see:

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
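
The following pattern is purely illustrative; it assumes subfile names that end in a four-digit
sequence number and must be adapted to the actual naming used in your IOG:

.*[0-9]{4}

With this pattern, a subfile name ending in four digits, for example AMA0001, is collected, while
other names are skipped.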

10.4.2.1.2. Interface Tab

Figure 261. FTAM/IOG agent configuration window, Interface tab.

Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.

10.4.2.1.3. Advanced Tab

Figure 262. FTAM/IOG agent configuration window, Advanced tab.

Account Optional account name, as defined in the IOG, if utilized.


File Store Password Optional password for file access, as defined in the IOG, if utilized.
Document Type A list from which the data format is selected; unstructured text, record-
oriented text, or unstructured binary.


Directory Control File [2-4] The names of up to three additional directory control files as defined
in the IOG.

10.4.2.2. Transaction Behavior


This section includes information about the FTAM/IOG agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

10.4.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.

10.4.2.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior, defined on workflow level, is configured to abort the workflow, the
agent will never receive the last Cancel Batch message. In this situation ECS will not be involved,
and the established copy area will not be deleted.

10.4.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

10.4.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see the
MediationZone® Desktop user's guide.

10.4.2.4.1. Publishes

MIM Parameter Description

File Retrieval Timestamp This MIM parameter contains a timestamp indicating when the file transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source Filename This MIM parameter contains the Filename, merged with an automatically generated file sequence number of five (5) digits (00001-99999).

Source Filename is of the string type and is defined as a header MIM context type.

Source Host This MIM parameter contains the Host Name.

Source Host is of the string type and is defined as a header MIM context type.

Source User Name This MIM parameter contains the User Name.

Source User Name is of the string type and is defined as a header MIM context type.

10.4.2.4.2. Accesses

The agent does not itself access any MIM resources.

10.4.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see the MediationZone® Desktop user's
guide.

• Ready with file: filename

Reported along with the name of the source file, when the file given in Filename has been collected
and inserted into the workflow.

• File cancelled: filename

Reported along with the name of the source file, each time a Cancel Batch message is received. This
assumes the workflow is not aborted. For further information, see Section 10.4.2.2, “Transaction
Behavior”.

10.4.2.6. Debug Events


There are no debug events for this agent.

10.4.3. FTAM Interface Service


The interface is a stand-alone program for Linux that communicates with MediationZone® through
the specified TCP port.

The interface consists of the following binaries:

• start_ftaminterface - Starts the interface.

• status_ftaminterface - Reports status of the interface.

• stop_ftaminterface - Terminates the interface.

• ftamagent - Internal communication agent.


Figure 263. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.

It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:

$ /opt/mz/ftam/bin/start_ftaminterface ROOT_DIR=/var/ftamroot PORT=16702


The port name mzftam not found in /etc/services, the default value 16702
will be used.
Binary directory: /opt/ftam_isode_if
FTAM root directory: /var/ftam_root
FTAM interface port: 16702
INFO: The interface has been started.
$

The root directory contains internal state information for recovery and log files.

10.5. FTAM/Nokia Agent


10.5.1. Introduction
This section describes the FTAM/Nokia agent. This is an extension agent on the DigitalRoute® Medi-
ationZone® platform.

10.5.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• FTAM

10.5.1.2. Documentation
• Nokia Data File Transfer from VDS Device to Postprocessing System

345
Desktop 7.1

10.5.2. FTAM/Nokia Agent


The FTAM/Nokia agent allows files from Nokia DX200 switches to be collected and inserted into a
MediationZone® workflow using the FTAM protocol. The agent does not communicate directly with
the DX200 switch. Instead it goes via the FTAM Interface service that must be running on a host in
the MediationZone® network. The advantage with this implementation is that only one host has to be
equipped with FTAM software. For further information about how the FTAM Interface service is
maneuvered, see Section 10.5.3, “FTAM Interface Service”.

The FTAM/Nokia agent collects files stored in a circular buffer in the Virtual Storing Device (VDS)
of the DX200 switch. When activated, the agent first reads the transfer control file, TTTCOF, to check
the validity of its timestamps. The agent then reads the storage control file, TTSCOF, to check the
number of data files that can be collected. Once the files are safely transferred, the agent updates
TTTCOF, enabling the VDS to overwrite the collected data.

The FTAM/Nokia agent may not be combined with other collectors in the same workflow.

10.5.2.1. Configuration
The FTAM/Nokia agent configuration window is displayed when you double-click the agent in a
workflow, or right-click it and select Configuration...

10.5.2.1.1. Switch Tab

Figure 264. FTAM/Nokia agent configuration window, Switch tab.

Host Name The host name of the DX200 switch.


User Name User name as defined in the DX200 switch.
Password Password related to the User Name.
Root Directory Absolute pathname of the source directory on the VDS file sys-
tem.
VDS Device No A number identifying the VDS.
Local Time Zone at The Switch The time zone used by the switch.
Correct TTTCOF Timestamps Before Future dated timestamps in the TTTCOF are corrected if they
are older than the date in this field. The FTAM/Nokia agent sets
the corrected timestamp to the current date and time.
Retrieve GZIPed Files Determines if the agent will decompress the files before passing
them on in the workflow.


10.5.2.1.2. Interface Tab

Figure 265. FTAM/Nokia agent configuration window, Interface tab.

Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.

10.5.2.1.3. Advanced Tab

Figure 266. FTAM/Nokia agent configuration window, Advanced tab.

Account Optional account name, as defined in the DX200 switch, if utilized.


File Store Password Optional password for file access, as defined in the DX200 switch,
if utilized.
Max Allowed Time Diff (minutes) The maximum number of minutes forward in time that is allowed
for valid timestamps in the TTTCOF file.
Document Type A list from which the data format is selected; unstructured text,
record-oriented text, or unstructured binary.
Prefix The prefix of the data file names.
Number Position The position of the sequence number in the data file names.
Suffix The suffix of the data file names.
Ends with VDS device no The data files have the number of the VDS device added to the
name.


10.5.2.2. Transaction Behavior


This section includes information about the FTAM/Nokia agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

10.5.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.

10.5.2.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior, defined on workflow level, is configured to abort the workflow, the
agent will never receive the last Cancel Batch message. In this situation ECS will not be involved,
and the established copy area will not be deleted.

10.5.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

10.5.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see the
MediationZone® Desktop user's guide.

10.5.2.4.1. Publishes

MIM Parameter Description

File Retrieval Timestamp This MIM parameter contains a timestamp indicating when the file transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source Filename This MIM parameter contains the Filename, merged with an automatically generated file sequence number of five (5) digits (00001-99999).

Source Filename is of the string type and is defined as a header MIM context type.

Source Host This MIM parameter contains the Host Name.

Source Host is of the string type and is defined as a header MIM context type.

Source User Name This MIM parameter contains the User Name.

Source User Name is of the string type and is defined as a header MIM context type.

10.5.2.4.2. Accesses

The agent does not itself access any MIM resources.

10.5.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see the MediationZone® Desktop user's
guide.

• Ready with file: filename

Reported along with the name of the source file, when the file given in Filename has been collected
and inserted into the workflow.

• File cancelled: filename

Reported along with the name of the source file, each time a Cancel Batch message is received. This
assumes the workflow is not aborted. For further information, see Section 10.5.2.2, “Transaction
Behavior”.

10.5.2.6. Debug Events


There are no debug events for this agent.

10.5.3. FTAM Interface Service


The interface is a stand-alone program for Linux that communicates with MediationZone® through
the specified TCP port.

The interface consists of the following binaries:

• start_ftaminterface - Starts the interface.

• status_ftaminterface - Reports status of the interface.

• stop_ftaminterface - Terminates the interface.

• ftamagent - Internal communication agent.


Figure 267. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.

It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:

$ /opt/mz/ftam/bin/start_ftaminterface ROOT_DIR=/var/ftamroot PORT=16702


The port name mzftam not found in /etc/services, the default value 16702
will be used.
Binary directory: /opt/ftam_isode_if
FTAM root directory: /var/ftam_root
FTAM interface port: 16702
INFO: The interface has been started.
$

The root directory contains internal state information for recovery and log files.

10.6. FTAM/S12 Agent


10.6.1. Introduction
This section describes the FTAM/S12 agent. This is an extension agent of the DigitalRoute® Medi-
ationZone® platform.

10.6.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• The Alcatel S12

• FTAM

10.6.1.2. Documentation
• Alcatel STR-FTAM-SERVICES 214 7944 AAAA


10.6.2. FTAM/S12 Agent


The FTAM/S12 agent allows data from cyclic files in Alcatel S12 switches to be collected and inserted
into a MediationZone® workflow using the FTAM protocol.

The agent does not communicate directly with the S12 switch. Instead it connects via the FTAM Inter-
face service that must be running on a host in the MediationZone® network. The advantage with this
implementation is that only one host has to be equipped with FTAM software. For further information
about how the FTAM Interface service is maneuvered, see Section 10.6.3, “FTAM Interface Service”.

When activated, the agent connects to the FTAM Interface service and requests all available data from
the specified cyclic file.

The FTAM/S12 agent may not be combined with other collectors in the same workflow.

10.6.2.1. Configuration
The FTAM/S12 agent configuration window is displayed when you double-click the agent in a
workflow, or right-click it and select Configuration...

10.6.2.1.1. Switch Tab

Figure 268. FTAM/S12 agent configuration window, Switch tab.

Host Name The host name of the S12 switch.


User Name User name as defined at the S12 switch.
Password Password related to the User Name.
Filename The name of the cyclic file at the switch.

10.6.2.1.2. Interface Tab

Figure 269. FTAM/S12 agent configuration window, Interface tab.


Host Host name or IP-address of the host where the FTAM Interface service is running. This field
should contain the host alias, which is located in the <ROOT_DIR>/etc/host_def folder.
Port Port number on the Host, on which the FTAM Interface service is listening. The default port
number is 16702.

10.6.2.1.3. Advanced Tab

Figure 270. FTAM/S12 agent configuration window, Advanced tab.

Account Optional account name, as defined at the S12 switch, if utilized.


File Store Password Optional password for file access, as defined at the S12 switch, if utilized.
Document Type A list from which the data format is selected; unstructured text, record-oriented
text, or unstructured binary.
Number of Retries The number of times to commence local retries upon temporary errors.
Delay (sec) The number of seconds between each retry attempt.

10.6.2.2. Transaction Behavior


This section includes information about the FTAM/S12 agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

10.6.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Emitted right before the first byte of each collected file is fed into a workflow.
End Batch Emitted just after the last byte of each collected file has been fed into the system.

10.6.2.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.


If the Cancel Batch behavior, defined on workflow level, is configured to abort the workflow, the
agent will never receive the last Cancel Batch message. In this situation ECS will not be involved,
the file will not be moved, and the established copy area will not be deleted.

10.6.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

10.6.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see the
MediationZone® Desktop user's guide.

10.6.2.4.1. Publishes

MIM Parameter Description

File Retrieval Timestamp This MIM parameter contains a timestamp that indicates when the file transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source Filename This MIM parameter contains the name of the currently processed file as defined in the Switch tab.

Source Filename is of the string type and is defined as a header MIM context type.

Source Host This MIM parameter contains host name/IP address as defined in the Switch tab.

Source Host is of the string type and is defined as a header MIM context type.

Source User Name This MIM parameter contains user name as defined in the Switch tab.

Source User Name is of the string type and is defined as a header MIM context type.

10.6.2.4.2. Accesses

The agent does not itself access any MIM resources.

10.6.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see the MediationZone® Desktop user's
guide.

• Ready with file: filename

Reported along with the name of the source file that has been collected and inserted into the workflow.


• File cancelled: filename

Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted. For further information, see Section 10.6.2.2, “Transaction
Behavior”.

10.6.2.6. Debug Events


There are no debug events for this agent.

10.6.3. FTAM Interface Service


The interface is a stand-alone program for Linux that communicates with MediationZone® through
the specified TCP port.

The interface consists of the following binaries:

• start_ftaminterface - Starts the interface.

• status_ftaminterface - Reports status of the interface.

• stop_ftaminterface - Terminates the interface.

• ftamagent - Internal communication agent.

Figure 271. Communication between FTAM Collection Agents and Network Element, via the
FTAM Interface Service.

It is important to start the interface by using a full path name. If the binaries are placed in
/opt/mz/ftam/bin the same path must be used when the interface is started. The following
command can be used to start the interface:

$ /opt/mz/ftam/bin/start_ftaminterface ROOT_DIR=/var/ftamroot PORT=16702


The port name mzftam not found in /etc/services, the default value 16702
will be used.
Binary directory: /opt/ftam_isode_if
FTAM root directory: /var/ftam_root
FTAM interface port: 16702
INFO: The interface has been started.
$

The root directory contains internal state information for recovery and log files.


10.7. FTP/DX200 Collection Agent


10.7.1. Introduction
This section describes the FTP/DX200 Collection Agent. This is an extension agent of the DigitalRoute®
MediationZone® platform.

10.7.1.1. Prerequisites
The reader of this information should be familiar with the:

• MediationZone® Platform

• VDS device in a DX200 switch

• Standard FTP (RFC 959, http://www.ietf.org/rfc/rfc0959.txt)

• SSH2 and SFTP (https://tools.ietf.org/html/draft-ietf-secsh-filexfer-03)

10.7.2. Overview
The FTP/DX200 collection agent collects data files from a DX200 network element, and inserts them
into a MediationZone® workflow, by using the FTP or SFTP protocol. To do this, the agent:

• Reads the storage control file TTSCOFyy.IMG, that specifies what to collect.

• Registers every file that has been successfully collected in the transaction control file
TTTCOFyy.IMG.

Note! By default, the agent will skip files if the sequential order has been lost, and files that
have been overwritten (reaching FULL state before being set to OPEN state) will not be collected.

However, by adding the properties mz.dx200.acceptsequentiallost and
mz.dx200.acceptoverwritten in executioncontext.xml you can change this
behavior.

See the System Administration Guide for further information about these properties.

10.7.3. Preparations
Prior to configuring a DX200 agent to use SFTP, consider the following preparation notes:

• Server Identification

• Attributes

• Authentication

• Server Keys

10.7.3.1. Server Identification


The DX200 agent uses a file with known host keys to validate the server identity during connection
setup. The location and naming of this file is managed through the property
mz.ssh.known_hosts_file, which is set in executioncontext.xml. The default value is
${mz.home}/etc/ssh/known_hosts.

The SSH implementation uses JCE (Java Cryptography Extension), which means that there may be
limitations on key sizes for your Java distribution. This is usually not a problem. However, there may
be some cases where the unlimited strength cryptography policy is needed, for instance if the host
RSA keys are larger than 2048 bits (depending on the SSH server configuration). This may require
that you update the Java Platform that runs the Execution Context.

For unlimited strength cryptography on the Oracle JRE, download the JCE Unlimited Strength
Jurisdiction Policy Files from http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
Replace the jar files in $JAVA_HOME/jre/lib/security with the files in this package. The OpenJDK
JRE does not require special handling of the JCE policy files for unlimited strength cryptography.
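As a sketch, assuming that the downloaded archive unpacks to a directory containing
local_policy.jar and US_export_policy.jar (the archive layout may differ between versions), the
replacement step could look like this:

$ unzip jce_policy-8.zip
$ cp UnlimitedJCEPolicyJDK8/local_policy.jar \
     UnlimitedJCEPolicyJDK8/US_export_policy.jar \
     $JAVA_HOME/jre/lib/security/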

10.7.3.2. Attributes
The DX200 agent supports the following SFTP algorithms:

blowfish-cbc, cast128-cbc, twofish192-cbc, twofish256-cbc, twofish128-cbc, aes128-cbc, aes256-cbc,
aes192-cbc, 3des-cbc.

10.7.3.3. Authentication
The DX200 agent supports authentication through either username/password or private key. Private
keys can optionally be protected by a key password. Most commonly used private key files can be
imported into MediationZone®.

Typical command line syntax (most systems):

ssh-keygen -t <keyType> -f <directoryPath>

keyType The type of key to be generated. Both RSA and DSA key types are supported.
directoryPath The directory in which you want to save the generated keys.


Example 68.

The private key may be created using the following command line:

> ssh-keygen -t rsa -f /tmp/keystore


Enter passphrase: xxxxxx
Enter same passphrase again: xxxxxx

Then the following is stated:

Your identification key has been saved in /tmp/keystore


Your public key has been saved in /tmp/keystore.pub

When the keys are created, the private key may be imported to the DX200 agent.

Finally, on the SFTP server host, append /tmp/keystore.pub to $HOME/.ssh/authorized_keys. If
$HOME/.ssh/authorized_keys does not exist, it must be created.
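One way to do this from a shell on the server host, creating the directory and file if they are missing
(paths as in the example above):

$ mkdir -p $HOME/.ssh && chmod 700 $HOME/.ssh
$ cat /tmp/keystore.pub >> $HOME/.ssh/authorized_keys
$ chmod 600 $HOME/.ssh/authorized_keys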

10.7.3.4. Server Keys


The SSH protocol uses host verification as protection against attacks where an attacker manages to
reroute the TCP connection from the correct server to another machine. Since the password is sent
directly over the encrypted connection, it is critical for security that an incorrect public key is not
accepted by the client.

The agent uses a file with the known hosts and keys. It will accept the key supplied by the server if
either of the following is fulfilled:

1. The host is previously unknown. In this case the public key will be registered in the file.

2. The host is known and the public key matches the old data.


3. The host is known but has a new key, and acceptance of the new key has been configured.
For further information, see the Advanced tab.

If the host key changes for some reason, the file will have to be removed (or edited) in order for the
new key to be accepted.
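As a sketch, on hosts where the OpenSSH tools are available, a single entry can instead be removed
with ssh-keygen; the path below assumes the default value of mz.ssh.known_hosts_file:

$ ssh-keygen -R <switch host> -f <mz.home>/etc/ssh/known_hosts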

10.7.4. Configuration
To configure the FTP/DX200 collection agent, in the Workflow configuration, either double-click on
the agent's icon, or right-click on the agent and then select the Configuration option in the popup
menu. The agent's configuration dialog box will then open. The dialog contains three tabs: Switch,
Advanced, and TTSCOF Settings.

10.7.4.1. Switch Tab

Figure 272. FTP/DX200 Collection Agent Configuration - Switch Tab

The Switch tab includes configuration settings that are related to the remote host and the directory
where the control files are located. In this tab you specify from which VDS device the control files
are retrieved, and the time zone location of the VDS device.

Host Name Enter the name of the Host or the IP address of the switch that is to be connected.
Transfer Protocol Choose transfer protocol.
Authenticate With Choice of authentication mechanism. Both password and private key
authentication are supported. When you select Private Key, a Select... button will appear,
which opens a window where the private key may be inserted. If the private
key is protected by a passphrase, the passphrase must be provided as well. For
further information about private keys, see Section 10.7.3.3, “Authentication”.
User Name Enter the name of the user from whose account on the remote Switch the FTP
session is created.
Password Enter the user password.
Root Directory Enter the physical path of the source directory on the remote Host, where the
control files are saved.
Switch Time Zone Select the time zone location. Timezone is used when updating the transaction
control file.
VDS Device No. Enter the network element device from where the control files are retrieved.


10.7.4.2. Advanced Tab

Figure 273. FTP/DX200 Collection Agent Configuration - Advanced Tab

The Advanced tab includes configuration settings that are related to more specific use of the FTP
service.

Prefix Enter the name prefix of the data files.


Number Positions Select the length of the number part in the data file name as follows:

• 01 for a two digit number

• 001 for a three digit number

• 0001 for a four digit number

For example: If you select 0001, data file number 99 will include the following four
digits: 0099.
Ends With VDS Device No. Select this check box to create data file names that end with the VDS
device No.
Server Port Enter the port number for the server to connect to, on the remote Switch.

Note! Make sure to update the Server Port when changing the
Transfer Protocol.

Number Of Retries Enter the number of attempts to reconnect after temporary communication
errors.
Retry Interval (ms) Enter the time interval, in milliseconds, between connection attempts.
Local Data Port Enter the local port number that the agent will listen to for incoming data
connections.

This port will be used when communication is established in Active Mode.


If the default value, zero, is not changed, the FTP server will negotiate which port
the data communication will be established on.
Active Mode (PORT) Select this check box to set the FTP connection mode to ACTIVE. Other-
wise, the mode is PASSIVE.


Transfer Type Select either Binary or ASCII transfer of the data files.

Note! Setting Transfer Type to the wrong type might corrupt the transferred data files.

FTP Command Trace Select this check box to generate a printout of the FTP commands and
responses. This printout is logged in the Event Area of the Workflow Monitor.

Use this option only to trace communication problems, as workflow performance might deteriorate.

10.7.4.3. TTSCOF Settings Tab

Figure 274. FTP/DX200 Collection Agent Configuration - TTSCOF Settings Tab

The TTSCOF Settings tab includes configuration settings that allow you to adjust the default settings
for the FTP/DX200 agent.

Collect when file is present on only one WDU With this setting you can select to allow files to be
collected even though they are only present on one WDU. This may be useful if one of the WDUs
cannot be reached for some reason. Default is No, which means that files will only be collected if
they are present on both WDUs.

Note! WDU is short for Winchester Drive Unit, and each VDS (Virtual Data Storage) has two WDUs.

WDU0 Path In this field you can specify the path to WDU0. This setting is optional.
WDU1 Path In this field you can specify the path to WDU1. This setting is optional.
Select default collection WDU Select the WDU you want to use as default in this list. WDU 1 is default.
Collect files with bit 5 In this list you can select if you only want to collect files where bit 5 IS
NOT set (Must not be set), or where bit 5 IS set (Must be set), or if you always want to collect
files regardless of whether bit 5 is set or not (May be set).
Collect files with bit 6 In this list you can select if you only want to collect files where bit 6 IS
NOT set (Must not be set), or where bit 6 IS set (Must be set), or if you always want to collect
files regardless of whether bit 6 is set or not (May be set).
Collect files with bit 7 In this list you can select if you only want to collect files where bit 7 IS
NOT set (Must not be set), or where bit 7 IS set (Must be set), or if you always want to collect
files regardless of whether bit 7 is set or not (May be set).

10.7.4.4. Transaction Behavior


This section includes information about the FTP/DX200 collection agent transaction behavior. For
information about the general MediationZone® transaction behavior, see Section 4.1.11.8,
“Transactions”.

10.7.4.4.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Will be emitted right before the first byte of each collected file is fed into a workflow.
End Batch Will be emitted just after the last byte of each collected file has been fed into the system.

10.7.4.4.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior, defined on workflow level, is configured to abort the
workflow, the agent will never receive the last Cancel Batch message. In this
situation ECS will not be involved, and the file will not be deleted.

10.7.4.5. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

10.7.4.6. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.

10.7.4.6.1. Publishes

MIM Parameter Description

Connection Retries This MIM parameter contains the number of re-connections, resulting from
connection problems, since the last Workflow activation.

Connection Retries is of the integer type and is defined as a batch MIM context type.

File Creation Timestamp This MIM parameter contains a time stamp that indicates when the file
has been created. The value originates from the Data Storage Control File and is expressed in local
time.

File Creation Timestamp is of the date type and is defined as a header MIM context type.

File Retrieval Timestamp This MIM parameter contains a time stamp that indicates when the file
transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source Filename This MIM parameter contains the name of the file that is currently being processed,
as defined at the source.

Source Filename is of the string type and is defined as a header MIM context type.

Source File Count This MIM parameter contains the number of files that were available to this
instance for collection at startup. The value is static throughout the execution of the workflow, even
if more files arrive during the execution. The new files will not be collected until the next execution.

Source File Count is of the long type and is defined as a global MIM context type.

Source Host This MIM parameter contains the IP address or hostname of the switch.

Source Host is of the string type and is defined as a global MIM context type.

Source Username This MIM parameter contains the login name.

Source Username is of the string type and is defined as a global MIM context type.

Source Pathname This MIM parameter contains the value of the directory where the control files will
be read.

Source Pathname is of the string type and is defined as a global MIM context type.

Storage Status Byte This MIM parameter contains the value of the last byte in the TTSCOF record.

Storage Status Byte is of the int type and is defined as a header MIM context type.

10.7.4.6.2. Accesses

The agent itself does not access any MIM resources.

10.7.4.7. Agent Message Events


An information message from the agent, generated according to the configuration made in the Event
Notification Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: name


Reported together with the name of the control file (TTTCOFxx.IMG) and data file that have been
collected and inserted into the workflow.

• File cancelled: name

Reported together with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; see Transaction Behavior, Cancel Batch.

10.7.4.8. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Command trace

A printout of the control channel trace. This is only valid if FTP command trace in the Advanced
tab is selected.

10.8. FTP/EWSD Agent


10.8.1. Introduction
This section describes the FTP/EWSD agent. This agent is an extension of the DigitalRoute®
MediationZone® platform.

10.8.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• The Siemens SAMAR OneFileView file handling

• Standard FTP (RFC 959, http://www.ietf.org/rfc/rfc0959.txt)

10.8.2. Overview
The FTP/EWSD agent enables collection of cyclic files from Siemens EWSD switches into the
MediationZone® workflow, by using the FTP protocol.

When the workflow is activated, the FTP/EWSD agent connects to the configured FTP service and
requests information about the cyclic file by using the LIST FTP command. The switch returns
information about the cyclic file. The agent compares the returned information field values COPY-DATA-
BEGIN and LAST-RELEASE with its internal state. If LAST-RELEASE has the same value that the
agent holds, the agent deletes the cyclic file with the FTP delete command DELE, and thereby releases
the currently established copy space.

The agent then retrieves the file contents with RETR and inserts it into the workflow. The RETR
command establishes a new copy space. When all the data is successfully collected into the workflow,
the agent generates another LIST command in order to retrieve the value of COPY-DATA-BEGIN
and of LAST-RELEASE. The agent saves these values and then generates the DELE command to
release the space of the collected data.
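Conceptually, and leaving out the internal state checks, the exchange resembles the following manual
FTP session; the file name is only an illustration, and in practice the agent issues the underlying
LIST, RETR and DELE commands itself:

ftp> ls CYCFILE             (LIST - returns COPY-DATA-BEGIN and LAST-RELEASE)
ftp> get CYCFILE local.copy (RETR - establishes a new copy space and transfers the data)
ftp> delete CYCFILE         (DELE - releases the space of the collected data)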


10.8.2.1. Multiple File View Option


In the Multiple File View mode, the cyclic file is sliced into sections (slices).

By checking Automatic Seq No Assignment you enable the agent to automatically set these sequence
numbers. The agent connects to the configured FTP service and searches from the ACTIVE file slice,
backwards, to the oldest file slice in the FILLED state. The agent then collects all the file slices, from
the oldest to the most recent file slice.

When collection is complete, the agent releases all the file slices and continues to collect the currently
ACTIVE file slice.

10.8.3. Configuration
You open the FTP/EWSD agent configuration view from the workflow editor. In the workflow template,
either double-click the agent icon, or right-click it and select Configuration.

10.8.3.1. Switch Tab


This tab contains remote host and source file parameters.

Figure 275. The FTP/EWSD Agent Configuration View - Switch Tab

Hostname Enter the host name or the IP-address of the switch that you want the agent
to connect to.
Username Enter the username of the account on the remote switch, to enable the FTP
session to login.
Password Enter the Username user's password.
Filename Enter the name of the cyclic file that the agent should collect.
Remote File is Cyclic Check to enable a transaction safe retrieval of data from the switch. This is
useful when you want to retrieve statistical switch data.
Multiple File View Check to enable sectioning and thereby a more effective data management
of the file. Clear to stay in single file view. For further information see
Section 10.8.2.1, “Multiple File View Option”.

10.8.3.2. Advanced Tab


This tab enables you to specify advanced use of the FTP service parameters.


Figure 276. The FTP/EWSD agent Configuration View - Advanced Tab

Command Port Enter a number between 1 and 65535 to define the port that the FTP service
will use to communicate with the agent from a remote switch.
Timeout (sec) Enter the maximum length of time, in seconds, that the agent should
wait for a reply after sending a command, before a timeout is called. 0 (zero)
means "wait forever".
Number of Retries The number of times to retry upon TCP communication failure. This is for
the FTP command channel only. An IO failure during the file transfer will
not trigger retries. If the value is set to 0, no retries will be done.
Delay (sec) Enter the length of the delay period between each connection attempt.
Local Data Port Enter the number of the port through which the agent should expect input
data connections (FTP PORT command). Enter 0 (zero) to have the operating
system select a random port number that is not currently being used,
according to the system specifications, for each data connection.

Enter a non-zero value to have the agent use the same local port for all the
data connections.

Value range: 0 - 65535.


Local Address Enter the IP-address of the local endpoint, used both for commands and
for data transmission channels.
Keep Alive Check to have the agent tell the system to perform a continuous test that
the remote host is running. This enables you to identify bad connections.

The keep-alive interval (idle time) is system dependent, and is displayed by the
following command on Sun Solaris (a corresponding Linux command is sketched
after this table):

# /usr/sbin/ndd /dev/tcp tcp_keepalive_interval

Default is 7200 000 (milliseconds).

Passive Mode (PASV) Check when using an FTP passive mode connection.

Currently, Siemens does not support this option. However, some firewalls
require passive mode.
Binary Transfer Check to enable binary transfer. Clear to enable ascii transfer.
FTP Command Trace Check to debug the communication with the remote switch. See a log of
the commands and responses in the workflow editor Event Area. The
LIST command results are traced as well.
Release File Slice After Retrieval Check to release all the file slices after successful retrieval. File
release is initiated by the FTP delete command.
Automatic Seq No Assignment Check to enable automatic numbering of the file slices in Multiple
File View mode. If this option is checked, the number management is done in SAMAR.
Don't Collect The Active File Slice Check to have the agent collect only file slices that are prior to
the active one.
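On Linux execution hosts, the corresponding system-wide keep-alive idle time can typically be read
with sysctl; note that this value is expressed in seconds rather than milliseconds:

$ sysctl net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_time = 7200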

10.8.4. Transaction Behavior


This section includes information about the FTP/EWSD agent transaction behavior. For further
information about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

10.8.4.1. Emits
The agent emits commands that change the state of the file that is currently being processed.

Command Description
Begin Batch Invoked right before the first byte of each file is collected by a workflow.
End Batch Invoked right after the last byte of each file is collected by a workflow.

10.8.4.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior, defined for a workflow, is configured to abort the
workflow, the agent will not receive the last Cancel Batch message. In such a
case ECS is not involved, and the established copy space is not deleted.


10.8.5. Introspection
This section includes information about the data type that the agent expects and delivers.

The FTP/EWSD agent generates bytearray types.

10.8.6. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.

10.8.6.1. Publishes

MIM Parameter Description

File Retrieval Timestamp This MIM parameter contains a timestamp that indicates when the file
transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source Filename A parameter that contains the name of the file that is currently being processed,
as defined at the source.

Source Filename is of the string type and is defined as a header MIM context type.

Source Host The IP-address or host name of the switch.

Source Host is of the string type and is defined as a header MIM context type.

Source Username The login name.

Source Username is of the string type and is defined as a header MIM context type.

CURR-FSIZE This parameter is assigned the value of the corresponding field of the LIST command
output.

CURR-FSIZE is of the string type and is defined as a header MIM context type.

10.8.6.2. Accesses
The agent itself does not access any MIM resources.

10.8.7. Agent Event Messages


Whenever an agent event occurs, the agent generates an information message according to your
configuration in the Event Notification Editor.

This section includes the event messages that can be configured for the FTP/EWSD agent.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: filename

Reported along with the name of the source file that has been collected and inserted into the workflow.

• File cancelled: filename


Reported along with the name of the current file, every time a Cancel Batch message is received.
If the workflow is aborted no such messages are received. For further information see Section 10.8.4,
“Transaction Behavior”.

10.8.8. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Command trace

A printout of the control channel trace. Valid only if FTP Command Trace is enabled in the Ad-
vanced tab.

10.9. FTP/NMSC Collection Agent


10.9.1. Introduction
This section describes the FTP/NMSC Collection Agent. This is an extension of the DigitalRoute®
MediationZone® platform.

10.9.1.1. Prerequisites
The reader of this information should be familiar with the:

• MediationZone® Platform

• Nokia SMS Center Billing Interface

• Nokia MMS Center Billing Interface

10.9.2. FTP/NMSC Collection Agents


The FTP/NMSC Collection Agent collects data files from an SMS/MMS Center and inserts them into
a MediationZone® workflow by using the FTP protocol. To do this, the agent reads and validates two
control files that keep track of the state of the data files. The agent collects data files according to the
specifications in the storage control file, and for every successfully collected data file it updates a
corresponding record in the transaction control file.

10.9.2.1. Configuration
To configure the FTP/NMSC collection agent, in the workflow editor either double-click the agent
icon, or right-click it and then select Configuration. The agent configuration dialog box opens.


Figure 277. FTP/NMSC Collection Agent Configuration - Switch Tab

The Switch tab consists of configuration settings that are related to the remote host and directory
where the control and data files are located, as well as the time zone location specification.

Host Name Enter the name of the Host or the IP-address of the switch that is to be connected.
User Name Enter the name of the user whose account on the remote Switch will enable the
FTP session to be created.
Password Enter the user password.
File Information Detailed specification about control files that are going to be collected by the
agent.
Data Type Select from the drop down list the data file format, either SMS or MMS, that
the agent should collect.
File Directory Enter the physical path to the source directory on the remote Host, where the
control and data files are saved.
Switch Time Zone Select the timezone location. Timezone is used when updating the transaction
control file.


Figure 278. FTP/NMSC Collection Agent Configuration - Advanced Tab

The Advanced tab includes configuration settings that are related to more specific use of the FTP
service.

File Name Detailed specification about the data file that is to be collected by the agent.
Prefix Enter the name prefix of the data files.
Number Positions Select the length of the number-part in the data file name as follows:

• 01 for a two digit number

• 001 for a three digit number

• 0001 for a four digit number

For example: If you select 0001, data file number 99 will include the following
four digits: 0099.
Settings Detailed specification of specific use of the FTP service.
Command Port Enter the port number for the FTP server to connect to, on the remote
Switch.
Local Data Port Enter the local port number that the agent will listen on for incoming data
connections.

This port will be used when communication is established in Active Mode.


If the default value, zero, is not changed, the FTP server will negotiate which
port the data communication will be established on.
Number Of Retries Enter the number of attempts to reconnect after temporary communication
errors.
Retry Interval Enter the time interval, in milliseconds, between connection attempts.
Active Mode (PORT) Check to set the FTP connection mode to ACTIVE. Otherwise, the mode
is PASSIVE.
Transfer Type Select either Binary or ASCII transfer of the data files.

Setting Transfer Type to the wrong type might corrupt the transferred data files.

FTP Command Trace Check to generate a printout of the FTP commands and responses. This
printout is logged in the Event Area of the Workflow Monitor.

Use this option only to trace communication problems, as workflow performance might deteriorate.
TTS Header Size This is the size of the header record in the TTS file. If the size is not
specified, the default value 9 will be used for MMS, and 8 for SMS.
TTT Header Size This is the size of the header record in the TTT file. If the size is not
specified, the default value 8 will be used for MMS, and 7 for SMS.

10.9.2.2. Transaction Behavior


10.9.2.2.1. Emits

Command Description
Begin Batch Will be emitted right before the first byte of each collected file is fed into a workflow.
End Batch Will be emitted right after the last byte of each collected file has been fed into the
system.
Cancel Batch Never.

10.9.2.2.2. Retrieves

Command Description
Begin Batch Nothing.
End Batch Nothing.
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior that is defined on workflow level is configured to
abort the workflow, the agent will never receive the last Cancel Batch message.
In such a case ECS will not be involved, and the file will not be deleted.

10.9.2.3. Introspection
The agent produces bytearray types.

10.9.2.4. Meta Information Model


10.9.2.4.1. Published MIMs

File Creation Timestamp A parameter that contains a time stamp that indicates when the file
has been created. The value originates from the Data Storage Control
File and is expressed in local time.

File Creation Timestamp is of the date type and is defined as a header MIM context type.

File Retrieval Timestamp A parameter that contains a time stamp that indicates when the file
transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source Filename A parameter that contains the name of the file that is currently being
processed, as defined at the source.

Source Filename is of the string type and is defined as a header MIM context type.

Source Host The IP-address or host name of the switch.
Source Username The name of the logged in user.
Source Pathname The value of the directory where the control files will be read.

10.9.2.4.2. Accessed MIMs

None.

10.9.2.5. Agent Message Events


• Ready with file: name

Reported together with the name of the file that has been collected and inserted into the workflow.

• File cancelled: name

Reported together with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; see Transaction Behavior, Cancel Batch.

• Command trace: trace

A printout of the control channel trace. This is only valid if FTP command trace in the Advanced
tab is selected.

10.10. GTP' Agent


10.10.1. Introduction
This section describes the GTP' agent. This is a real-time extension of the DigitalRoute®
MediationZone® platform.

10.10.1.1. Prerequisites
The reader of this information should be familiar with:

• MediationZone® Platform

• GPRS Tunneling Protocol (GTP) across the Gn and Gp Interface [3GPP TS 29.060 V4.2.0]:
http://www.3gpp.org/ftp/Specs/archive/29_series/29.060/29060-420.zip

• Call and event data for the Packet Switched (PS) domain [3GPP TS 32.015 V3.11.0]:
http://www.3gpp.org/ftp/Specs/archive/32_series/32.015/32015-3b0.zip

10.10.2. Overview
The GTP' Agent collects messages and datagrams of the GTP' charging protocol from GSN nodes.
By collecting this information the GTP' Agent enables MediationZone® to act as a Charging Gateway
device, providing Charging Gateway Functionality (CGF) within UMTS/GPRS networks.


Figure 279. MediationZone® supports UMTS/GPRS Charging Gateway Functionality

The GTP' agent awaits initialization from the GSN nodes of the types SGSN and GGSN. When initiated,
there are two protocols with which the agent can interact with the nodes:

• Transmission Control Protocol (TCP), and

• User Datagram Protocol (UDP)

A GTP' workflow can use both protocols by including two GTP' agents, one for each protocol.

In case of failure, the GTP' agent can be configured to notify the GSN nodes to route the incoming
data to another host. An alternative configuration is to set up a second and identical workflow, on a
separate MediationZone® Execution Context.

The agent counts the received requests and publishes those values as MIM values. Those MIM values
can also be viewed from the commandline tool with the wfcommand printcounters command.

10.10.2.1. Interaction Scenario


The following scheme demonstrates the message and data transfer between the GSN nodes and the
GTP' agent when using UDP:

1. When started, the GTP' agent sends a Node Alive Request message to all configured GSN
nodes.

2. The GTP' agent awaits a Node Alive Response and will transmit Node Alive Request
repeatedly, according to the Advanced tab settings. For further information see Section 10.10.3.3,
“Advanced Tab”.

3. After a successful Node Alive Response the GSN node starts to transmit Data Record Transfer
Requests to the agent. When safely collected, the agent replies with a Data Record Transfer Response.

4. When the workflow is stopped, the message Redirection Request is automatically sent to
all configured GSN nodes. The workflow will not stop immediately, but waits for a Redirection
Response from each of the GSN nodes. If the Max Wait for a Response (sec) value is exceeded,
the workflow stops, regardless of whether Redirection Response from the GSN nodes has
been received or not.

Note! When using TCP, the behavior is different. For further information see Section 10.10.9,
“Limitations - GTP' Transported Over TCP ”.


10.10.3. Configuration
You open the GTP' agent configuration view from the Workflow Editor either by double-clicking
the agent icon, or by right-clicking it and then selecting Configuration.

10.10.3.1. Source Tab


The Source tab includes connection type settings.

Figure 280. The GTP' Agent Configuration View - Source Tab

Protocol Enter the protocol type that you want the agent to use: Either TCP or UDP. For
further information see Section 10.10.9, “Limitations - GTP' Transported Over
TCP ”.
Port Enter the port through which the agent should await incoming data packages. This
port must be located on the host where the Execution Context is running.

Default value is 3386.

Note! Two workflows that are running on the same Execution Context,
can subscribe to the same Port and GSNs, if they use different Protocol
settings.

GSN IP Address The IPv4 or IPv6 addresses of the GSN nodes that provide the data.
Server Port Enter the server port number for each node. The agent will detect GGSN source
port changes via the Node Alive Request or Echo Request messages. If a change
is detected, it is registered in the System Log, and the agent's internal configuration
is updated.


10.10.3.2. Miscellaneous Tab

Figure 281. The GTP' Agent Configuration View - Miscellaneous Tab

The Miscellaneous tab includes collected format and storage settings of the data that is collected by
the agent.

Note! GTP' formats containing fields of the types IPv4 and IPv6 are supported.

Format Must match the Data Record Format in Data Record Packet IE.
This is applicable if the Packet Transfer Command is either Send Data
Record Packet or Send possibly duplicated Data Record
Packet.
Perform Format Version Check When checked, the Data Record Format Version in Data Record
Packet IE must be identical to the setting in Format Version.
Format Version Should match the Data Record Format version in Data Record
Packet IE. This is applicable if the Packet Transfer Command is Send
Data Record Packet or Send possibly duplicated Data
Record Packet.
Directory Enter either the relative pathname to the home directory of the user account,
or an absolute pathname of the target directory in the file system on the local
host, where the intermediate data for the collection is stored.

The intermediate data includes:

• Files that keep track of sequence numbers and restart values

• A directory called duplicates, where duplicates are saved.

Note! This directory must be attended to manually.


Note! When using several Execution Contexts, make sure that the file
system that contains the GTP' information is mounted on all ECs.

Acknowledgement from APL Check to enable acknowledgement from the APL module that follows
the GTP' agent. This way the GTP' agent will expect a feedback route from the APL module as well.
No GTP packets will be acknowledged before all the data that is emitted into the workflow is routed
back to the collector. Controlling acknowledgement from APL enables you to make sure that data is
transmitted completely.

Clear this check-box if you want the agent to acknowledge incoming packets before any data is
routed into the workflow.

No Private Extension Check to remove the private extension from any of the agent's output messages.

Use seq num of Cancel/Release req Check to change the type of the sequence numbers that are
populated in a Data Record Transfer Response to either Release Data Record
Packet, or Cancel Data Record Packet, in the Requests Responded field. Otherwise, the
agent applies the sequence number of the released, or cancelled, Data Record Packet.

10.10.3.3. Advanced Tab

Figure 282. The GTP' Agent Configuration view - Advanced Tab

Max Wait for a Response (sec) The maximum period during which the GTP' agent should expect a
Node Alive Response message.

If both this value and the Max Number of Request Attempts value are exceeded, a message appears
in the System Log.

For example: If Max Wait for a Response is 20 and Max Number of Request Attempts is 5, the
warning message will be logged after 100 seconds.

The value also indicates the maximum period during which the GTP' agent awaits a Redirection
Response. This period begins right after the agent releases a Redirection Request to the agents
that it is configured to receive data from.


Max Number of Request Attempts Enter the maximum number of attempts to perform in order to
receive a Node Alive Response and Redirection Response.
Max Outstanding Numbers Enter the maximum number of packages that you want kept in memory
for sequence number checking.
Max Drift Between Two Numbers Enter the maximum numbers that can be skipped between two
sequence numbers.
Clear Checking Check to avoid saving the last sequence number when the workflow has
been stopped.

Clear to have the agent save the sequence number of the last collected
package when the workflow is stopped. This way, as soon as the workflow
is restarted, a package with the subsequent number is expected and the
workflow continues processing from where it had stopped.
Agent Handles Duplicates Select this option to store duplicates in a persistent data directory that
you specify on the Miscellaneous tab. The packet will remain in the directory
until the agent receives a request to release or to cancel it.
Route Duplicates to Select this option to route duplicates to a link that you select from the
drop-down list.
Alternate Node Enter the IP-address of a host that runs an alternate Charging Gateway
device as a backup.

If you enter an IP address, the GTP' agent will include it in the redirection
request that it sends to the GSN.

Note! The GTP' agent does not back up any of the data that it manages.
Make sure that the GSN node takes care of backup.

10.10.3.4. Decoder Tab

Figure 283. The GTP' Agent Configuration View - Decoder Tab

Decoder Click Browse and select a pre-defined decoder. These decoders are defined in the
Ultra Format Editor, and are named according to the following syntax:

<decoder> (<module>)

The option MZ Format Tagged UDRs indicates that the expected UDRs are stored
in one of the built-in MediationZone® formats. If the compressed format is used,
the decoder will automatically detect this. Select this option to make the Tagged
UDR Type list accessible for configuration.
Tagged UDR Type Click Browse and select a pre-defined Tagged UDR Type. These UDR types are
stored in the Ultra and Code servers. The naming format is:

<internal>(<module>)

If the decoder is intended to reprocess UDRs of an internal format, the MZ Format
Tagged UDRs decoder has to be selected, enabling this list. Once enabled, the
internal format must be selected.
Full Decode • Check to enable full decoding of the UDR before it enters the workflow. This, in
turn, might reduce the performance rate.

• Clear to minimize decoding work in the agent. By clearing this check-box you
postpone decoding, and discovery of corrupt data, to a later phase in the workflow.

10.10.4. Introspection
The introspection is the type of data that an agent expects and delivers.

The agent produces UDR types in accordance with the Decoder tab settings.

10.10.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.

10.10.5.1. Publishes

MIM Value Description


Cancel Data Count (long) The number of received Cancel Data Record Packet re-
quests.
Data Record Count (long) The number of received Send Data Record Packet requests.
Duplicate Message Count (long) The number of received duplicates.
Last Request Timestamp (long) This MIM value contains the timestamp for the last received packet.
Message Error Count (long) The number of received erroneous messages.
Out of Sequence Count (long) The number of received records that are not in sequence.
Possible Duplicate Count (long) The number of received Send possibly duplicated Data
Record Packet requests.
Release Data Count (long) The number of received Release Data Record Packet re-
quests.

10.10.5.2. Accesses
The agent does not access any MIM parameters.

10.10.6. MZSH Commands


In case you want to see the counters that are published as MIM values, you can use the mzsh
wfcommand command. See the Commandline Tool User's Guide for further information about this.

10.10.7. Agent Message Events


None.

10.10.8. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.


You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

10.10.9. Limitations - GTP' Transported Over TCP


Node Alive Request and Redirection Request are not transmitted to GSN nodes when
using the TCP protocol.

10.11. HiCAP Agent


10.11.1. Introduction
This section describes the HiCAP agent. This is a real-time extension of the DigitalRoute®
MediationZone® platform.

Note! The HiCAP agent requires OS dependent third party software and it is currently supported on
RedHat Enterprise Linux only.

10.11.1.1. Prerequisites
The reader of this information should be familiar with:

• MediationZone® Platform

• Flexent/AUTOPLEX, Wireless Networks, High Capacity AMATPS API (Lucent Technologies,
Bell Labs Innovations, 401-615-004)

• UDR structure and contents

10.11.2. Overview
The HiCAP agent collects data from an Automatic Message Account Transmitter (AMAT) using the
High Capacity AMATPS (Automatic Message Account Teleprocessing) API over an RPC interface.

Figure 284. HiCAP Collection Overview

The data is collected via primary or secondary AMA data files. A primary AMA data file contains
AMA blocks not previously sent and acknowledged by the collector. A secondary AMA data file
contains AMA data previously sent by the AMAT with receipt acknowledged by the collector. Upon
activation, the collector binds to the pre-defined RPC port and waits for connections to be accepted.

For further information about the HiCAP AMATPS API, see the Flexent/AUTOPLEX, Wireless
Networks, High Capacity AMATPS API documentation.

The UDRs generated by the HiCAP agent include decoded connection information. The original data
from the AMAT is stored in a bytearray that you can decode in an Analysis agent, e g by using the
APL function udrDecode. Alternatively, you can route the bytearray to a batch workflow that performs
the decoding.

Figure 285. HiCAP Agent UDRs

Figure 286. Workflow with HiCAP Agent

10.11.3. Configuration
You open the HiCAP agent configuration view from the Workflow Editor either by double-clicking
the agent icon, or by right-clicking it and then selecting Configuration.

10.11.3.1. Connection Tab


The Connection tab contains configuration settings related to the AMAT and AMATPS RPC server.


Figure 287. The HiCAP Agent Configuration View - Connection Tab

Host The hostname or IP-address of the AMATPS RPC server.


DCF The name or path of the Device Configuration File.
Timeout (s) The transaction timeout in seconds.
Password The collector password.
Sensor Id The switch office id.
Sensor Type The switch office type.
Sending Unit The AMAT Sending Unit Number.
Connect Session Retries Number of retries before the reset server function is called.
Connect Sleep Time (ms) Sleep time before retrying to connect.
Files to Test Number of iterations for the test connection function when the HiCAP
agent is trying to establish a connection.
Reset Server Password AMATPS reset password. An alphanumeric value.
Reset Server Sleep Time (s) Sleep time while the AMATPS server restarts, before retrying to connect.

10.11.3.2. File Polling Tab


The File Polling tab includes settings for type and interval of polling.


Figure 288. The HiCAP Agent Configuration View - File Polling Tab

Poll Function Sets the collector to poll for either Primary or Secondary data. For inform-
ation about Primary and Secondary data, see the Flexent/AUTOPLEX,
Wireless Networks, High Capacity AMATPS API documentation.
Poll Sleep Time (ms) The interval between polls for Primary or Secondary data.
Remove Invalid Trailing Blocks The header in the AMA files contains the number of data blocks.
Select this checkbox to remove any trailing blocks that exceed the number of
blocks specified in the header.
Starting Optional setting of start block in sequence. This setting is available when the
Poll Function is set to Secondary. For information about block sequence
numbers, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity
AMATPS API documentation.
Ending Optional setting of end block in sequence. This setting is available for secondary
AMA data blocks. For information about block sequence numbers, see
the Flexent/AUTOPLEX, Wireless Networks, High Capacity AMATPS API
documentation.

10.11.3.3. Trace Tab


The Trace tab includes settings for controlling the logging that is performed on the AMAT server.


Figure 289. The HiCAP Agent Configuration View - Trace Tab

Class Specifies the type of logging that shall be performed. For information about classes
and log messages, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity
AMATPS API documentation.
Level Specifies the level on which trace messages are recorded for the selected set of
classes. For information about trace levels, see the Flexent/AUTOPLEX, Wireless
Networks, High Capacity AMATPS API documentation.
File Prefix The trace function appends each message to a primary log file named
TRACE_DIR/<file_prefix>.pid on the AMAT. For information about
trace files, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity
AMATPS API documentation.
File Size The maximum file size in bytes of the primary file. For information about trace
files, see the Flexent/AUTOPLEX, Wireless Networks, High Capacity AMATPS
API documentation.
Enable Debug Log Select this checkbox to enable debugging of the AMATPS RPC Client of the
HiCAP agent. The debug information is stored in the Execution Context log.

10.11.4. Introspection
The introspection is the type of data that an agent expects and delivers.

Depending on the settings in the File Polling tab, the agent may produce one of the following UDR
types:

• PrimaryUDR (HiCAP)

• SecondaryUDR (HiCAP)

10.11.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.


The agent does not publish nor access any MIM parameters.

10.11.6. Agent Message Events


There are no agent message events for this agent.

10.11.7. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

10.12. HTTPD Agent


10.12.1. Introduction
This section describes the HTTPD agent. This is a real-time extension agent of the DigitalRoute®
MediationZone® platform.

It also includes APL functions for connecting as a client to an external HTTP server.

10.12.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• Hypertext Transfer Protocol version 1.1

(RFC 2616: http://www.ietf.org/rfc/rfc2616.txt)

10.12.2. HTTPD Agent


10.12.2.1. Overview
The HTTPD agent (in combination with Analysis or Aggregation agents) can act as a web server,
receiving requests and sending responses on a TCP/IP connection. The requests are turned into UDRs,
using the standard Hypertext Transfer Protocol, and inserted into a MediationZone® workflow.

When a workflow acting as a web server is started, the HTTPD agent opens a port for listening and
awaits a request. The workflow remains active until manually stopped. In addition, the agent offers
the possibility to use an encrypted communication setup through SSL.


Note! To fully support HTTP pipelining, you must add the property ec.httpd.ordered.response
with value true in the executioncontext.xml file. If this property is set to
true, responses will be guaranteed to be sent in the same order as the pipelined requests were
received.

To ensure that a request is not blocking responses from being sent for too long, the Server
Timeout (sec) should be configured. If a response is not sent for a request within the specified
time, the response for the next request will be sent.

This property should not be set unless support for pipelining is required!

Setting this property to true will also have some effect on the performance since the requests
will be cached until the responses have been sent.

10.12.2.2. Configuration
The HTTPD agent configuration window is displayed when double-clicking on the HTTPD agent in
a workflow, or right-clicking and selecting Configuration...

10.12.2.2.1. HTTPD Tab

Figure 290. HTTPD Collection Agent configuration window, HTTPD tab.

Use SSL If enabled, the communication channels will be encrypted.

Keystore A keystore file containing the server certificate to be used with SSL. Enter
the full path to a keystore file on the local or mounted disk on the execution
host, or select a keystore file, available in any bundle, through the code server.
The latter alternative is preferable if the disks on the execution host normally
cannot be reached for updates. One way of creating such a keystore file is
sketched after this table.
Password The password for the keystore file.
Local Address The local address that the server will bind to. If the field is left empty, the
server will bind to the default address.
Port The port the server will listen to. The default port for non-encrypted
communication is 80 and for encrypted communication 443.
Content Type The UDR Type, extending HttpdUDR, that the collector will emit. Please refer
to Section 10.12.2.3.1, “Format” for an example.
Client Timeout (sec) Number of seconds a client can be idle while sending the request, before
the connection is closed. If the timeout is set to 0 (zero) no timeout will
occur. This is not recommended. Default value is 10.
Server Timeout (sec) The number of seconds before the server closes a request and a 500 Server
Error is sent back to the client. If the timeout is set to 0 (zero) no timeout
will occur. Default value is 0.
Character Encoding List of encoding options to use for handling of responses.
GZIP Compression Level Regulates compression data size. Valid levels are from 1-9, where 9 is
slowest but provides optimal compression.
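If no server certificate is available yet, a keystore with a self-signed certificate can for example be
generated with the Java keytool; the alias, path, and password below are only placeholders, and a
CA-signed certificate is normally preferred in production:

$ keytool -genkeypair -alias httpd -keyalg RSA -keysize 2048 \
    -validity 365 -keystore /opt/mz/keys/httpd.jks -storepass changeit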

10.12.2.3. The HTTPD UDR Type


The UDR type created in the HTTPD agent can be viewed in the UDR Internal Format Browser.
To open the browser right-click in the editing area of an APL Editor and select UDR Assistance....
The browser opens.

10.12.2.3.1. Format

The built-in HTTP format definition must be extended prior to usage of the HTTPD format.

To extend the HTTP format:

1. Open the Ultra Format Editor by clicking the New Configuration button in the upper left part
of the MediationZone® Desktop window, and then selecting Ultra Format from the menu.

2. Enter:

internal MYHTTPD: extends_class ("com.digitalroute.wfc.http.HttpdUDR")
{
    // Additional fields (if required).
};

The following fields are included in the built-in HTTPD format:

Field Description
accept(string) Media types that are acceptable in the response, e g
text/plain. This field is included in the request header.
acceptEncoding(string) Restricts the content encodings that are acceptable in the
response, e g gzip. This field is included in the request
header.
clientHost(string) The host.
content(string) The content itself.
contentEncoding(string) Additional content encodings that have been applied to the
entity-body, e g gzip. This field is included in the response
header.
contentLength(int) The length of the content in bytes.
contentType(string) The type of the content, e g "image/gif".
errorMessage(string) This field is populated by the HTTPD agent when an error
occurs. It is not included in the request or response.
query(string) The query.
redirectURL(string) In case you want to redirect a request, this field should
contain the URL to which you want to redirect.

requestMethod(string) The request method, e g GET, POST.
response(string) The response which will be returned to the requesting user.
responseBinary(bytearray) The response in binary format.
responseStatusCode(string) HTTP header response code.
responseType(string) The type of the response, e g "text/html".
userAgent(string) Shows user information, e g browser type and browser
version used.

Additional fields may be entered. This is useful mainly for transportation of variable values to
subsequent agents.

3. Save your Ultra by clicking on the Save button and entering the name of the Ultra.

10.12.2.4. Introspection
The introspection is the type of data an agent expects and delivers.

The agent consumes and produces UDR types extended with the built-in HTTP format.

10.12.2.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

10.12.2.6. Agent Message Events


There are no message events for this agent.

10.12.2.7. Debug Events


There are no debug events for this agent.

10.12.3. APL Functions


10.12.3.1. HTTP Client Functions
The client functions are used to exchange data with an HTTP/HTTPS server. There are specific functions
for GET and POST as well as functions for general HTTP requests. Either plain text or encrypted
communication can be used and basic authentication is supported.

For information about HTTP client functions that are available in APL, see the APL Reference Guide.

10.12.3.2. HTTP Server Functions


You use the server functions to manage requests (UDRs) from the HTTPD collector agent.

For information about HTTP server functions that are available in APL, see the APL Reference Guide.


10.13. HTTP Batch Agent


10.13.1. Introduction
This section describes the HTTP Batch agent.

MediationZone® supports both HTTP and HTTPS. The filename of the file to be collected must be
known before collection.

10.13.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• HTTP/HTTPS protocol

• UDR structure and contents

10.13.2. HTTP Batch Collection Agent


The HTTP Batch Collection agent collects either from a single URL or, in Index Based Collection
mode, from all linked-to URLs found in an HTML-formatted document (Anchor HREF attributes).

The agent will download the files from the web server as a byte stream and route the content of the
file into the workflow in parts of up to 32768 bytes.

10.13.2.1. Configuration
The HTTP Batch Collection agent configuration window is displayed when you double-click the agent
in a workflow, or right-click it and select Configuration...

10.13.2.1.1. Connection

Figure 291. HTTP Batch Collection agent configuration window, Connection Tab

URL The URL of the file that will be collected; the full URL to a file must be given.

If the collected file contains any links to other pages, these will only be followed if
Index Based Collection is checked. Refer to Enable Index Based Collection in
the Section 10.13.2.1.2, “Source” tab.

Username HTTP authorization username used in requests.

Password HTTP authorization password used in requests.


10.13.2.1.2. Source

Figure 292. HTTP Batch Collection agent configuration window, Source Tab

Compression Select if the agent should try to decompress the data collected before routing it
into the workflow. The options are 'No Compression' and 'Gzip'.

If Enable Index Based Collection is selected, only the links in the given
URL will be decompressed upon collection.

Enable Index Based Collection Select to enable Index Based Collection. All linked-to URLs found in the HTML-formatted document will be collected. The URL is specified in the URL field in the Section 10.13.2.1.1, “Connection” tab.
URL Pattern Either leave empty or enter a regular expression that filters the full URL. If the field is empty, all files are collected; otherwise only files matching the URL Pattern will be collected.

The URL itself will not be routed into the workflow.
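
For example, to collect only links that point to files ending with .dat, a pattern such as .*\.dat$ could be used; the pattern shown is only an illustration.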


Enable Control File Based Collection When selected, the agent will only collect files for which a control file is present. The expected control file is defined with the Position and Control File Extension settings.
Position The control filename consists of an extension added either before or after the shared filename part. There are two choices: Prefix or Suffix. Refer to Example 69, “Control File Extensions” for more information.
Control File Extension The Control File Extension is used to define when the data file should be collected. A data file will only be collected if the corresponding control file exists.

The text entered in this field is the expected extension to the shared filename. The Control File Extension will be attached to the shared filename depending on the setting made in the Position field. Refer to Example 69, “Control File Extensions” for more information.
Data File Extension The Data File Extension is an optional field that is used when a stricter definition of the files to be collected is needed. Refer to Example 69, “Control File Extensions” for more information. It is only applicable if Position is set to Suffix.


Example 69. Control File Extensions

Consider a directory containing 5 files:

• FILE1.dat

• FILE2.dat

• FILE1.ok

• ok.FILE1

• FILE1

1. The Position field is set to Prefix and the Control File Extension
field is set to .ok.

The control file is ok.FILE1 and FILE1 will be the file collected.

2. The Position field is set to Suffix and the Control File Extension
field is set to .ok.

The control file is FILE1.ok and FILE1 will be the file collected.

3. The Position field is set to Suffix and the Control File Extension
field is set to .ok and the Data File Extension field is set to .dat.

The control file is FILE1.ok and FILE1.dat will be the file col-
lected.

Enable HTTP DELETE Selecting this will instruct the web server to delete the file and the control file after the file has been successfully collected. If unchecked, the file will be ignored after collection, that is, the file will be left on the web server.

10.13.2.1.3. Advanced

Figure 293. HTTP Batch Collection agent configuration window, Advanced Tab

Keystore Name of the keystore file that has been imported and will be used by the agent.

Select Import Keystore and select the file to be used by the agent.
Keystore Password Password to be used on the selected keystore file.
Read Timeout (ms) The maximum time, in milliseconds, to wait for a response from the server. 0 (zero) means to wait forever.


10.13.2.1.4. Duplicate Check

Figure 294. HTTP Batch Collection agent configuration window, Duplicate Check Tab

The Duplicate Check feature is only used when Enable Index Based Collection, found in the Section 10.13.2.1.2, “Source” tab, is enabled.

Enable Duplicate Check When selected, the agent will store every collected URL for a (configurable) number of days. The storage will be checked to make sure that no URL is collected again as long as it remains in the storage.
Database Profile Each collected URL will be stored in the database defined in the selected profile. The schema must contain a table called "duplicate_check". For more information about this table, refer to Section 10.13.3, “Appendix”.
Max Cache Age (Days) The number of days to keep collected URLs in the database. When the workflow starts, it will delete entries that are older than this number of days.

If a duplicate-check workflow runs on more than one Execution Context on separate servers, and the system clocks are not synchronized, there is a risk that UDR duplicates are prematurely deleted. For example: if two system clocks are 12 hours apart and Max Cache Age is set to 1 day, duplicate UDRs might be deleted after only 12 hours, instead of 24.

10.13.2.2. Transaction Behavior


10.13.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch The agent will emit beginBatch before the first content of the file is routed into the
workflow. The agent will also use the ECS batch service and route the data to it.
End Batch The agent will emit endBatch after the final part of the file has been routed into the
workflow.

10.13.2.2.2. Retrieves

The agent retrieves commands from other agents and, based on them, generates a state change of the file currently processed.

Command Description
Hint End Batch When hintEndBatch is called the agent will call endBatch as soon as the current data
block has been routed from the agent. If more data is available from the web server
the agent will call beginBatch and then continue to process the rest of the file.

Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior defined on workflow level is configured to abort the workflow, the agent will never receive the last Cancel Batch message. In this situation ECS will not be involved, and the file will not be moved.

APL code where hintEndBatch is followed by a cancelBatch will always result in a workflow abort. To avoid this, design the APL code to evaluate the Cancel Batch criteria first.
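
The sketch below illustrates this ordering in an Analysis agent. The exact signatures of cancelBatch and hintEndBatch are documented in the APL Reference Guide; the cancel condition and the block count used here are illustrative assumptions only.

int blockCount = 0;

consume {
    blockCount = blockCount + 1;

    // Evaluate the Cancel Batch criteria first (placeholder condition).
    if (input == null) {
        cancelBatch("Empty block received");
    } else {
        udrRoute(input);
        // Only after the cancel criteria have been evaluated is a batch
        // split requested, here after every 1000 routed blocks.
        if (blockCount == 1000) {
            hintEndBatch();
            blockCount = 0;
        }
    }
}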

10.13.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

10.13.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

10.13.2.4.1. Publishes

MIM Parameter Description


Source URL This MIM parameter contains the full absolute URL to the resource that
is being collected.
Source Filename This MIM parameter contains the filename from the URL Path-part. In
case the URL points to a directory (a path that ends with a slash) then an
empty string is returned.
Source File Count If Index Based Collection is enabled this MIM parameter contains the
number of files that will be collected from the index, otherwise it returns
1.
Source Files Left This MIM parameter contains the number of files remaining to be collected.
File Retrieval Timestamp This MIM parameter contains the time when the HTTP file retrieval started.

10.13.2.4.2. Accesses

The agent does not access any MIM resources.

10.13.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Notification Editor.

Ready with file Reported, along with the name of the URL, when the file is collected
and inserted into the workflow.
Failed to collect file Reported, along with the name of the URL, when the file failed to
be collected.
URL cancelled Reported, along with the name of the current URL, when a cancelBatch message is received. This assumes the workflow is not aborted; refer to Section 10.13.2.2, “Transaction Behavior” for further information.


For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

10.13.2.6. Debug Events


There are no debug events for this agent.

10.13.3. Appendix
10.13.3.1. Database Requirements for Duplicate Check
The Duplicate Check feature stores the collected URLs in an external database pointed out by a Database
profile. The schema of this database must contain a table definition that matches the needs of the agent.

10.13.3.1.1. Table and Column Names

The schema table name must be "duplicate_check". It must contain all the columns from this table:

Table column Description


txn The transaction id of the batch that collected the URL (in case the file is split into several chunks using hintEndBatch, this is the last and final transaction id).
tstamp The timestamp when the URL was committed by the workflow.
workflow_key A uniquely identifying id of the workflow collecting the URL. It allows workflows
to be renamed without changing the table data.
url The full absolute URL collected.

10.13.3.1.2. Column Types

The column types are defined by how the specific JDBC driver converts JDBC types to the database.

• The txn column is a JDBC VARCHAR.

• The tstamp column is a JDBC TIMESTAMP type.

• The workflow_key and url columns are of JDBC VARCHAR type.

10.13.3.1.3. Oracle Example

-- Table definition usable for ORACLE

CREATE TABLE duplicate_check(
    txn          long,
    tstamp       timestamp,
    workflow_key varchar2(32),
    url          varchar2(256)
);

10.14. IBM MQ Agent


10.14.1. Introduction
This section describes the IBM MQ Collection agent and the IBM MQ APL commands, which are part of the IBM MQ extension package available on the DigitalRoute® MediationZone® Platform.


10.14.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• IBM WebSphere MQ:

For information about IBM WebSphere MQ, see http://www-03.ibm.com/software/products/us/en/wmq/.

10.14.2. IBM MQ Collection agent


10.14.2.1. Overview
The IBM MQ Collection agent acts as a client to an IBM WebSphere MQ Queue Manager. It collects messages
from a defined number of message queues, topics and durable subscriptions, and routes the data as
UDRs to a real-time workflow.

10.14.2.1.1. Connection

At startup, a connection towards a Queue Manager is set up to listen to a number of queues, topics or durable subscriptions. This can be configured either directly in the IBM MQ Collection agent or be set dynamically within an Analysis agent.

If the agent fails to connect to all configured queues, topics or durable subscriptions, the workflow will abort.

10.14.2.1.2. Message Queues

Message queues are used for storing messages in the WebSphere MQ Server. The messages consist of two parts: the binary data used by the application and the delivery information handled by the Queue Manager. The Queue Manager provides a logical container for message queues and is responsible for transferring the data between local and remote queues.

The IBM MQ agent will read the messages in the configured local message queues, and the data of each message will be transferred as a UDR into the workflow. Depending on the agent's configuration, the Queue Manager will either remove the message from the queue directly or wait until it has been processed.

New messages can also be sent to the Queue Manager with the IBM MQ APL commands.

10.14.2.1.3. Topics and Durable Subscriptions

As opposed to point-to-point communication, IBM WebSphere MQ offers the possibility to publish and subscribe to topics. Neither the publisher nor the subscriber needs to know where the other party is located. All interaction between publishers and subscribers is controlled by the Queue Manager.

The IBM MQ agent acts as a subscriber and will register with the Queue Manager which topics or durable subscriptions to listen to. The Queue Manager will then examine every incoming publication and place matching messages on the subscriber's queue, from which they will be read by the IBM MQ agent and transferred as UDRs into the workflow.

10.14.2.2. Preparations
The following jar files are required by the IBM MQ Collection agent:

com.ibm.mq.jar
com.ibm.mq.jmqi.jar
connector.jar
com.ibm.mq.headers.jar
com.ibm.mq.commonservices.jar

The classpath for the jar files is specified in the executioncontext.xml file for each Execution Context. For example:

<classpath path="/opt/mqm/java/lib/com.ibm.mq.jar"/>

After the classpath has been set, the jar files should be manually distributed so that they are in place when the Execution Context is started.

10.14.2.3. Configuration
The IBM MQ Collection agent configuration window is displayed when double-clicking on the agent
in a workflow, or right-clicking on the agent and selecting Configuration...

Depending on the selected Connection Mode, different configuration fields are available.

10.14.2.3.1. Common fields

The following fields are available for all Connection Modes.

Dynamic Initialization When this option is set, the configuration of the IBM MQ Collection agent
will not be made from the configuration window. The agent will instead
send a connection UDR to an Analysis agent which will populate the UDR
and send it back to the IBM MQ Collection agent. See Section 10.14.3.1,
“Connection UDRs” for more information regarding the connection UDRs.
MQ Host The host name of the queue manager host.
Port The port for the queue manager.
Channel The name of the MQ channel.
Queue Manager The name of the queue manager.
Connection Mode Select which Connection Mode the agent should use. The possible values
are Queues, Topics and Durable Subscriptions

10.14.2.3.2. Fields for Queues

The following fields are available if Queues is selected as Connection Mode.

Auto Remove This check box is only available if you have selected Queues as Connection Mode.

Select this check box if the message should be removed from the queue without requiring that the MQMessage UDR is routed back to the agent.
Queues List the queues that the agent should listen to.

10.14.2.3.3. Fields for Topics

The following field is available if Topics is selected as Connection Mode.

Topics List the topics that the agent should listen to.

10.14.2.3.4. Fields for Durable Subscriptions

The following field is available if Durable Subscriptions is selected as Connection Mode.

Durable Subscriptions List the subscriptions that the agent should listen to.

10.14.2.4. Introspection
If the IBM MQ Collection agent is configured to read connection parameters dynamically, it will deliver and expect a connection UDR during initialization. Depending on the configuration, the connection UDR can be of the following types:

• MQConnectionInfo if Queues has been selected as Connection Mode

• MQConnectionInfoTopic if Topics has been selected as Connection Mode

• MQConnectionInfoDurableTopic if Durable Subscriptions has been selected as Connection Mode

If the IBM MQ agent is configured for Queues, messages are delivered as MQMessage UDRs, while
Topics and Durable Subscriptions will deliver messages as MQMessageTopic and the agent expects
the same UDR type back.

All UDRs are described in Section 10.14.3, “IBM MQ UDRs”.

10.14.3. IBM MQ UDRs


The IBM MQ UDRs are designed to handle the connection towards the MQ message queues and the
messages that are read and written.

If the agent is using dynamic initialization, the connection UDRs are used for setting up the connection.

The MQMessage and MQMessageTopic UDR types are used for handling the messages.

APL commands are used for producing outgoing messages and the UDR types used for this are
MQQueueManagerInfo, MQQueue and MQMessage.

10.14.3.1. Connection UDRs


If the agent is configured for dynamic initialization, a connection UDR is sent to the Analysis agent at
startup. The Analysis agent populates the UDR and routes it back to the IBM MQ Collection agent.
The content of the connection UDR will then be used to configure the agent.

10.14.3.1.1. MQConnectionInfo

If the connection mode is set to Queues, MQConnectionInfo will be used as the connection UDR.

The following fields are included in the MQConnectionInfo UDR:


Field Description
ChannelName (string) The name of the MQ channel.
Host (string) The host name of the queue manager host.
Port (integer) The port for the queue manager.
Properties (map A map of optional properties to be set, for
<any,any>)(optional) example, user name.
QueueManager (string) The name of the queue manager.
Queues (list <string>) A list of queues to listen to.

10.14.3.1.2. MQConnectionInfoTopic

If the connection mode is set to Topics, MQConnectionInfoTopic will be used as the connection
UDR.

The following fields are included in the MQConnectionInfoTopic UDR:

Field Description
ChannelName (string) The name of the MQ channel.
Host (string) The host name of the queue manager host.
Port (integer) The port for the queue manager.
Properties (map A map of optional properties to be set, for
<any,any>)(optional) example, user name.
QueueManager (string) The name of the queue manager.
TopicNames (list <string>) A list of topics to subscribe for.

10.14.3.1.3. MQConnectionInfoDurableTopic

If the connection mode is set to Durable Subscriptions, MQConnectionInfoDurableTopic will be used as the connection UDR.

The following fields are included in the MQConnectionInfoDurableTopic UDR:

Field Description
ChannelName (string) The name of the MQ channel.
DurableSubscriptions (list A list of subscriptions to listen to.
<string>)
Host (string) The host name of the queue manager host.
Port (integer) The port for the queue manager.
Properties (map A map of optional properties to be set, for
<any,any>)(optional) example, user name.
QueueManager (string) The name of the queue manager.

10.14.3.2. MQMessage
For each message in the MQ message queue, a UDR is created and sent into the workflow. When the
IBM MQ agent receives the MQMessage in return it will remove the message from the queue.

The following fields are included in the MQMessage UDR:

Field Description
CorrelationID (bytearray) This ID can be used for correlating messages that are related in some way or another, e g requests and answers. The length of this field will always be 24, meaning that fillers will be added to IDs that are shorter, and IDs that are longer will be cut off.
Id (bytearray) The message id.
Message (bytearray) The message.
Persistent (boolean) If set to "true", the message will be sent as
a persistent message, otherwise the queue
default persistence will be used.
ReplyToQueue (string) The name of the queue to reply to.
ReplyToQueueManager (string) The name of the queue manager to reply
to.
SourceQueueName (string) The name of the source queue.

10.14.3.3. MQMessageTopic
For each topic message, a UDR is created and sent into the workflow.

The following fields are included in the MQMessageTopic UDR:

Field Description
DataMessage (bytearray) The message id.
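
As a hedged sketch (assuming an Analysis agent with a route back to the IBM MQ Collection agent), topic messages can be handled in the same way as the queue messages in the examples in Section 10.14.8:

consume {
    if (instanceOf(input, mq.MQMessageTopic)) {
        mq.MQMessageTopic msg = (mq.MQMessageTopic) input;
        // Process the topic message here, then route it back, since the
        // agent expects the same UDR type in return (see Section 10.14.2.4).
        udrRoute(msg);
    }
}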

10.14.3.4. MQQueue
The MQQueue UDR is a reference to an IBM MQ queue when using APL commands. The UDR is
created by the mqConnect function and all fields are read-only.

The following fields are included in the MQQueue UDR:

Field Description
CurrentDepth (integer) The number of messages currently in the
queue.
ErrorDescription (string) A textual description of an error.
IsError (boolean) Returns true if the UDR contains an error
message.
IsOpen (boolean) Returns true if the connection was success-
fully opened.
MaxDepth (integer) The maximum number of messages al-
lowed in the queue.
MqError (string) The error code provided by IBM MQ when a connection attempt fails or when an error related to the mqPut or mqClose commands occurs.
QueueManager (string) The name of the queue manager.
QueueName (string) The name of the queue to connect to.


10.14.3.5. MQQueueManagerInfo
The MQQueueManagerInfo UDR type is used by the APL functions when establishing a connection
towards a queue on the Queue Manager for outgoing messages.

The following fields are included in the MQQueueManagerInfo UDR:

Field Description
ChannelName (string) The name of the MQ channel.
Host (string) The host name of the queue manager host.
Port (integer) The port for the queue manager.
Properties (map<any,any>) A map of optional properties to be set, for
example, user name.
QueueManager (string) The name of the queue manager.

10.14.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

10.14.5. Agent Message Events


There are no agent message events for this agent.

10.14.6. Agent Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

Depending on the selected Connection Mode, different debug events are sent.

10.14.6.1. Common Debug Events


• Connecting to queue manager queueManagerName and queues: queue_1, queue_2

Reported before connecting.

• Connection to queue manager successful.

Reported after a connection is made.

• Lost connection to the queue manager queueManagerName.

Reported if the queue manager has not been opened by the IBM MQ Collection agent.

• Unable to connect to IBM WebSphere MQ

Reported when a queue manager connection attempt fails.

10.14.6.2. Debug Events for Queues


• Connected to queue queueName.

Reported after a successful connection to a queue from the IBM MQ Collection agent.

• Unable to open queue queueName [errorCode]

Reported when a queue cannot be opened.

10.14.6.3. Debug Events for Topics


• Successfully subscribed to topic topicName.

Reported after a successful connection to a topic subscription.

• Unable to subscribe to topic topicName.

Reported when the MQ Collection Agent failed to subscribe to a topic.

• Unable to subscribe to any topic.

Reported when the MQ Collection Agent failed to subscribe to all configured topics.

• Failed to close the connection to topic topicName.

Reported when the MQ Collection Agent has failed to unsubscribe to a topic.

• Error while getting a message from topic topicName. [errorCode]

Reported if an error occurred when fetching a topic message.

10.14.6.4. Debug Events for Durable Subscriptions


• Successfully subscribed to subscription subscriptionName.

Reported after a successful connection to a durable subscription.

• Unable to subscribe to subscription subscriptionName.

Reported when the MQ Collection Agent failed to connect to a durable subscription.

• Unable to subscribe to any subscription.

Reported when the MQ Collection Agent failed to connect to all configured durable subscriptions.

10.14.7. IBM MQ APL functions


10.14.7.1. Overview
IBM MQ APL functions are used to send data to an IBM WebSphere MQ queue.

10.14.7.2. mqConnect
This function will open a connection to a queue and queue manager.

MQQueue mqConnect(MQQueueManagerInfo info, string queueName)

Parameters:

info The information needed to connect to the queue manager, given as an MQQueueManagerInfo UDR. For further information about the MQQueueManagerInfo UDR type, see Section 10.14.3.5, “MQQueueManagerInfo”
queueName The name of the queue

Returns Returns an MQQueue UDR. For further information about the MQQueue UDR type
see Section 10.14.3.4, “MQQueue”

Note! If there is no available queue status for some reason, the MaxDepth and CurrentDepth
fields will be assigned the value "-1" and the mqConnect function will still be able to connect.

10.14.7.3. mqPut
This function will put a message on a queue.

If the function fails, it will populate the ErrorDescription field with a description and set
isError to true. If the error was generated from an MQ exception it will also update the MqError
field in the MQQueue UDR.

string mqPut(MQQueue queue, MQMessage message)

Parameters:

queue The MQQueue UDR that is the result from the mqConnect function. For further information about the MQQueue UDR type see Section 10.14.3.4, “MQQueue”
message The message to add to the queue. For further information about the MQMessage UDR
type see Section 10.14.3.2, “MQMessage”
Returns Returns null if the function was successful and an error message if it failed.

10.14.7.4. mqClose
This function closes the connection to the queue manager.

If the function fails, it will populate the ErrorDescription field with a description and set
isError to true. If the error was generated from an MQ exception it will also update the MqError
field in the MQQueue UDR.

string mqClose(MQQueue queue)

Parameters:

queue The MQQueue UDR that is the result from the mqConnect function. For further information
about the MQQueue UDR type see Section 10.14.3.4, “MQQueue”
Returns Returns null if the function was successful and an error message if it failed.

10.14.7.5. mqStatus
This function will query the queue for MaxDepth and CurrentDepth and populate the corresponding
fields in the MQQueue UDR.

If the function fails, it will populate the ErrorDescription field with a description and set
isError to true. If the error was generated from an MQ exception it will also update the MqError
field in the MQQueue UDR.

string mqStatus(MQQueue queue)

Parameters:

queue The MQQueue UDR that is the result from the mqConnect function. For further information
about the MQQueue UDR type see Section 10.14.3.4, “MQQueue”
Returns Returns an error message describing the problem, or null if the function was successful.


10.14.8. Examples
In an IBM MQ Collection agent workflow there are four different UDRs created and sent. This requires
one or more agents containing APL code (Analysis or Aggregation) to be part of the workflow.

10.14.8.1. Collect and Process IBM MQ Message


If "dynamic initialization" has been configured, a configRequest, including an empty MQConnec-
tionInfo UDR, is sent to an Analysis agent at startup. The Analysis agent populates the UDR with
configuration info and returns it to the IBM MQ Collection agent.

For each collected message, the IBM MQ Collection agent sends an InputMessage including an MQMessage UDR to the Analysis agent. If "Auto Remove" has been configured, the message is removed from the queue and a remMessage is sent to the agent.

Figure 295. Realtime workflow example - a message is collected from the IBM MQ Collection
agent

10.14.8.1.1. Dynamic Initialization

The following APL code example shows how to use dynamic initialization.

consume {
if (instanceOf(input, mq.MQConnectionInfo)) {
mq.MQConnectionInfo info = (mq.MQConnectionInfo) input;
info.Host = "mymqhost";
info.Port = 1415;
info.ChannelName = "CHANNEL2";
info.QueueManager = "mgr2.queue.manager";

list<string> queues = listCreate(string);


listAdd(queues,"Q1.QUEUE");
listAdd(queues,"Q2.QUEUE");

info.Queues = queues;
udrRoute(info);
}
}


10.14.8.1.2. Process the MQ Message

The following APL code example shows how to process the IBM MQ message and remove it from
the queue.

consume {
if (instanceOf(input, mq.MQMessage)) {
mq.MQMessage msg = (mq.MQMessage) input;
//Process the MQ Message
handleResponse(msg);
//Remove the message from the queue
udrRoute(msg, "remMessage");
}
}

10.14.8.2. Send messages to the IBM MQ Queue Manager


In the following figure the workflow forwards a message to an IBM MQ queue manager.

Figure 296. Workflow example - a message is sent to a queue manager

The following APL code example shows how to send an IBM MQ message to a queue.

mq.MQQueue queue;
initialize {
mq.MQQueueManagerInfo conUDR = udrCreate(mq.MQQueueManagerInfo);
conUDR.ChannelName = "CHANNEL1";
conUDR.Host = "10.46.100.86";
conUDR.Port = 1414;
conUDR.QueueManager = "mgr1.queue.manager";
queue = mqConnect(conUDR, "Q1.QUEUE");
}

consume {
mqStatus(queue);
debug("Queue Depth: "+queue.CurrentDepth);
debug("Queue Max Depth: "+queue.MaxDepth);
mq.MQMessage msg = udrCreate(mq.MQMessage);
msg.Message = input;
mqPut(queue, msg);
}

deinitialize {
mqClose(queue);
}


10.15. Netflow Agent


10.15.1. Introduction
This section describes the NetFlow agent. This is an extension agent of the DigitalRoute® MediationZone® Platform.

10.15.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• The Cisco NetFlow export formats, see Cisco's web site for further information

10.15.1.2. Terminology
For information about Terms and Abbreviations used in this document, see the Terminology document.

10.15.2. NetFlow Agent


10.15.2.1. Overview
The NetFlow agent gathers traffic data from one or many Cisco routers. NetFlow data contains information, such as source and destination IP address and down- and uploaded bytes, which is commonly used for statistical purposes.

Each router can potentially be identified through several IP addresses (interfaces) and, if so, it may send UDP packets on any of these interfaces to the agent. The agent offers the possibility of mapping all these IP addresses into one, which makes it possible to detect that the packets originated from the same router.

Figure 297. Example of NetFlow network.

When activated, the agent will connect to the configured port and start listening for incoming packets
from the routers. Each received packet will be unpacked into one or several flow records. Based on
the information in the flow record, the agent will create and populate one of the standard NetFlow
UDR types available in MediationZone® and forward the UDR into the workflow. If the agent fails
to unpack or read a packet or flow record, the record will silently be discarded.

Since Cisco routers do not offer the possibility of re-requesting historic data, the agent will lose all
data delivered from the router while the agent is not active.

Note! The real-time job queue may fill up, in which case a warning will be raised in the System
Log stating that the job queue is full. Records arriving to a full queue will be thrown away. A
message in the System Log will state when the queue status is back to normal.


10.15.2.1.1. NetFlow Related UDR Types

The UDR types created by default in the NetFlow agent can be viewed in the UDR Internal Format
Browser in the netflow folder. To open the browser, open an APL Editor, right-click in the editing area, and select the UDR Assistance... option in the pop-up menu.

10.15.2.2. Configuration
The NetFlow agent configuration window is displayed when double-clicking on the agent, or right-
clicking on the agent and selecting Configuration...

10.15.2.2.1. Connection Tab

Figure 298. NetFlow agent configuration window, Connection tab.

Port The port number where the NetFlow agent will listen for packets from the routers.

Note! Since the routers will be configured to communicate with a specific host on this port, it is important that the workflow containing the NetFlow agent is configured to execute on that specific host and not on a random host.

Two NetFlow agents may not be configured to listen on the same port on
the same host.

Only from Predefined Hosts If enabled, the agent will only accept packets from hosts specified in the Interface Mapping tab. Data from other hosts will be discarded.

If disabled, all arriving data will be accepted. This may be suitable if a combination of routers is used, where the majority of the routers send from only one interface (IP address) each and only some are set up according to the Interface Mapping tab. When this option is disabled, the one-interface routers do not have to be added to the interface mapping list.
Warn on Sequence Gap Determines if a warning will be raised in the System Log when the sequence number gap between two sequential PDUs from the same router is equal to or larger than specified in the Minimum Gap.
Minimum Gap The minimum sequence number gap between two flow records that will cause a warning in the System Log.


10.15.2.2.2. UDR Type Tab

Figure 299. NetFlow agent configuration window, UDR Type tab.

UDR Type UDR Types expected to be delivered to this agent. If other types arrive, the NetFlow
agent will abort.

10.15.2.2.3. Interface Mapping Tab

Figure 300. NetFlow agent configuration window, Interface Mapping tab.

Maps several interface IP addresses to one main IP address. Each router using more than one interface IP address when sending data to the agent must be registered here. One of the IP addresses supported by the router must be registered as the Main IP Address. The others are configured in the IP Address list.

If a packet arrives from an IP address configured in the IP Address list, it will be mapped to the corresponding Main IP Address. This way it will appear as if all packets originate from the same IP address.

Main IP Address Each router that supports multiple interfaces must have one address added to this list. When an existing row is selected, the content in the IP Address table will reflect the slave IP addresses for the selected Main IP Address.
IP Addresses Additional IP addresses mapped to their corresponding main IP address by the
agent.

10.15.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

Depending on the incoming flow records, the agent may produce one of the following UDR types.
Their names reflect the NetFlow versions:

• V1UDR (netflow)

• V5UDR (netflow)

• V7UDR (netflow)

• V8ASMatrixUDR (netflow)

• V8DestinationPrefixMatrixUDR (netflow)

• V8PrefixMatrixUDR (netflow)

• V8ProtocolPortMatrixUDR (netflow)

• V8SourcePrefixMatrixUDR (netflow)

• V9UDR (netflow)

10.15.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

10.15.2.4.1. Publishes

MIM Parameter Description


Incoming PDUs This MIM parameter contains the number of received packets.

Incoming PDUs is of the long type and is defined as a global MIM context
type.

10.15.2.4.2. Accesses

The agent does not access any MIM parameters.

10.15.2.5. Agent Message Events


There are no message events for this agent.

10.15.2.6. Debug Events


There are no debug events for this agent.

10.15.3. Netflow V9 considerations


10.15.3.1. The Netflow agent
The V9 UDR format is template based, where the template provides a description of the fields that
will be present in the UDRs. See the section NetFlow Version 9 Flow-Record Format on Cisco's web
site for detailed information about the format.

The Netflow agent does not itself detect templates, map incoming data to the corresponding template, or create UDRs of the incoming data. This functionality must be implemented in APL, as described in Section 10.15.3.2, “Workflow Design for V9UDR”. The agent forwards the Netflow data to the workflow in the rawData field.

10.15.3.2. Workflow Design for V9UDR


When using Netflow with the V9UDR format, the workflow design must handle certain functions.

10.15.3.2.1. Dynamic format

Since the V9UDR format is dynamic, the workflow may not have access to the template when the first
UDRs arrive, or the template may have changed and not yet been sent to the workflow.


For this reason, it is recommended to let the real-time workflow with the Netflow collection agent(s) forward the UDRs via Inter Workflow or Workflow Bridge agents to a batch workflow that stores them on disk.

A third workflow may then collect, decode and aggregate the UDRs.
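
As a hedged illustration of the first step, the Analysis agent in the real-time collection workflow can pass the V9UDRs on towards the forwarding agent without decoding them; the route name used below is an assumption.

consume {
    if (instanceOf(input, netflow.V9UDR)) {
        // The rawData field is decoded later, in the batch workflow,
        // once the matching template is known.
        udrRoute(input, "toInterWorkflow");
    }
}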

10.15.3.2.2. Decoding and aggregation

In order to decode the UDRs, you first have to decode the template, and this has to be done by defining
an Ultra format for the template. The template should then be sent to an Aggregation agent to start a
session, which will correlate all the UDRs that use the template. An Ultra defining the aggregation
session handling will also have to be created.

Since the aggregation has to be based on a template-specific field, the templates have to be routed one at a time to the Aggregation agent.

The APL code in the Aggregation agent will then have to handle the decoding of the actual UDRs.

10.16. Nokia IACC Agent


10.16.1. Introduction
This section describes the NokiaIACC agent. This is an extension agent of the DigitalRoute® MediationZone® Platform.

10.16.1.1. Prerequisites
The reader of this document should be familiar with:

• The MediationZone® Platform

• Nokia network elements MMSC/SMSC (IACC Server)

10.16.2. Nokia IACC Agent


The Nokia IACC agent in combination with Analysis agents receives requests and sends responses
over a CORBA connection. The requests are turned into UDRs by the NokiaIACC agent and are inserted
into a MediationZone® workflow.

When a workflow is activated, the NokiaIACC agent opens a port and waits for requests. The workflow
remains active until manually stopped.

10.16.2.1. CORBA Naming Service


The Nokia IACC agent is dependent on a CORBA naming service - the CORBA COS (Common Object
Services) Naming Service. For an application to use the COS Naming Service, its ORB must know
the port of the naming service and the host it is running on.

The orbd supplied with the JDK can be used. The Object Request Broker Daemon orbd is used as
the naming service to enable clients to transparently locate and invoke objects on the Nokia IACC
agent in the CORBA environment.

The ORBInitialPort argument is a required argument for orbd, and is used to set the port number
on which the Naming Service will run.

When orbd starts up, it also starts a Naming Service. A Naming Service is a CORBA service that allows CORBA objects to be named by means of binding a name to an object reference. The name binding may be stored in the naming service, and a client may supply the name to obtain the desired object reference.

When using Solaris software, the root user must be used in order to start a process on a port under 1024. For this reason, it is recommended to use a port number greater than or equal to 1024. A different port can be substituted if necessary.

To start orbd from a UNIX command shell:

$JAVA_HOME/bin/orbd -ORBInitialPort 1050&

From an MS-DOS system prompt (Windows):

start orbd -ORBInitialPort 1050

10.16.2.2. IACC-Methods vs. UDRs


The NokiaIACC agent supplies three IACC-methods to the workflow: hasCredit2, commitReservation2 and cancelReservation2.

The fourth IACC-method, isServerUp, is not supplied to the workflow. This method returns a string to the calling client if the agent is up, and it does not have a UDR data representation.

10.16.2.2.1. IACC_UDR

The NokiaIACC agent retrieves data via the IACC-method call and produces one type of UDR; the
IACC_UDR. The IACC UDR contains a request and a response field. These two fields are of
type Subscribers_UDR.

Figure 301. IACC_UDR contains a request and a response field.

10.16.2.2.2. Subscribers_UDR Types

The Subscribers_UDR can in turn be of type hasCredit_UDR, commitReservation_UDR or cancelReservation_UDR. The hasCredit_UDR has an additional field: operation.

Figure 302. Subscribers_UDR can be of type hasCredit2_UDR, commitReservation2_UDR or cancelReservation2_UDR.

10.16.2.2.3. Subscribers_UDR

Each of these IACC-method UDRs has a list of Subscriber UDRs called subscribers. This list needs to be created in the APL code using listCreate. The Subscriber UDR has a list of OneAttribute_UDRs called attributes. This list needs to be created in the APL code using listCreate.

Figure 303. Subscribers_UDR has a list of Subscriber_UDRs that has a list of OneAttribute_UDRs.

10.16.2.2.4. NokiaIACC Related UDR Types

The UDR types created by default in the NokiaIACC agent can be viewed in the UDR Internal Format
Browser in the IACC folder. To open the browser, open an APL Editor, right-click in the editing area and select UDR Assistance...; the browser opens.

10.16.2.3. Configuration
The NokiaIACC agent configuration window is displayed when the agent in a workflow is double-
clicked, or right-clicked selecting Configuration...

10.16.2.3.1. Nokia IACC Tab

Figure 304. Nokia IACC agent configuration window, Nokia IACC tab.

Host The host defines the IP-address or hostname where the Naming Service is to be found.
Port The port defines the port to be used for the Nameserver.
Name The service name the agent is to be connected with.
Timeout The maximum time to wait for an answer in seconds. If timeout is zero, however, then
real-time is not taken into consideration and the agent simply waits until notified with
an answer.
Server Host The Server Host defines the IP-address or hostname where the Nokia IACC server is
running. If the Server Host is empty, the local host will be used.

For the communication with the Nokia IACC agent to work, each Nokia network element needs to be
configured with the same Naming Service host, port and name.

If one of the configuration fields is incorrectly populated, the workflow will abort with a communication
failure.


10.16.2.4. Transaction Behavior


10.16.2.4.1. Emits

None.

10.16.2.4.2. Retrieves

None.

10.16.2.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

10.16.2.6. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Notification Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Timeout on a response.

The timeout occurs when an expected answer is not received within 1000 milliseconds.

• When a UDR is sent back

The message is sent when a response UDR is sent back via CORBA.

10.16.2.7. Debug Message


Debug messages are dispatched when debug is used. During execution, the messages are shown in the Workflow Monitor and can also be stated according to the configuration done in the Event Notification Editor.

For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.

10.16.3. An Example
Here is an example of what a workflow design using the Nokia IACC agent could look like. A workflow
containing a Nokia IACC agent can be set up to receive requests and send responses. This requires an
Analysis agent to be part of the workflow.

Figure 305. An example workflow with a Nokia IACC agent sending an updated IACC_UDR
back to the source.

To keep the example as simple as possible, the valid records are not processed. To illustrate how the
workflow is defined, an example is given where an incoming UDR is validated, resulting in the field
response being updated and sent back as a reply to the source. Usually, no reply is sent back until
the UDRs are fully validated and processed. The example aims to focus on the request and response
handling only.


10.16.3.1. NokiaIACC Connection


The Nokia IACC agent will allow multiple client connections at the same time. The example workflow
could be modified by connecting several clients to the Nokia IACC agent through the CORBA Naming
Service, if preferred.

10.16.3.1.1. NokiaIACC Collection Agent

Drag and drop a Nokia_IACC collection agent into the workflow. To be able to receive and send requests and responses, the Nokia IACC agent needs to connect with the CORBA Naming Service. Therefore
the configuration window of the Nokia IACC agent needs to be updated with the appropriate values
for the Host, Port and service Name.

10.16.3.1.2. The Analysis Agent

The Analysis agent handles both the validation of the incoming request and the sending of the response.

Connect an Analysis agent to the Nokia IACC agent. Drag and release in the opposite direction to
create a response route in the workflow.

Note the use of the instanceOf function. This is to verify the type of the request and to be able to handle it accordingly. This example assumes a request to the hasCredit2 method. Therefore, this will be the only response populated and sent back with an updated response field.

consume {
if(instanceOf(input.request, HasCredit2)){

// Create and populate a response


list<Subscriber> sList = listCreate(Subscriber);
Subscriber sub = udrCreate(Subscriber);
list<OneAttribute> aList = listCreate(OneAttribute);
OneAttribute one = udrCreate(OneAttribute);
one.level = 1;
one.name = "hasCredit name1";
one.value = "hasCredit value1";
listAdd(aList, one);
OneAttribute two = udrCreate(OneAttribute);
two.level = 2;
two.name = "hasCredit name2";
two.value = "hasCredit value2";
listAdd(aList, two);
sub.attributes = aList;
listAdd(sList, sub);
input.response.subscribers = sList;

//Send a response to the network element


udrRoute(input);

} else if (instanceOf(input.request, CancelReservation2)){


debug("a CancelReservation");
} else if(instanceOf(input.request, CommitReservation2)){
debug("a CommitReservation");
}
}


10.17. Merge Files Agent


10.17.1. Introduction
This section describes the Merge Files agent, part of the DigitalRoute® MediationZone® Platform.

10.17.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

10.17.2. Merge Files Collection Agent


The Merge Files collection agent collects files from a local file system and inserts them into a MediationZone® workflow. Initially, the agent will scan the base directory for all sub directories matching the Sub Directory regular expression. The agent will collect the files matching the filename regular expression. In addition, the Sort Order may be used to sort the matched files on a per sub directory basis. The files found will then be inserted into a CollectedFileUDR and routed into the workflow.

When a file has been successfully processed, the agent offers the possibility of moving, renaming, removing or ignoring the original file. The agent can also be configured to keep files for a set number of days. When all files in the batch are successfully processed, the agent stops, awaiting the next activation, scheduled or manually initiated.

When Force Single UDR on the Merge Files tab is checked, the agent will try to read the complete
file into one UDR. The agent will however only be able to handle files with a file size that is smaller
than Integer.MAX_VALUE. While reading a file, if an exception such as OutOfMemoryError
or ArrayIndexOutOfBounds occurs, the workflow aborts and a message is logged indicating the
name of the file that caused the exception. For information about the Integer.MAX_VALUE type
see the Java documentation.

10.17.2.1. Configuration
The Merge Files collection agent configuration window is displayed when the node in a workflow is
double-clicked, or right-clicked, selecting Configuration.... Parts of the configuration may be done
in the Sort Order tab. For further information, see Section 4.1.6.2.3, “Sort Order Tab”.

10.17.2.1.1. Merge Files Tab

The Merge Files tab contains configurations related to the location and handling of the source files
collected by the agent.


Figure 306. Merge Files Collection agent configuration, Merge Files tab.

Base Directory Pathname of the source base directory on the local file system of the Execution Context, where the source files reside.
Filename Name of the source files collected from the sub directory. Regular expressions according to Java syntax apply. For further information, see

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Example 70.

To match all log filenames beginning with INF, type: ^INF.*

Sub Directory Name of the sub directory from where files will be collected (the Base Directory will always be a match).
Compression Compression type of the source files. Determines if the agent will decompress the files before passing them on into the workflow.

• No Compression - agent does not decompress the files. Default setting.

• Gzip - agent decompresses the files using gzip.

File Limit The maximum number of files processed in each batch.


Byte Limit The maximum number of bytes processed in each batch.

Note that limits are set per directory, that is, the batch will be closed when the last
file of a sub directory has been processed even if the File Limit or Byte Limit
closing condition has not been reached.

Move Before Collecting If enabled, the source files are moved to the automatically created subdirectory DR_TMP_DIR under the directory from which they originated, prior to collection. This option supports safe collection of source files.
Inactive Source Warning (hours) If the specified value is greater than zero, and if no file has been collected during the specified number of hours, the following message is logged:

The source has been idle for more than <n> hours, the last inserted file is <file>.

Move to If enabled, the files will after collection be moved to the sub directory specified in the Directory field. If Move Before Collecting is selected, the file will be moved from the DR_TMP_DIR directory to a sub directory relative to the file's original location.

The fields Prefix, Suffix and Keep (days) will be enabled when Move to is selected. Information about them follows below.
Rename If enabled, the source files will after the collection be renamed and kept in the source
directory from which they were collected. If using Move Before Collecting the files will
after the renaming be moved from the DR_TMP_DIR directory back to the original location.

If Rename is enabled, the source files will be renamed in the current directory
(source or DR_TMP_DIR). Make sure that the new name does not match the regular
expression or the file will be collected over and over again.

Remove If enabled, the source files will, after successfully being processed, be removed from the
source directory (or from the DR_TMP_DIR directory if Move Before Collecting is
used).
Ignore If enabled, the source files will remain in the source directory.
Directory Pathname relative to the current position of a file where the source files will be moved.

This field is only enabled if Move to is selected.


Prefix/Suffix Prefix and/or suffix that will be appended to the beginning and the end, respectively, of the name of the source files.

These fields are only enabled if Move to or Rename is selected.


Keep (days) Number of days to keep source files after the collection. In order to delete the source files, the workflow has to be executed (scheduled or manually) again, after the configured number of days.

Note, a date tag is added to the filename, determining when the file may be removed. This
field is only enabled if Move to or Rename is selected.

After each successful execution of the workflow the agent will search recursively under
Base Directory for files to remove.
Force Single UDR If this is disabled, the output files will automatically be divided into multiple UDRs per file. The output files will be divided into suitable block sizes.

10.17.2.2. Transaction Behavior


10.17.2.2.1. Emits

Begin Batch Emitted right before the first file in a Merged File group batch.

End Batch Emitted when the last file of a sub directory has been processed or when a Merge
Closing Condition is reached.

10.17.2.2.2. Retrieves

Cancel Batch In case a cancelBatch is generated, all files in that merged set will be canceled as
one batch. Depending on the workflow configuration, the batch (consisting of several
input files) will either be stored in ECS or the workflow will abort and the files belonging to the batch will be left untouched.
Hint End Batch If a Hint End Batch message is received, the collector splits the batch after the current
file has been processed.

After a batch split, the collector emits an End Batch message, followed by a Begin
Batch message (provided that there is data in the subsequent block).

10.17.2.3. Introspection
The agent produces CollectedFileUDR types.

10.17.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

10.17.2.4.1. Publishes

MIM Parameters Description


Source File Count This MIM parameter contains the number of files, available to this instance
for collection at startup. The value is constant throughout the execution of the
workflow, even if more files arrive during the execution. The new files will
not be collected until the next execution.

Source File Count is of the long type and is defined as a global MIM
context type.
Batch Retrieval Timestamp This MIM parameter contains a timestamp, indicating when the batch was read in the beginBatch block.

Batch Retrieval Timestamp is of the date type and is defined as a header MIM context type.
Base Directory This MIM parameter contains the name of the Base directory from which the agent locates files. This is defined in the agent configuration.

Base Directory is of the string type and is defined as a global MIM context type.
Source Files This MIM parameter contains a list of all source files that will be included and processed in the upcoming batch, that is, the file content will be set to null.

Source Files is of the list<CollectedFileUDR> type and is defined as a header MIM context type.
Source Files Left This MIM parameter contains the number of source files that are yet to be
collected. This is the number that appears in the Execution Manager backlog.

Source Files Left is of the long type and is defined as a header MIM
context type.


10.17.2.4.2. Accesses

The agent does not access any MIM parameters.

For a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

10.17.2.5. Agent Message Events


None.

10.17.2.6. Debug Events


There are no debug events for this agent.

10.17.2.7. CollectedFileUDR
The agent produces and routes CollectedFileUDR types with a structure described in the following
example.

Example 71.

CollectedFileUDR:

internal CollectedFileUDR {
    string fileName;
    string baseDirectoryPath;
    string subDirectory;     // relative to base directory
    int sizeOnDisk;          // will differ if file was compressed
    boolean wasCompressed;   // true if file was decompressed on collection
    date fileModifiedTimestamp;
    int fileIndex;           // index number within the current merged batch, starts with 1
    bytearray content;
    boolean isLastPartial;   // true if last UDR of the input file
    int partialNumber;       // sequence number of the UDR within the file: 1 for first, 2 for second, and so on
};
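
As a hedged sketch, an Analysis agent downstream of the Merge Files collector could inspect these fields as follows; the debug output and the routing are illustrative assumptions.

consume {
    if (instanceOf(input, CollectedFileUDR)) {
        CollectedFileUDR file = (CollectedFileUDR) input;
        debug("Block " + file.partialNumber + " of file " + file.fileName);
        if (file.isLastPartial) {
            debug("Last block of " + file.fileName + " received");
        }
        udrRoute(file);
    }
}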

10.18. Latency Statistics


10.18.1. Introduction
This section describes the real-time agent Latency Statistics, an extension agent of the DigitalRoute®
MediationZone® Platform.

10.18.1.1. Prerequisites
The reader of this information has to be familiar with:

• The MediationZone® Platform

• UDR structure and contents

• Analysis Programming Language

10.18.1.2. Supplementary Documentation


The Analysis Programming Language description and syntax is listed in the MediationZone® APL
Reference Guide.

The Ultra Format Definition Language is described in the MediationZone® Ultra Reference Guide.

10.18.2. Latency Statistics Agent


The Latency Statistics agent is a collection agent used in realtime workflows to collect latency statistics
on data processing in the workflow. To collect latency statistics the user must configure corresponding
APL code in the Analysis agent.

The latency information is presented as an agent-specific UDR type. Depending on customer-specific needs, the information can be converted in a number of different ways.

10.18.2.1. Overview
The Latency Statistics agent collects latency information on workflow level. Only one independent
latency measurement agent can be used per workflow since there is only one histogram measurement
collection point per workflow.

The latency collector agent starts latency measurements in initialize.

Note: Latency measurement is enabled in a workflow that contains a Latency collector agent.

10.18.2.2. Configuration
The Latency Statistics agent configuration window is displayed when the agent in a workflow is right-
clicked selecting Configuration... or double-clicked.

Figure 307. Latency Statistics Agent, Latency Statistics tab.

Granularity (ms) Specifies the size, in milliseconds, of the increments used when measuring latency.
Default value is 5 milliseconds.

Bucket Count Specifies the number of buckets used to record latency history. Each bucket is an
increment of granularity. The maximum latency recorded will be Granularity x
Number of buckets. Default value is 400.

Timeout (ms) The number of milliseconds before MediationZone® assumes that a response
to a latency request will not arrive. Default value is 5000 milliseconds.

Duration (s) The interval, in seconds, at which the agent emits all latency histograms. The
agent emits a LatencyHistogramList UDR on each of its output routes at the
end of every interval. Default value is 10 seconds.
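
With the default values, for example, the maximum latency that can be recorded in a bucket is
Granularity x Bucket Count = 5 ms x 400 = 2000 ms. Measurements that take longer than this, but
that are completed before the 5000 ms timeout, are counted in the outsideBucketCount field of the
resulting LatencyHistogram UDR.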


10.18.2.3. Transaction Behavior


10.18.2.3.1. Emits

The agent does not emit any commands.

10.18.2.3.2. Retrieves

The agent does not retrieve any commands.

10.18.2.4. Introspection
The agent emits UDRs of LatencyHistogramList type.

10.18.2.5. Meta Information Model


This agent does not publish or access any MIM resources.

APL offers the possibility of both publishing and accessing MIM resources and values. For a
list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

10.18.2.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

10.18.3. Latency Related APL Functions


If the time arguments (startTime, stopTime) are not specified, the current time according to the local
JVM will be used.

10.18.3.1. latencyStart
Starts a latency measurement and (unless it already exists) creates its associated histogram.

any latencyStart
( string key1,
string key2
[,long startTime] //Optional
);

Parameters:

key1 Primary measurement identifier. This denotes the end-points between which to measure
the latency. It is an arbitrary string such as “CCR_CCA” (which could indicate that
the latency between reception of a CCR and response of CCA is measured).

key2 Traffic case discriminator. Arbitrary string to further classify the histogram. Key2
combined with key1 identifies a unique set of buckets that can be added together to a
latency histogram. This key could be constructed from usage data in the request, for
example source, event type, etc. A null value indicates no further classification.

startTime The time when the latency measurement should begin. If the startTime parameter
is not specified, the value returned by the APL function dateNanoseconds() on entry
is assigned as the startTime.


Returns statID, a unique identifier that is used as an argument in the latencyStop function.

Note: If the Latency Statistics agent is not present in the workflow, the function always
returns null.

10.18.3.2. latencyStop
The function stops a latency timer. When the latency value has been determined, it will increment the
corresponding bucket in the appropriate latency histogram (identified by key1 and key2 of latencyStart).

long latencyStop
( any statID,
[long stopTime] //Optional
);

Parameters:

statID A unique identifier returned from the latencyStart function.

stopTime The time when the latency measurement is ended.

Returns:

• >= 0: Recorded latency in nanoseconds.

• -1: ID mismatch, the statID was not found. Either the argument was not
returned from latencyStart, or the timeout value of the Latency Collector was
exceeded. The timeout is measured from the point where latencyStart was called.

• -2: Negative latency encountered. This happens if an explicit startTime used in
latencyStart causes negative latency, or if a stopTime is stated.

• -3: Latency not enabled.

10.18.3.3. latencyAdd
This function combines latencyStart and latencyStop in a single call.

long latencyAdd
( string key1,
string key2,
long startTime
[, long stopTime] //Optional
);

Parameters:

key1 Primary measurement identifier.

key2 Traffic case discriminator.

startTime The time when the latency measurement should begin.

Note that this parameter is not optional.

stopTime The time when the latency measurement is ended.

Returns:

• >= 0: Recorded latency in nanoseconds.

• -2: Negative latency encountered. This only happens if an explicit startTime
used in latencyAdd causes negative latency.

• -3: Latency not enabled.


10.18.3.4. isLatencyEnabled
boolean isLatencyEnabled;

Returns True, if latency measurements are enabled.
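
The following is a minimal sketch of how these functions could be used from the consume block of
an Analysis agent in a real-time workflow. The key values are arbitrary examples, and for brevity
the start and stop calls are shown within one block; in a real configuration the returned statID would
typically be kept, for example in a map, until the corresponding response UDR arrives.

consume {
    // Start a measurement; "CCR_CCA" and "data_session" are example keys only.
    any statID = latencyStart("CCR_CCA", "data_session");

    // ... the work to be measured, normally ending when the matching
    // response arrives, would take place here ...

    long latency = latencyStop(statID);
    if (latency < 0) {
        // Negative return codes are described under latencyStop above.
        debug("Latency measurement was not recorded");
    }
    udrRoute(input);
}

The debug and udrRoute calls are standard APL functions; see the APL Reference Guide for details.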

10.18.4. Related UDR Types


There are two UDR types used in the output from the collector, LatencyHistogram and
LatencyHistogramList. In the UDR Assistance Browser they will be visible under the
LatencyStat tree node.

The UDR types are presented below.

10.18.4.1. LatencyHistogramList UDR


The LatencyHistogramList UDR Type is used to collect latency statistics.

internal LatencyHistogramList {

long startTime;
// Time when measurement collection started as
// java.lang.System.currentTimeMillis()

int stopMismatchCount;
// Number of calls to latencyStop that did not
// match an id, i.e. where -1 was returned (see
// description for latencyStop)

long stopTime;
// Time when measurement collection stopped as
// java.lang.System.currentTimeMillis()

list <LatencyHistogram> histogramList;
// Unordered list of histograms that were created
// during the stopTime-startTime execution
};

10.18.4.2. LatencyHistogram UDR


The LatencyHistogram UDR Type is used to collect latency statistics for one histogram identity.

The UDR type is presented below.

internal LatencyHistogram {
    any key1;            // e.g. CCR_CCA
    any key2;            // e.g. source, event_type, etc
    int granularity;     // as defined in workflow
    int bucketCount;     // as defined in workflow
    long timeout;        // as defined in workflow
    list <int> buckets;  // measurement counts assigned to
                         // appropriate buckets

    int totalCount;
    // Sum of measurements in buckets[0..n] cells +
    // outsideBucketCount

    int outsideBucketCount;
    // Number of measurements that fell outside bucket
    // duration but before timeout duration (i.e. had
    // a confirmed latencyStop call before timeout
    // milliseconds)

    int notStoppedCount;
    // Number of measurements that fell outside timeout
    // duration. These were either lost or exposed
    // to a configuration error (i.e. that latencyStart
    // was called without corresponding latencyStop)

    int negativeLatencyCount;
    // Number of calls to latencyStop that resulted in
    // a negative latency (see description for latencyStop)
};
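
As a sketch of how the emitted statistics could be examined in a downstream Analysis agent, the
following code walks through the histograms of a received LatencyHistogramList UDR. Depending
on your configuration, the UDR type names may need to be qualified (the types appear under the
LatencyStat node in the UDR browser), and the reaction inside the loop is only illustrative.

consume {
    LatencyHistogramList histList = (LatencyHistogramList) input;

    int i = 0;
    while (i < listSize(histList.histogramList)) {
        LatencyHistogram hist = listGet(histList.histogramList, i);
        if (hist.notStoppedCount > 0) {
            // Some measurements were never stopped before the timeout; this
            // may indicate lost responses or a missing latencyStop call.
            debug("Histogram contains measurements that were never stopped");
        }
        i = i + 1;
    }
    udrRoute(input);
}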


11. Appendix III - Processing agents

11.1. Aggregation Agent


11.1.1. Introduction
This section describes the Aggregation agent. This is a standard agent on the DigitalRoute®
MediationZone® Platform.

11.1.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• UDR structure and contents

• Analysis Programming Language

• Couchbase database

11.1.2. Overview
The Aggregation agent enables you to consolidate related UDRs that originate from either a single
source, or from several sources, into a single UDR. Related UDRs are grouped into sessions according
to configurable data fields in each of the UDRs. The agent uses APL coded criteria (if) series to
associate a specific partial UDR to another, or to a session that already includes a matching UDR.

The agent stores the session in a file system or in a Couchbase database. When the agent is about to
save a group of UDRs, it creates a UDR list by using the APL code.

To ensure the integrity of the session's data in the storage, the Aggregation agent may use read and
write locks. When using file storage and an active agent has write access, no other agent can read or
write to the same storage. It is possible to grant read-only access for multiple agents, provided that the
storage is not locked by an agent with write access. When using Couchbase storage, multiple
Aggregation agents can be configured to read and write to the same storage. In this case, write locks
are only enforced for sessions that are currently updated, not for the entire storage. For information
about how to configure read-only access, see Section 11.1.3.4, “Agent Configuration - Batch” or
Section 11.1.3.5, “Agent Configuration - Real-Time”.

In a batch workflow, the aggregation agent receives collected and decoded UDRs one by one.

Figure 308. Aggregation in a Batch Workflow

In a real-time workflow, the aggregation agent may receive UDRs from several different agents
simultaneously.


Figure 309. Aggregation in a Real-Time Workflow

Figure 310, “The Aggregation Flow Chart” illustrates how an incoming UDR is treated when it is
handled by the Aggregation agent. If the UDR leaves the workflow without having called any APL
code, it is handed over to error handling. For detailed information about handling unmatched UDRs
see Section 11.1.3.4.1, “General Tab” and Section 11.1.3.5.1, “General Tab”.

Figure 310. The Aggregation Flow Chart

When several matching sessions are found, the first one is updated. If this occurs, redesign the workflow;
there must always be zero or one matching session for each UDR.

11.1.3. Configuration
You configure the Aggregation agent with these steps:

1. Define a session UDR in Ultra format.

2. Define a Couchbase profile (when using Couchbase storage).

3. Define an Aggregation profile.


4. Configure the agent.

11.1.3.1. SessionUDRType
Each Aggregation profile stores sessions of a specific Session UDR Type, defined in Ultra. For further
information about Ultra formats, see the MediationZone® Ultra Reference Guide.

You define a Session UDR Type in the same way as you define internal Ultra types, with only one
difference: replace the keyword internal with session.

Example 72.

session SessionUDRType {
int intField;
string strField;
list<drudr> udrList;
};

Note! Take special precautions when changing, updating or renaming formats. If the updated
format does not contain all the fields of the historical format, in which UDRs may already reside
within the ECS or Aggregation storage, data will be lost. When a format is renamed, it will still
contain all the fields; the data, however, cannot be collected.

Note! In general, the session UDR should be kept as small as possible. A larger UDR decreases
performance compared to a small one.

11.1.3.2. Couchbase Profile


For information about how to configure a Couchbase profile, see Section 9.2, “Couchbase Profile”.

11.1.3.3. Aggregation Profile


You can apply an Aggregation profile to any number of workflow configurations. Aggregation sessions
created in the storage that is specified by the profile can be accessed by multiple active workflows
simultaneously.

When using file storage and sharing an Aggregation profile across several workflow configurations,
the read and write lock mechanisms that are applied to the stored sessions must be considered. For
information about read and write locks, see Section 11.1.3.4, “Agent Configuration - Batch” or
Section 11.1.3.5, “Agent Configuration - Real-Time”.

The Aggregation profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow.

To open the configuration, click the New Configuration button in the upper left part of the
MediationZone® Desktop window, and then select Aggregation Profile from the menu.

11.1.3.3.1. Aggregation Profile Menu

The main menu changes depending on which configuration type that has been opened in the currently
active tab. There is a set of standard menu items that are visible for all configurations and these are
described in Section 3.1.1, “Configuration Menus”.

The Edit menu is specific for Aggregation profile configurations.


11.1.3.3.1.1. The Edit Menu

Item Description
External References To Enable External References in an agent profile field. For detailed instruc-
tions, see Section 9.5.3, “Enabling External References in an Agent Profile
Field”.

11.1.3.3.2. Aggregation Profile Buttons

The toolbar changes depending on which configuration type that is currently open in the active tab.
There is a set of standard buttons that are visible for all configurations and these buttons are described
in Section 3.1.2, “Configuration Buttons”.

There are no additional buttons for Aggregation profile.

11.1.3.3.3. Session Tab

In the Session tab you can browse and select Session UDR Type and configure the Storage selection
settings.

Figure 311. The Aggregation Profile Editor - Session Tab

Session UDR Type Click on the Browse... button and select the Session UDR Type, defined in Ultra,
that you want to use, see Section 11.1.3.1, “SessionUDRType”.
Storage Select the type of storage for aggregation sessions. The available settings are
File Storage and Couchbase.

File Storage can be used for both batch and real-time workflows.

Couchbase can only be used for real-time workflows, for which it is the preferred
setting.

Couchbase as storage allows highly available systems with geographic redundancy.
The session data that is replicated within the storage is available across
workflows, Execution Contexts and systems. This serves to minimize data loss
in failover scenarios.

Note! Data stored in Couchbase is not available in the Aggregation Session
Inspector.


11.1.3.3.4. Association Tab

You use the Association tab to configure rules that are used to match an incoming UDR with a session.
Every UDR type requires a set of rules that are processed in a certain order. In most cases only one
rule per incoming UDR type is defined.

You can use a primary expression to filter out UDRs that are candidates for a specific rule. If the UDR
is selected by the primary expression, it is matched with the existing sessions by using one or several
ID Fields as a key.

For UDRs with ID Fields matching an existing session, an additional expression may be used to specify
additional matching criteria. For example: If dynamic IP addresses are provided to customers based
on time intervals, the field that contains the IP address could be used in ID Fields while the actual
time could be compared in the Additional Expression.

Figure 312. The Aggregation Profile Editor - Association Tab

UDR Types Click on the Add button to select a UDR Type in the UDR Internal Format dialog.
The selected UDR type will then appear in this field. Each UDR type may have a
list of rules attached to it. Selecting the UDR type will display its rules as separate
tabs to the right in the Aggregation profile configuration.
Primary Expression (Optional) Enter an APL code expression that is evaluated before the ID
Fields are evaluated. If the evaluation result is false, the rule is ignored and the
evaluation continues with the next rule.

Use the input variable to write this filtering expression. See the example after
this table.

ID Fields Click on the Add button to select additional ID Fields in the ID Fields dialog. These
fields, along with the Additional Expression settings, will enable MediationZone®
to determine whether a UDR belongs to an existing session or not. If the contents of
the selected fields match the contents of a session, and an Additional Expression
evaluation results in true, the UDR belongs to the session.


Note! Make sure that the selected fields are of the same type and appear in
the same order for all the rules that are defined for the agent.

Additional Expression (Optional) Enter an APL code expression that is evaluated along with
the ID Fields.

Use the input variable to write this filtering expression.

The Additional Expression is useful when you have several UDR types with a
varying number of ID Fields that are about to be consolidated. Having several UDR
types requires the ID fields to be equal in number and type. If one of the types requires
additional fields that do not have any counterpart in the other type or types, these
must be evaluated in the Additional Expression field. Save the field contents as a
session variable, and compare the new UDRs with it. For an example, see
Section 11.1.5.2, “Association - Radius UDRs”.
Create Session on Failure Select this check box to create a new session if no matching session is found. If the
check box is not selected, a new session will not be created when no matching session
is found.

Note! If you provide a primary expression, and it evaluates to false, the rule
is ignored and no new session is created.

If the order of the input UDRs is not important, all the rules should have this check
box checked. This means that the session object is going to be created regardless of
the order in which the UDRs arrive.

However, if the UDRs are expected to arrive in a particular sequence, Create Session
on Failure must only be selected for the UDR type/field that is considered to be the
master UDR, i.e. the UDR that marks the beginning of the sequence. In this case, all
the slave UDR types/fields are targeted for error handling if they arrive before their
master UDR.

Note! At least one of all defined rules must have this check box selected.
Otherwise, no session will ever be created.

Add Rule Click on this button to add a new rule for the selected UDR Type. The rule will appear
as a new folder to the right of the UDR Types in the Aggregation profile configuration.

Usually only one rule is required. However, in a situation where a session is based
on IP number, stored in either a target or source IP field, two rules are required. The
source IP field can be listed in the ID Fields of the first rule and the target IP field
listed in the ID Fields of the second rule.
Remove Rule Click on this button to remove the currently displayed rule.
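
As a sketch of what the expression fields accept: both the Primary Expression and the Additional
Expression are boolean APL expressions written against the input variable. The field names below
are hypothetical and depend on your Ultra formats.

// Primary Expression: only evaluate this rule for accounting-start records
// (acctStatusType is a hypothetical field of the incoming UDR).
input.acctStatusType == 1

// A rule for another UDR type could use a different filter, for example:
input.recordType == "netflow" && input.bytes > 0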

11.1.3.3.5. Storage Tab


The Storage tab contains settings that are specific for the selected storage, i.e. File Storage or
Couchbase.


11.1.3.3.5.1. File Storage

Figure 313. The Aggregation Profile Editor - File Storage Settings

Storage Host Select a Storage Host from the drop-down list.

For storage of aggregation sessions, select either a specific Execution Context or
Automatic. If you select Automatic, the same Execution Context that has been used
by the running workflow will be applied. Alternatively, if the Aggregation Session
Inspector is used, a storage host is selected automatically. Please refer to
Section 11.1.4, “Aggregation Session Inspection” for further information on the
Aggregation Session Inspector.

Note! It is recommended that you configure the aggregation workflow to run
on the same Execution Context that you have selected as Storage Host.

Directory Enter the directory on the Storage Host where you want the aggregation data to be
stored.

Note! If the Storage Host above is configured to be Automatic, the corresponding
Directory has to be a shared file system between all the Execution Contexts.

Partial File Count In this field you can enter the maximum number of partial files that you want to be
stored. Consider the following:

• Startup: All the files are read at startup. It takes longer if there are many partial
files. This is significant especially in a High Availability solution.

• Transaction commitment: When the transactions are committed, many small files
(a large Partial File Count) increase performance.

In a batch workflow, use this variable to tune performance.

Note! In a real-time workflow, updates to sessions are saved on disk only if
the Timeout tab is configured with Storage Commit Conditions.

Max Cached Sessions Enter the maximum number of sessions to keep in the memory cache.

This is a performance tuning parameter that determines the memory usage of the
Aggregation agent. Set this value to be low enough so that there is still enough space
for the cache in memory, but not too low, as this will cause performance to deteriorate.
For further information see Section 11.1.3.12, “Performance Tuning with File Storage”.

11.1.3.3.5.2. Enabling External Referencing

You can use External References for the following fields:

• Directory

• Partial File Count

• Max Cached Sessions

Note! The fields listed above are only applicable when using file storage for aggregation.

You enable External Referencing of profile fields from the profile view's main menu. For detailed in-
structions, see Section 9.5.3, “Enabling External References in an Agent Profile Field”.

11.1.3.3.5.3. Couchbase

Figure 314. The Aggregation Profile Editor - Couchbase Storage

Profile Select a Couchbase profile. This profile is used to access the primary storage for
aggregation sessions.
Mirror Profile Selecting this Couchbase profile is optional. It is used to access a secondary storage,
providing read-only access for aggregation sessions. Typically, the mirror profile is
configured identically to a (primary) profile that is used by workflows on a different
Execution Context or other MediationZone® system. This is useful to minimize data
loss in various failover scenarios. The read-only sessions can be retrieved with APL
commands. For more information and examples, see the description of the Aggregation
functions in the APL Reference Guide.


Figure 315. Mirror Profile Concept

11.1.3.3.6. Advanced Tab

The Advanced tab is only available when you have selected Couchbase Storage. It contains properties
that can be used for performance tuning. For information about performance tuning, see
Section 11.1.3.13, “Performance Tuning with Couchbase Storage”.


Figure 316. The Aggregation Profile Editor - Advanced Tab

11.1.3.4. Agent Configuration - Batch


The batch Aggregation agent's configuration view contains two main tabs:

• The Aggregation tab - In a batch workflow this tab contains the three subsidiary tabs, General,
APL Code and Storage.

• The Thread Buffer tab - For further information about the Thread Buffer tab, see Section 4.1.6.2.1,
“Thread Buffer Tab”.

Note! The Thread Buffer tab is only available for batch workflows.

11.1.3.4.1. General Tab

The General tab enables you to assign an Aggregation profile to the agent and to define error handling.

With the Error Handling settings you can decide what you want to do if no timeout has been set in
the code or if there are unmatched UDRs.


Figure 317. The Aggregation Agent Configuration View - General Tab

Profile Click Browse and select an Aggregation profile. In batch workflows, the profile must
use file storage.

All the workflows in the same workflow configuration can use different Aggregation
profiles. For this to work, the profile has to be set to Default in the Field settings tab
in the Workflow Properties dialog. After that, each workflow in the Workflow Table
can be assigned with the correct profile.

Force Read Only Select this check box to only use the Aggregation Storage for reading aggregation
session data. Selecting this check box also means that the agent cannot create new
sessions when an incoming UDR cannot be matched to an existing session. A UDR
for which no matching session is found is handled according to the setting If No UDR
Match is Found.

If you enable the read-only mode, timeout and defragmentation handling is also
disabled.

When using file storage and sharing an Aggregation profile across several workflow
configurations, the read and write lock mechanisms that are applied to the stored
sessions must be considered:

• There can only be one write lock at a time in a profile. This means that all but one
Aggregation agent must have the Force Read Only setting enabled.

• If all of the Aggregation agents are configured with Force Read Only, any number
of read locks can be granted in the profile.

• If one write lock or more is set, a read lock cannot be granted.

If Timeout is Missing Select the action to take if timeout for sessions is not set in the APL code using
sessionTimeout. The setting is evaluated after each consume or timeout function
block has been called (assuming the session has not been removed).

The available options are:

• Ignore - Do nothing. This may leave sessions forever in the system if the closing
UDR does not arrive.

• Abort - Abort the agent execution. This option is used if a timeout must be set at
all times. Hence, a missing timeout is considered to be a configuration error.

• Use Default Timeout - Allow the session timeout to be set here instead of within
the code. If enabled, a field becomes available. In this field, enter the timeout, in
seconds.

If No UDR Match is Found Select the action that the agent should take when a UDR that arrives does not match
any session, and Create Session on Failure is disabled:

• Ignore - Discard the UDR.

• Abort - Abort the agent execution. Select this option if all UDRs are associated
with a session. This error case indicates a configuration error.

• Route - Send the UDR on the route selected from the on list. This is a list of output
routes on which the UDR can be sent. The list is activated only if Route is selected.

11.1.3.4.2. APL Code Tab

The APL Code tab enables you to manage the detailed behavior of the Aggregation agent. You use
the Analysis Programming Language (APL) with some limitations but also with additional functionality.
For further information see the APL Reference Guide.

The main function block of the code is consume. This block is invoked whenever a UDR has been
associated with a session.

The timeout block enables you to handle sessions that have not been successfully closed, e.g. if the
final UDR has not arrived. A sketch of these blocks is shown below the table.

Figure 318. Aggregation agent configuration window - APL Code Tab

Code Area This is where you write your APL code. For further information about the code
area and its right-click menu, see Section 2.2.7, “Text Editor”.
Compilation Test... Use this button to compile the entered code and check for validity. The status
of the compilation is displayed in a dialog. Upon failure, the erroneous line is
highlighted and a message, including the line number, is displayed.
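
The following is a minimal sketch of a consume and a timeout block, assuming the ExampleSession
type from Section 11.1.5.1 and a hypothetical input UDR with bytesDown, bytesUp and isLast fields.
The implicit variables and the exact signatures of the aggregation functions, such as sessionTimeout
and the function that removes a session, must be checked against the APL Reference Guide; the code
below only indicates the structure.

consume {
    // 'input' is the UDR that was associated with a session by the rules in
    // the Aggregation profile; 'session' is assumed to be the implicit
    // variable holding the session UDR.
    session.downloadedBytes = session.downloadedBytes + input.bytesDown;
    session.uploadedBytes = session.uploadedBytes + input.bytesUp;

    // Keep the session alive for another hour (assumed signature).
    sessionTimeout(session, 3600);

    if (input.isLast) {
        // All expected UDRs have arrived: route the consolidated session and
        // remove it using the removal function described in the APL
        // Reference Guide.
        udrRoute(session);
    }
}

timeout {
    // Invoked for sessions that were never closed, for example because the
    // final UDR did not arrive; route them for error handling.
    udrRoute(session);
}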

11.1.3.4.3. Storage Tab

The Storage tab contains settings that are specific for the selected storage in the Aggregation profile.
Different settings are available in batch and real-time workflows.


Figure 319. The Aggregation Agent Configuration View - Storage Tab for File Storage

Defragment Session Storage Files For batch workflows, the Aggregation Session Storage can optionally
be defragmented to minimize disk usage. When checked, configure the defragmentation
parameters:

Defragment After Every [] Batch(es) Run defragmentation after the specified number of batches. Enter
the number of batches to process before each defragmentation.

Defragment if Batch(es) Finishes Within [] Second(s) Set a value to limit how long the defragmentation
is allowed to run. This time limitation depends on the execution time of the last batch
processed. If the last batch is finished within the specified number of seconds, the
remaining time will be used for the defragmentation. The limit accuracy is +/- 5 seconds.

Defragment Session Files Older Than [] Minute(s) Run defragmentation on session storage files that are
older than this value to minimize moving recently created sessions unnecessarily often.

Note! Defragmentation is only available for batch workflows using file storage.

11.1.3.5. Agent Configuration - Real-Time


The real-time Aggregation agent's configuration view includes the tabs General, APL Code and
Storage.

11.1.3.5.1. General Tab

The General tab enables you to assign an Aggregation profile to the agent and to define error handling.

With the Error Handling settings you can decide what you want to do if no timeout has been set in
the code or if there are unmatched UDRs.

Figure 320. The Aggregation Agent Configuration View - General Tab

Profile Click Browse and select an Aggregation profile.

All the workflows in the same workflow configuration can use different Aggregation
profiles. For this to work, the profile has to be set to Default in the Field settings
tab in the Workflow Properties dialog. After this, each workflow in the Workflow
Table can be assigned with the correct profile.

Force Read Only Select this check box to only use the aggregation storage for reading aggregation
session data.

If you enable the read-only mode, timeout handling is also disabled.

When using file storage and sharing an Aggregation profile across several workflow
configurations, the read and write lock mechanisms that are applied to the stored
sessions must be considered:

• There can only be one write lock at a time in a profile. This means that all but one
Aggregation agent must have the Force Read Only setting enabled.

• If all of the Aggregation agents are configured with Force Read Only, any number
of read locks can be granted in the profile.

• If one write lock or more is set, a read lock cannot be granted.

If Timeout is Missing Select the action to take if timeout for sessions is not set in the APL code using
sessionTimeout. The setting is evaluated after each consume or timeout
function block has been called (assuming the session has not been removed).

The available options are:

• Ignore - Do nothing. This may leave sessions forever in the system if the closing
UDR does not arrive.

• Abort - Abort the agent execution. This option is used if a timeout must be set at
all times. Hence, a missing timeout is considered to be a configuration error.

• Use Default Timeout - Allow the session timeout to be set here instead of within
the code. If enabled, a field becomes available. In this field, enter the timeout, in
seconds.

If No UDR Match is Found Select the action that the agent should take when a UDR that arrives does not match
any session, and Create Session on Failure is disabled:

• Ignore - Discard the UDR.

• Log Event - Discard the UDR and generate a message in the System Log.

• Route - Send the UDR on the route selected from the on list. This is a list of output
routes through which the UDR can be sent. The list is activated only if Route is
selected.

11.1.3.5.2. APL Code Tab


The APL Code tab is identical for batch and real-time workflows. For a detailed description of this
tab see Section 11.1.3.4.2, “APL Code Tab”.

11.1.3.5.3. Storage

The Storage tab contains settings that are specific for the selected storage in the Aggregation profile.
Different settings are available in batch and real-time workflows.


11.1.3.5.3.1. File Storage

When using file storage for sessions in a real-time workflow, the Storage tab contains a setting to control
how often the timeout block should be executed. In this tab, it is also specified when the changes to
the aggregation data are written to file.

Figure 321. The Aggregation Agent Configuration View - Storage Tab for File Storage

Session Timeout Interval (seconds) Determines how often, in seconds, the timeout block is activated
for all outdated sessions.

Storage Commit Interval (seconds) Determines how often, in seconds, the in-memory data is saved to
files on disk.

Storage Commit Interval (#Processing Calls) Determines the number of Processing Calls before the
in-memory data is saved to files on disk. A 'Processing Call' is an execution of
any of the blocks consume, command or timeout.

If both this option and the Storage Commit Interval (seconds)
are configured, commits are made when either of them is fulfilled.

Note!

• If Storage Commit Interval (seconds) and/or Storage Commit Interval (#Processing
Calls) are configured, data left in memory when the workflow stops will be saved to file.

• If Storage Commit Interval (seconds) and Storage Commit Interval (#Processing Calls)
are not configured, none of the sessions that are in RAM are saved onto the local hard
disk. This also means that the session count displayed in the Aggregation Inspector will not
include these sessions.

• When the Max Cached Sessions in the Aggregation profile is exceeded, and Storage Commit
Interval (seconds) and Storage Commit Interval (#Processing Calls) are not configured,
the agent deletes the oldest session. This is done in order to allocate space for the new session
while still staying within the limit.

11.1.3.5.3.2. Couchbase

Figure 322. The Aggregation Agent Configuration View - Storage Tab for Couchbase


If Error Occurs in Storage Select the action that the agent should take when an error occurs in the storage:

• Ignore - Discard the UDR.

• Log Event - Discard the UDR and generate a message in the System Log.

• Route - Send the UDR on the route selected from the on list. This is a list
of output routes on which the UDR can be sent. The list is activated only if
Route is selected.

11.1.3.6. Transaction Behavior - Batch Workflow


This section includes information about the Aggregation agent's transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”

11.1.3.6.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Cancel Batch The agent itself does not emit Cancel Batch messages. However, if the code contains
a call to the method cancelBatch this causes the agent to emit a Cancel Batch.
Hint End Batch If the code contains a call to the method hintEndBatch, this causes the agent to
emit a Hint End Batch.
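
As a sketch, a call such as the following from within the APL code causes the agent to emit a Cancel
Batch. The condition and field name are hypothetical, and the exact cancelBatch signature is described
in the APL Reference Guide.

consume {
    if (input.recordType == "corrupt") {       // hypothetical condition
        cancelBatch("Corrupt input detected"); // makes the agent emit Cancel Batch
    }
}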

11.1.3.6.2. Retrieves

The agent retrieves commands from other agents and, based on those commands, changes the state
of the file currently processed.

Command Description
Begin Batch When a Begin Batch message is received, the agent calls the beginBatch function
block, if present in the code.
End Batch When an End Batch message is received, the agent calls the endBatch function
block, if present in the code.

Prior to End Batch, possible timeouts are called. Thus, when a time limit is reached,
the timeout function block will not be called until the next End Batch arrives. If the
workflow is in the middle of a data batch or is not currently receiving any data at all,
this could potentially be some time after the configured timeout.
Cancel Batch When a Cancel Batch message is received, the agent calls the cancelBatch function
block, if present in the code.

11.1.3.7. Transaction Behavior - Real-Time Workflow


This agent does not emit or retrieve any commands.

11.1.3.8. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces UDRs or bytearray types, depending on the code, since UDRs may be dynamically
created. It consumes any of the types selected in the UDR Types list.


11.1.3.9. Meta Information Model


The Aggregation agent publishes different MIM parameters depending on the storage selected in the
Aggregation profile. For information about the MediationZone® MIM and a list of the general MIM
parameters, see Section 2.2.10, “Meta Information Model”.

11.1.3.9.1. Publishes

11.1.3.9.1.1. File Storage

Note! The MIM parameters listed in this section are applicable when File Storage is selected
in the Aggregation profile.

MIM Parameter Description

Created Session Count This MIM parameter contains the number of sessions created.

Created Session Count is of the long type and is defined as a global
MIM context type.

This counter is reset each time the EC is started, but it might also be reset using
the resetCounters alternative through a JMX client. See Section 8.5, “Aggregation
Monitoring” for further information.

Online Session Count This MIM parameter contains the number of sessions cached in memory.

Online Session Count is of the int type and is defined as a global MIM
context type.

Session Cache Hit Count When an already existing session is read from the cache instead of disk, a cache
hit is counted.

This MIM parameter contains the number of cache hits.

Session Cache Hit Count is of the long type and is defined as a
global MIM context type.

This counter is reset each time the EC is started, but it might also be reset using
the resetCounters alternative through a JMX client. See Section 8.5, “Aggregation
Monitoring” for further information.

Session Cache Miss Count When an already existing session is requested and the Aggregation profile cannot
read the session information from the cache and instead reads the session information
from disk, a cache miss is indicated. If a non-existing session is requested,
this will not be counted as a cache miss.

This MIM parameter contains the number of cache misses counted by the
Aggregation profile.

Session Cache Miss Count is of the long type and is defined as a
global MIM context type.

This counter is reset each time the EC is started, but it might also be reset using
the resetCounters alternative through a JMX client. See Section 8.5, “Aggregation
Monitoring” for further information.

Session Count This MIM parameter contains the number of sessions in storage on the file
system.

Session Count is of the int type and is defined as a global MIM context
type.


11.1.3.9.1.2. Couchbase

Note! The MIM parameters listed in this section are applicable when Couchbase is selected
in the Aggregation profile.

MIM Parameter Description

Created Session Count This MIM parameter contains the number of sessions created.

Created Session Count is of the long type and is defined as a global
MIM context type.

This counter is reset each time the workflow is started.

Session Remove Count This MIM parameter contains the number of sessions removed.

Session Remove Count is of the long type and is defined as a global
MIM context type.

This counter is reset each time the workflow is started.

Mirror Attempt Count This MIM parameter contains the total number of attempts to retrieve a stored
mirror session.

Mirror Attempt Count is of the long type and is defined as a global
MIM context type.

This counter is reset each time the workflow is started.

Mirror Error Count This MIM parameter contains the number of failed attempts to retrieve a stored
mirror session, where the failure was caused by one or more errors.

Mirror Error Count is of the long type and is defined as a global MIM
context type.

This counter is reset each time the workflow is started.

Mirror Found Count This MIM parameter contains the number of successful attempts to retrieve a
stored mirror session.

Mirror Found Count is of the long type and is defined as a global MIM
context type.

This counter is reset each time the workflow is started.

Mirror Not Found Count This MIM parameter contains the number of attempts to retrieve a stored mirror
session that did not exist.

Mirror Not Found Count is of the long type and is defined as a global
MIM context type.

This counter is reset each time the workflow is started.

Mirror Latency This MIM parameter contains comma separated counters that each contain the
number of mirror session retrieval attempts for a specific latency interval. Attempts
that failed due to errors are not counted.

The parameter contains 20 counters for a series of 100 ms intervals. The first
interval is from 0 to 99 ms and the last interval is from 1900 ms and up.

Example 73.

The value 1000,100,0,0,0,0,0,0,0,0,0,0,0,0,1 should be
interpreted as follows:

• There are 1000 mirror session retrieval attempts with a latency of 99
ms or less.

• There are 100 mirror session retrieval attempts with a latency of 100
ms to 199 ms.

• There is one mirror session retrieval attempt with a latency of 1999 ms
or more.

Mirror Latency is of the String type and is defined as a global MIM
context type.

This counter is reset each time the workflow is started.

Session Timeout Count This MIM parameter contains the number of sessions that have timed out.

Session Timeout Count is of the long type and is defined as a global
MIM context type.

This counter is reset each time the workflow is started.

Session Timeout Attempt Count Multiple timeout threads may read the same session data from Couchbase but
only one of them will perform an update. If a thread reads a session that has
already been updated, it will be counted as an attempt. This MIM parameter
contains the number of attempts.

Session Timeout Attempt Count is of the long type and is defined
as a global MIM context type.

This counter is reset each time the workflow is started.

Session Timeout Latency This MIM parameter contains comma separated counters that each contain the
number of sessions for a specific timeout latency interval, i.e. the difference
between the actual timeout time and the expected timeout time.

The parameter contains 15 counters for a series of one-minute intervals. The first
interval is from 0 to 1 minutes and the last interval is from 14 minutes and up.

Example 74.

The value 1000,100,0,0,0,0,0,0,0,0,0,0,0,0,1 should be
interpreted as follows:

• There are 1000 sessions with a timeout latency that is less than one
minute.

• There are 100 sessions with a timeout latency of one to two minutes.

• There is one session with a timeout latency of 14 minutes or more.

Session Timeout Latency is of the String type and is defined as a
global MIM context type.

This counter is reset each time the workflow is started.

11.1.3.9.2. Accesses

The agent does not itself access any MIM resources. However, APL offers the possibility of both
publishing and accessing MIM resources and values.

11.1.3.10. Agent Message Events


The agent does not itself produce any message events. However, APL offers the possibility of producing
events.

11.1.3.11. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Aggregation Storage implementation

This event is reported during workflow initialization. It shows the selected storage type, i.e. file
storage or Couchbase.

You may also configure debug messages in the APL code.

11.1.3.12. Performance Tuning with File Storage


This section describes how the aggregation cache works and what considerations should be taken when
determining the memory usage of the Aggregation agent.

Note! The information in this section is only applicable when using file storage.

11.1.3.12.1. Aggregation Cache

The Aggregation agent can store sessions on the file system using a storage server, but also in a cache.
The maximum size of the cache will be determined by the Max Cached Sessions parameter in the
Aggregation profile (see Section 11.1.3.3, “Aggregation Profile”) and the average size in memory of
a session. It is difficult to estimate the exact memory consumption through testing but the following
should be considered when implementing an Aggregation workflow:

1. Try to keep the session data small. Specifically, do not use large maps or lists in the sessions. These
will take up a lot of memory.

2. If memory issues are encountered, try decreasing the Max Cached Sessions. In order to find out
if the cache size is over dimensioned, you can study the memory of the Execution Context that is
hosting the workflow in System Statistics. For information about System Statistics, see Section 7.12,
“System Statistics”
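
As a rough, purely hypothetical sizing example: if an average session occupies about 2 KB in memory,
a Max Cached Sessions value of 500 000 corresponds to roughly 1 GB of heap for the aggregation
cache alone, which must fit in the Execution Context's memory together with everything else the
workflows need.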

To avoid a large aggregation cache causing out of memory errors, the aggregation cache detects that
the memory limit is reached. Once this is detected, sessions will be moved from the memory cache to
the file system.


Note! This has a performance impact, since the agent will have to read these sessions from the
file system if they are accessed again. The Aggregation agent will log information in the Execu-
tion Context's log file in case the memory limit has been reached and the size of the cache needs
to be adjusted.

It is also possible to specify when updated aggregation sessions shall be moved from the cache to the
file system by setting the mz.aggregation.storage.maxneedssync property in the
executioncontext.xml file. This property shall be set to a value lower than Max Cached Ses-
sions. For performance reasons, this property should be given a reasonably high value, but consider
the risk of a server restart. If this happens, the cached data might be lost.

Hint! To speed up the start of workflows that run locally (on the Execution Context), set the
mz.aggregation.storage.profile_session_cache property in the
executioncontext.xml file to true (default value is false).
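
For example, the property is added to the executioncontext.xml file in the same way as the other
aggregation properties shown in this section:

<property name="mz.aggregation.storage.profile_session_cache" value="true"/>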

By doing so, the aggregation cache will be kept in memory for up to 10 minutes after a workflow
has stopped.

This in turn enables another workflow, that runs within a 10 minute interval after the first
workflow has stopped, and that is configured with the same profile, to use the very same allocated
cache.

Note that since the cache remains in memory for up to 10 minutes after a workflow stopped
executing, other workflows using other profiles might create caches of their own during this
time.

The memory space of the respective aggregation caches will add up in the heap. If the Execution
Context at a certain point runs out of memory, performance deteriorates as cache is cleared and,
as a result, sessions have to be read from and written to disk.

The profile session cache functionality will only be enabled in batch workflows where the Ag-
gregation profile is not set to read-only, and the storage is placed locally to the Execution Context.

11.1.3.12.2. Memory Handling in Real-Time

Warning! In real-time, when memory caching is used without any file storage, i.e. Storage Commit
Interval is set to zero, make sure that you carefully scale the cache size to avoid losing a session
due to cache over-runs. An over-run cache is recorded by a system event in the System Log.
For further information, see Section 11.1.4, “Aggregation Session Inspection”.

While the aggregation cache will never cause the Execution Context to run out of memory, it is still
recommended that you set the Max Cached Sessions low enough so that there is enough space for
the full cache size in memory. This will increase system performance.

11.1.3.12.3. Multithreading

If you have many sessions ending up in timeout, you can improve the performance by enabling
multithreading, i.e. using a thread pool, for the timeout function block in the Aggregation agent. When
multithreading is enabled, the workflow can hand over sessions to the pool via the queue without
having to wait for the read operations to complete, since the threads in the thread pool will take care
of that. With many threads, the throughput of read operations completed per second can be maximized.

Multithreading is enabled by adding the mz.aggregation.timeout.threads property, with
a value larger than 0, in the executioncontext.xml file.


Example:

<property name="mz.aggregation.timeout.threads" value="8"/>

11.1.3.13. Performance Tuning with Couchbase Storage


This section describes how the Aggregation agent uses the Couchbase storage and the settings that
are available for tuning.

Note! This information is only applicable when using Couchbase storage.

Warning! Setting or changing the aggregation properties that are described in the following
sections can have a negative impact on performance and may also cause loss of data. These
properties should only be used after consulting a MediationZone® expert.

11.1.3.13.1. Views and Indexes

When Couchbase is selected as the storage type in an Aggregation profile, a bucket is automatically
created during execution of a workflow. The bucket is named according to the configuration of the
assigned Couchbase profile. The bucket is populated with JSON documents that contain the aggregation
session data. This makes it possible to index the timeout information of aggregation sessions in
Couchbase.

The aggregation session data is fetched using a Couchbase view. The default name of the view is
timeout. Changing the default name is not recommended, even though it is possible to do so by
setting the property mz.cb.agg.viewname in the Advanced tab in the Aggregation profile. For
further information, see Section 11.1.3.3.6, “Advanced Tab”.

The data returned by the view is split into chunks of a configurable size. The size of each partial set
of data can be configured by setting the property view.iteratorpageSize in the Advanced tab
of the assigned Couchbase profile. Setting a higher value than the default 1000 may increase
throughput performance, but this depends on the available RAM of the Execution Context host.

You can choose to update the result set from a view before or after a query. Or you can choose to retrieve
the existing result set from a view. In this case the results are possibly out of date, or stale. To control
this behavior, you can set the property view.index.stale in the Advanced tab of the assigned
Couchbase profile. The following settings are available:

• FALSE - The index is updated before the query is executed. This ensures that any documents updated
(and persisted to disk) are included in the view. The client waits until the index has been updated
before the query is executed, and therefore the response is delayed until the updated index is available.

• OK - The index is not updated. If an index exists for the given view, the information in the current
index is used as the basis for the query and the results are returned accordingly. This value is seldom
used and only if automatic index updates are enabled in Couchbase.

• UPDATE_AFTER - This is the recommended setting when using a Couchbase profile with Aggreg-
ation. The existing index is used as the basis of the query, but the index is marked for updating once
the results have been returned to the client.

For more information about views and indexes, see
http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#views-and-indexes


11.1.3.13.2. Timeout

By default, there are two timeout threads that periodically check the Couchbase aggregation storage
for timed out sessions. You can control how often this check is performed by setting
mz.cb.agg.timeoutwait.sec in the Advanced tab in the Aggregation profile. The default
value is 10 seconds. For further information, see Section 11.1.3.3.6, “Advanced Tab”.

You can also increase the number of threads that perform this check by setting the property
mz.cb.agg.timeout_no_of_thread. Setting a higher value than default may speed up detection
of timeouts. However, the number of CPUs and the time that it takes for Couchbase to index accessed
documents (session data) are limiting factors.

Hint! You can use the MIM parameter Session Timeout Latency as an indicator of the
timeout handling performance.

The sessions that are fetched from the Couchbase view are shuffled randomly in temporary buffers,
one for each workflow. This is done to minimize the probability that multiple workflows attempt to
time out the same sessions simultaneously. You can control the size of these buffers by setting
mz.cb.agg.randombuffer in the Advanced tab in the Aggregation profile. The default value
is 1000 sessions.

11.1.3.13.3. Collision Check

An Aggregation agent may receive a duplicate UDR that is handled within the same session but by
a different workflow. When set to true, the property mz.cb.agg.collision.check prevents
the last of the duplicate UDRs from being added to the session. The additional checks in the session
data that are required for this may have a negative impact on throughput performance. The default
value of this property is false.

11.1.3.13.4. Asynchronous Mode

When the Aggregation agent creates a session or adds a UDR to an existing session, it waits for a
response from the Couchbase storage before proceeding to the next UDR in the queue. This synchronous
mode is enabled by default.

The property mz.cb.agg.useasync enables asynchronous storing of aggregation sessions in
Couchbase. This causes the Aggregation agent to pass session data to separate threads that are
responsible for the storage handling. The number of threads can be set with the property
mz.cb.agg.async.listenerthreads. The default number of threads is 10.

Note! When enabling asynchronous mode, it is strongly recommended to also set the Queue
Worker Strategy to RoundRobin. This setting is available for real-time workflows in the
Execution tab of the Workflow Properties. The default strategy may cause the workflow to hang.
For information about the Queue Worker Strategy, see Section 4.1.8.4, “Execution Tab”.

In asynchronous mode, the maximum number of pending requests to Couchbase can be limited with
the property mz.cb.agg.async.nroutstandingrequests. The Aggregation agent blocks
incoming UDRs when the limit is reached. The default limit is 1000.

11.1.3.13.5. Automated Index Updates

In order to obtain the best possible performance in the Aggregation agent, you should disable automatic
index updates in Couchbase.

From a terminal window, update the index settings using the curl tool.


curl -u <Couchbase administrator user>:<password> <IP address or hostname>:8091/settings/viewUpdateDaemon -d updateMinChanges=0

curl -u <Couchbase administrator user>:<password> <IP address or hostname>:8091/settings/viewUpdateDaemon -d updateInterval=0

You may specify the IP address or hostname of any available node in the Couchbase cluster. If the
updates are successful, the changes will be applied to all nodes.

For more information about automated index updates, see
http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#couchbase-views-operation-autoupdate.

11.1.3.14. APL extensions


The Aggregation agent is also configured with APL code, see the APL Reference Guide for further
information about available functions for the Aggregation agent.

11.1.4. Aggregation Session Inspection


The Aggregation Session Inspector enables viewing and editing of existing sessions. The data is
displayed in a table where each row represents a session, with the session data ordered in configurable
columns. It is also possible to edit the contents of the sessions, that is, to change the timeout and the
values of the session variables.

Note! Aggregation Session Inspector only inspects sessions stored on disk. Hence, a real-time
workflow Aggregation agent that is not configured with any Storage Commit Interval, or that
uses Couchbase for storage, will not show any sessions.

A real-time workflow Aggregation agent that is configured with a Storage Commit Interval
will not show all sessions.

To open the Aggregation Session Inspector, click the Tools button in the upper left part of the
MediationZone® Desktop window, and then select Aggregation Session Inspector from the menu.

Figure 323. The Aggregation Session Inspector

Initially the window is empty and must be populated with data using the corresponding Search Sessions
dialog, see Section 11.1.4.2, “The Search Sessions Dialog” for details. The following section describes
the options in the Edit menu. The File and View menus contain standard options for saving, closing
and refreshing.

11.1.4.1. The Edit menu


The Edit menu contains the following options:

Search... Displays the Search Sessions dialog where search criteria may be defined to identify
the group of sessions to be displayed, see Section 11.1.4.2, “The Search Sessions
Dialog” for further information.


Explore Session Displays a new window where the session variables may be viewed and, if Read Only
was disabled in the Search Sessions dialog, the session variables may be edited as well.
The session is displayed in a UDR Viewer window.

Note! The window also appears when you double-click the Index column of
the session.

Validate Storage When you select the Validate Storage menu item, after performing a search, all the
aggregation storage session files are validated. This is done by attempting to read the
session data to establish what can and cannot be read. If the storage contains references
to corrupt sessions, an option to remove them is given.

Views Allows a more detailed view of the UDRs in the session list. For further information
about UDR Views, see the MediationZone® Ultra Reference Guide.

11.1.4.2. The Search Sessions Dialog


When you select the Search... option in the Edit menu, the Search Sessions dialog opens where you
can select which group of sessions you want to view.

Figure 324. The Search Sessions Dialog

Profile Select the Aggregation profile that corresponds to the data of interest.
Timeout Period If you select this check box you can select a timeout period from which you want
to display data. You can either select the User Defined option in the drop-down
list and then enter date and time in the From and To fields, or you can select one
of the predefined time intervals in the drop-down list; Today, Yesterday, This
Week, Previous Week, Last 7 Days, This Month or Previous Month.
Search Handling Disable Read Only if the content of the sessions needs to be altered. Exclusive access
to the repository is required to alter the sessions, meaning that if a currently running
workflow is using the selected profile, that workflow must be stopped in order to get
exclusive access.

Disable Limit results if you want to fetch all sessions in the session index; this can
be time and memory consuming. Change Limit results to get fewer or more results.
The total number of results is briefly shown in the status bar of the search result
window.

11.1.5. Example - Association of IP Data


To illustrate the Aggregation agent's features, an association example according to the following
workflow setup is presented below. The workflow is handling IP traffic data, and will group information
from routers and the corresponding network access servers.

Figure 325. An example where an Aggregation agent is used to associate IP data

The Netflow agent collects router data and logs the interacting network elements' addresses and amount
of bytes handled, while the Radius agent keeps track of who has initiated the connection, and for how
long the connection was up. Thus, each user login session will consist of two Radius UDRs (start and
stop), and one or several Netflow UDRs. The Aggregation agent is used to associate this data from
each login session. These additional rules apply:

• A Radius UDR belonging to a specific login session must always arrive before its corresponding
Netflow UDRs. If a Netflow UDR arrives without a preceding Radius UDR, it must be deleted.

• Within a Netflow UDR, the user initiating the session may act as a source or destination, depending
on the direction of data transfer. Thus, it is important to match the IP address from the Radius UDRs
with source or destination IP from the Netflow UDRs.

Note! The Radius specific response handling will not be discussed in this example. For further
information, see Section 13.1, “Radius Agents”.

11.1.5.1. Session Definition


For each session, all the necessary data must be saved. A suggestion of useful variables for this scenario
is described below.

Note! The input UDRs are not stored. Information from the UDRs is extracted and saved in the
session variables.

The Ultra definition for the session type


session ExampleSession {
string user;
string IPAddress;
long sessionID;
long downloadedBytes;
long uploadedBytes;
};

user The user initiating the network connection. This value is fetched from the
start Radius UDR.
IPAddress The IP address of the user initiating the network connection. This value is
fetched from the start Radius UDR.
sessionID A unique ID grouping a specific network connection session for the specific
user. This value is fetched from the start Radius UDR.
downloadedBytes The amount of downloaded bytes according to information extracted from
Netflow UDRs.
uploadedBytes The amount of uploaded bytes according to information extracted from
Netflow UDRs.

11.1.5.2. Association - Radius UDRs


The Radius UDRs are the Aggregation session-initiating units. They may be of two types in this ex-
ample; start or stop.

Pay attention to the use of the Additional Expression. The fields associating the start and stop Radius
UDRs are framedIPAddress and acctSessId. However, since there is no field matching the
latter within the Netflow UDRs, this field cannot be entered in the ID Fields area.


Figure 326. The Aggregation Profile - Association Tab - Radius UDRs

This is how arriving Radius UDRs are evaluated when configured according to Figure 326, “The Ag-
gregation Profile - Association Tab - Radius UDRs”:

1. Initially, the UDR is evaluated against the Primary Expression. If it evaluates to false, all further
validation is interrupted and the UDR will be deleted without logging (since no more rules exist).
Usually invalid UDRs are set to be deleted. In this case, only the UDRs of type start
(acctStatusType=1) or stop (acctStatusType=2) are of interest.

2. If the Primary Expression evaluation was successful, the fields entered in the ID Fields area, together
with the Additional Expression are used as a secondary verification. If it evaluates to true, the
UDR will be added to the session, if not - refer to subsequent step.

3. Create Session on Failure is the final setting. It indicates if a new session will be created if no
matching session has been found in step 2.

11.1.5.3. Association - Netflow UDRs


As previously mentioned, the IP address to match against in the Netflow UDRs depends on if data is
being uploaded or downloaded. This results in the session initiator being either the source or destination.
Hence, both these fields need to be evaluated in the Aggregation agent:


Figure 327. The Aggregation Profile Editor - Association Tab - Netflow UDRs

This is how arriving Netflow UDRs are evaluated when configured according to Figure 327, “The
Aggregation Profile Editor - Association Tab - Netflow UDRs”:

1. If the DestinationIP, situated in the ID Fields area in the first Rules tab, does not match any existing
session, no new session is created. If a match is found, the UDR is associated with this session.

2. Regardless of the outcome of the first rule, all rules are always evaluated. Hence the second rule is
evaluated. If the SourceIP situated in the ID Fields area in the second Rules tab does not match
any existing session, no new session is created. If a match is found, the UDR is associated with this
session.

Note! Since Create Session on Failure is not enabled for any of the rules, the UDRs which do
not find a matching session will be deleted and cannot be retrieved.

11.1.5.4. The APL Code


From the APL code (the agent configuration window), all actions related to both initiating and
matching a session are defined. When a session is considered associated, the session variables are
saved in a new UDR Type (outputUDR(out)) containing fields with the same name as the variables.
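
The Ultra definition of this output UDR type is not shown in the example. As a hedged sketch only, assuming it is declared as an internal format in the Out configuration of the Example module (matching the import statement in the APL code below) and mirrors the session variables used by the code, it could look like this:

internal OutputUDR {
    string user;
    string IPAddress;
    long downloadedBytes;
    long uploadedBytes;
};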

Note! The timeout of a session is set to five days from the current date. Outdated sessions are
removed and their data is transferred to a UDR of type outputUDR, which is sent to ECS.

import ultra.Example.Out;

sessionInit {
Accounting_Request_Int radUDR =
(Accounting_Request_Int) input;
session.user = radUDR.User_Name;
session.IPAddress = radUDR.framedIPAddress;
session.sessionID = radUDR.acctSessionId;
}

consume {
/* Radius UDRs.
If a matching session is found, then there are two Radius UDRs


and the session is considered completed.


Remove session and route the new UDR. */

if (instanceOf(input, Accounting_Request_Int)) {
Accounting_Request_Int radUDR = (Accounting_Request_Int)input;

if (radUDR.acctStatusType == 2 ) {
OutputUDR finalUDR = udrCreate( OutputUDR );
finalUDR.user = session.user;
finalUDR.IPAddress = (string)session.IPAddress;
finalUDR.downloadedBytes = session.downloadedBytes;
finalUDR.uploadedBytes = session.uploadedBytes;
udrRoute( finalUDR );
sessionRemove(session);
return;
}
}

/* Netflow UDRs.
Depending on if the user downloaded or uploaded bytes, the
corresponding field data is used to update session variables. */

if (instanceOf(input, V5UDR)) {
V5UDR nfUDR = (V5UDR)input;

if ( session.IPAddress == nfUDR.SourceIP ) {
session.downloadedBytes = session.downloadedBytes +
nfUDR.BytesInFlow;
} else {
session.uploadedBytes = session.uploadedBytes +
nfUDR.BytesInFlow;
}
}

// A session will be considered outdated in 5 days.


date timer=dateCreateNow();
dateAddDays( timer, 5 );
sessionTimeout( session, timer );
}

timeout {
// Outdated sessions are removed, and a resulting UDR is sent on.
OutputUDR finalUDR = udrCreate( OutputUDR );
finalUDR.user = session.user;
finalUDR.IPAddress = (string)session.IPAddress;
finalUDR.downloadedBytes = session.downloadedBytes;
finalUDR.uploadedBytes = session.uploadedBytes;
udrRoute( finalUDR );
sessionRemove(session);
}


11.2. Analysis Agent


11.2.1. Introduction
This section describes the Analysis agent. This agent is a standard agent on the DigitalRoute® Medi-
ationZone® Platform.

The Analysis agent can be part of both batch and realtime workflows. Differences in the configuration
are described in Section 11.2.2.2.2, “Realtime Workflows”.

The Analysis Programming Language, APL, used by the Analysis agent is described in the APL Ref-
erence Guide.

If not stated elsewhere, the Oracle database is assumed.

11.2.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• UDR structure and content

• Basic programming

For information about Terms and Abbreviations used in this document, see the Terminology document.

11.2.2. Analysis Agent


The Analysis agent is used to process UDRs and, for example, generate audit data and dispatch events
in the system. These activities are implemented by writing code in the rich Analysis Programming
Language (APL). Depending on the configured APL code, the Analysis agent can either be a pure
processing agent with the ability to examine, alter, route and clone each UDR routed to the agent,
or it can be the final destination for a UDR in the workflow; a sort of forwarding agent.

See the APL Reference Guide for descriptions of the Analysis Programming Language, APL, and the
available functions.

11.2.2.1. APL Code Editor


The APL Code Editor, as opposed to the Aggregation and Analysis agents' code areas, is used to create
generic APL code, that is, code that can be imported and used by several agents and workflows.

To open the APL Code Editor, click the New Configuration button in the upper left part of the Medi-
ationZone® Desktop window, and then select APL Code from the menu.

Hint! When double-clicking the dotted triangle in the lower right corner of the Configuration
window, the code area will be maximized. This is a useful feature when coding.

The generic code is imported by adding the following code in the agent code area, using the import
keyword:

import apl.<foldername>.<APL Code configurationname>

If generic code is modified in the APL Code Editor, the change will automatically be reflected in all
agents that contain this code the next time each workflow is executed.


Function Overloading is not supported, so make sure not to import functions with equal names; this
will cause the APL code to become invalid, even if the functions are located in different APL modules.
This also applies if the functions have different input parameters, for example, a(int x) and
a(string x).

Note! Not all functions will work in a generic environment, for example, functions related to
specific workflows or MIM related functions. This type of functionality must be included in the
agent code area instead.

Example 75.

An APL code definition, saved as MyGenericCode in the Default directory, is available
to an agent by adding the following into its code area:

import apl.Default.MyGenericCode;

Figure 328. The APL Code Editor

11.2.2.1.1. APL Code Editor Menu

The main menu changes depending on which Configuration type has been opened in the currently
active tab. There is a set of standard menu items that are visible for all Configurations and these are
described in Section 3.1.1, “Configuration Menus”.

The menu items that are specific for APL Code Editor are described in the following sections:

11.2.2.1.1.1. The File Menu

Item Description
Import... Select this option to import code from an external file. Note that the file has to reside on
the host where the client is running.


Export... Select this option to export your code to an *.apl file that can be edited in other code ed-
itors, or be used by other MediationZone® systems.

11.2.2.1.1.2. The Edit Menu

Item Description
Validate Compiles the current APL code. The status of the compilation is displayed in a dialog.
Upon failure, the erroneous line is highlighted and a message, including the line number,
is displayed.
Undo Select this option to undo your last action.

Redo Select this option to redo the last action you "undid" with the Undo option.

Cut Cuts selections to the clipboard buffer.

Copy Copies selections to the clipboard buffer.

Paste Pastes the clipboard contents.

Find... Displays a dialog where chosen text may be searched for and, optionally, replaced.

Find Again Repeats the search for the last string entered in the Find dialog.

11.2.2.1.2. APL Code Editor Buttons

The toolbar changes depending on which Configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all Configurations and these buttons are described
in Section 3.1.2, “Configuration Buttons”.

The additional buttons that are specific for APL Code Editor tabs are described in the following sections:

Button Description
Validate Compiles the current APL code. The status of the compilation is displayed in a dialog.
Upon failure, the erroneous line is highlighted and a message, including the line number,
is displayed.
Undo Select this option to undo your last action.

Redo Select this option to redo the last action you "undid" with the Undo option.

Cut Cuts selections to the clipboard buffer.


Copy Copies selections to the clipboard buffer.

Paste Pastes the clipboard contents.

Find... Displays a dialog where chosen text may be searched for and, optionally, replaced.

Find Again Repeats the search for the last string entered in the Find dialog.

Zoom Out Zoom out the APL Code Area by modifying the zoom percentage number that you find
on the toolbar. The default value is 100(%). Clicking the button between the Zoom
Out and Zoom In buttons will reset the zoom level to the default value. Changing the
view scale does not affect the configuration.
Zoom In Zoom in the APL Code Area by modifying the zoom percentage number that you find
on the toolbar. The default value is 100(%). Clicking the button between the Zoom
Out and Zoom In buttons will reset the zoom level to the default value. Changing the
view scale does not affect the configuration.

11.2.2.2. Configuration
The Analysis agent configuration window is displayed when you right-click the agent in a workflow
and select Configuration..., or when you double-click the agent.

Hint! When double-clicking the dotted triangle in the lower right corner of the Configuration
window, the code area will be maximized. This is a useful feature when coding.

The Configuration... dialog differs slightly depending on whether the Workflow Configuration is of batch
or realtime type. The differences are pointed out in Section 11.2.2.2.1, “Batch Workflows” and
Section 11.2.2.2.2, “Realtime Workflows”.

When the Analysis agent configuration window is confirmed, a compilation is performed in order to
extract the configuration data from the code.

Note! Complex code and formats may take a while to compile.

11.2.2.2.1. Batch Workflows

The configuration dialog consists of two tabs; Analysis and Thread Buffer.

If the routing of the UDRs (the udrRoute command) is left out, it will make the outgoing connection
point disappear from the window, disabling connection to a subsequent agent.


Figure 329. Analysis agent configuration window, Analysis tab.

11.2.2.2.1.1. Analysis Tab

The Analysis tab will be described here.

Code Area This is the text area where the APL code, used for UDR processing, is entered.
Code can be entered manually or imported. A third possibility is to use an import
statement to access the generic code created in the APL Code Editor.

Entered code will be color coded depending on the code type, and for input assist-
ance, a pop-up menu is available. See Section 11.2.2.3, “Syntax Highlighting and
Right-click Menu” for further information.

Below the text area there are line, column and position indicators, for help when
locating syntax errors.
Compilation Compiles the entered code to evaluate the validity. The status of the compilation
Test... is displayed in a dialog. Upon failure, the erroneous line is highlighted and a
message, including the line number, is displayed.
UDR Types Enables selection of UDR Types. One or several UDR Types that the agent expects
to receive may be selected. Refer to Section 11.2.2.5, “Input and Output Types”
for a detailed description.
Set To Input Automatically selects the UDR Type distributed by the previous agent.

For further information about the pop-up menu in the Code Area and the UDR Internal Format
browser, see Section 2.2.7, “Text Editor”.

11.2.2.2.1.2. Thread Buffer Tab

The use and settings of private threads for an agent, enabling multi-threading within a workflow, is
configured in the Thread Buffer tab. For further information, see Section 4.1.6.2.1, “Thread Buffer
Tab”.


11.2.2.2.2. Realtime Workflows

An Analysis agent may be part of batch as well as realtime workflows. The dialogs are identical, except
for the Thread Buffer tab which is not present in realtime workflows. Other than that, the agent is
configured in the same way.

There are, however, some restrictions and differences to consider when designing a realtime workflow
configuration:

• APL plug-ins with transaction logic are not allowed. The agent will not perform any validation
during workflow configuration - the workflow will abort upon activation of illegal code. Note, the
user must therefore keep track of what type of plug-ins are invoked.

• Published MIMs may only be of global type.

• In order to make functions thread-safe, they must be preceded by the synchronized keyword,
which makes it possible to alter global variables. Global variables can be read from any
function block, but to avoid race conditions with functions updating them, they must only be
accessed from within synchronized functions; see the sketch after this list. See the section
about the Synchronized Keyword in the APL Reference Guide for further information.

• Synchronized functions cannot utilize the udrRoute command.

• Function blocks related to batch handling cannot be used.

See the APL Reference Guide for further details on the specific commands.
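
The following is a minimal, hedged sketch of the synchronized restriction mentioned in the list above. The variable and function names are illustrative only, and the C-style function declaration follows the examples elsewhere in this guide; see the APL Reference Guide for the exact syntax.

// Global variable shared between UDRs; it is only updated from within a
// synchronized function to avoid race conditions.
int totalUDRs = 0;

synchronized void countUDR() {
    totalUDRs = totalUDRs + 1;
}

consume {
    // Thread-safe update of the global variable.
    countUDR();

    // Routing is done here, since synchronized functions cannot utilize
    // the udrRoute command.
    udrRoute(input);
}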

11.2.2.3. Syntax Highlighting and Right-click Menu


In the code area, the different parts of the code are color coded according to type, for easier identification,
and when right-clicking in the code area, a context-sensitive pop-up menu appears, enabling easy
access to the most common actions you might want to perform.

11.2.2.3.1. Code Definition

The text is color coded according to the following definitions:

Brown - Strings

Blue - Functions

Cyan - Own (user defined) functions

Green - Types

Purple - Keywords

Orange - Comments

Hint! To refresh the text press Ctrl+Shift+L.


11.2.2.3.2. Right-click Menu

Figure 330. Text Editor Right-Click Menu

The right-click menu has the following options:

Font Size Sets the font size.


Cut Moves the selected text to the clipboard.
Copy Copies the selected text to the clipboard.
Paste Pastes the contents of the clipboard into the place where the insertion point has
been set.
Select All Selects all the text.
Undo Undoes your last action.
Redo Redoes the last action that you undid with Undo.
Find/Replace... Displays a dialog where chosen text may be searched for and, optionally, re-
placed.

You can also press the CTRL+H keys to perform this action.

Figure 331. Find/Replace Dialog

Quick Find Searches the code for the highlighted text.

You can also press the CTRL+F keys to perform this action.
Find Again Repeats the search for last entered text in the Find/Replace dialog.

You can also press the CTRL+G keys to perform this action.


Go to Line... Opens the Go to Line dialog where you can enter which line in the code you
want to go to. Click OK and you will be redirected to the entered line.

You can also press the CTRL+L keys to perform this action.
Show Definition If you right click on a function in the code that has been defined somewhere
else and select this option, you will be redirected to where the function has
been defined.

If the function has been defined within the same configuration, you will simply
jump to the line where the function is defined. If the function has been defined
in another configuration, the configuration will be opened and you will jump
directly to the line where the function has been defined.

You can also click on a function and press the CTRL+F3 keys to perform this
action.

Note! If you have references to an external function with the same name
as a function within the current code, some problems may occur. The
Show Definition option will point to the function within the current
code, while the external function is the one that will be called during
workflow execution.

Show Usages If you right click on a function where it is defined in the code and select this
option, a dialog called Usage Viewer will open and display a list of the Con-
figurations that are using the function.

You can also select a function and press the CTRL+F4 keys to perform this
action.
UDR Assistance... Opens the UDR Internal Format Browser from which the UDR Fields may be
inserted into the code area.

You can also press the CTRL+U keys to perform this action.
MIM Assistance... Opens the MIM Browser from which the available MIM Resources may be
inserted into the code area.

You can also press the CTRL+M keys to perform this action.
Import... Imports the contents from an external text file into the editor. Note that the file
has to reside on the host where the client is running.
Export... Exports the current contents into a new file to, for instance, allow editing in
another text editor or usage in another MediationZone® system.
Use External Editor Opens the editor specified by the property mz.gui.editor.command in
the $MZ_HOME/etc/desktop.xml file.

Example 76.

Example:

mz.gui.editor.command = notepad.exe

APL Help... Opens the APL Reference Guide.


APL Code Completion Performs code completion on the current line. For more information about
Code Completion, see Section 11.2.2.3.3, “APL Code Completion”.


You can also press the CTRL+SPACE keys to perform this action.
Indent Adjusts the indentation of the code to make it more readable.

You can also press the CTRL+I keys to perform this action.
Jump to Pair Moves the cursor to the matching parenthesis or bracket.

You can also press the CTRL+SHIFT+P keys to perform this action.
Toggle Comments Adds or removes comment characters at the beginning of the current line or
selection.

You can also press the CTRL+7 keys to perform this action.
Surround With Adds a code template that surrounds the current line or selection:

• for Loop (CTRL+ALT+F)

• while Loop (CTRL+ALT+W)

• Debug Expression (CTRL+ALT+D)

• if Condition (CTRL+ALT+I)

• Block Comment (CTRL+ALT+B)

11.2.2.3.3. APL Code Completion

In order to make APL coding easier, the APL Code Completion feature will help you find and add
APL functions and UDR formats.

To access APL Code Completion, place the cursor where you want to add an APL function, press
CTRL+SPACE and select the correct function or UDR format. In order to reduce the number of hits,
type the initial characters of the APL function. The characters to the left of the cursor will be used as
a filter.

APL Code Completion covers:

• Installed APL functions.

• APL functions defined in APL Code configurations.

• APL functions created with MediationZone® Development Toolkit.

• Function blocks such as beginBatch and consume.

• Flow control statements such as while and if.

• Installed UDR formats.

• UDR formats created with MediationZone® Development Toolkit.

• User defined UDR formats.


Figure 332. APL Code Completion

11.2.2.4. Assignment and Cloning


All agents handling UDRs will forward UDR references on all outgoing links from the agent, that is,
the same instance of the UDR will be referred to from all agents in a workflow. This is acceptable if all
agents only read the UDR content. If an agent alters the content, it effectively alters the content for
all other agents that will receive the UDR. To avoid this behavior, the UDR must be cloned. Refer to
the following examples for more information.

Note! Cloning is a costly operation in terms of performance, therefore it must be used with care.


Example 77.

Figure 333. Assignment case 1.

In this example it is desired to alter the UDR in the Analysis agent and send it to Encoder_1,
while still sending its original value to Encoder_2. To achieve this, the UDR must be cloned.
The following code will create, alter, and route a cloned UDR on r_2 and will leave the original
UDR unchanged.

input=udrClone(input);
input.MyNumber=54;
udrRoute(input);

Note that input is a built-in variable in APL, and must be used for all UDRs entering the
agent.


Example 78.

An alternative solution to the one presented in the previous example is to clone the UDRs in an
Analysis agent and then route the UDRs to another Analysis agent in which amendment is per-
formed.

Figure 334. Assignment case 2.

Configurations in the Analysis_1 agent;

udrRoute(input,"r_3",clone);
input.MyNumber=54;
udrRoute(input,"r_2");

The incoming UDR is cloned and the clone is routed on to r_3. After that the original UDR can
be altered and routed to r_2.

11.2.2.5. Input and Output Types


UDRs entering an Analysis agent are referred to as input types, while UDRs leaving the agent are re-
ferred to as output types. The input types must be specified, while the output types are calculated from
the input types and the APL code.


Example 79.

Suppose there is a workflow with one Analysis agent, one input route streaming two different
input types (typeA and typeB), and two output routes. The two output routes take two different
UDR types - the first equaling one of the input types (typeA), and the second is a new UDR
type (typeC) which is created out of information fetched from the other input type (typeB).

Figure 335. Several UDR types can be routed to and from an Analysis agent.

The APL code:

if (instanceOf(input, typeA)) {
udrRoute((typeA)input,"r_2");
}
else {
typeC newUDR = udrCreate(typeC);
newUDR.field = ((typeB)input).field;
// Additional field assignments...
udrRoute(newUDR, "r_3");
}

The first udrRoute statement explicitly typecasts to the typeA type, while there is no
typecasting at all for the second udrRoute statement. This is because the input variable
does not have a known type (it can be either typeA or typeB), while newUDR is known by the
compiler to be of typeC.

Without any typecasting, the output type on r_2 would have been reported as an undefined UDR,
drudr, and the workflow would not have been valid.

11.2.2.6. Transaction Behavior - Batch Workflows


This section includes information about the Analysis agent transaction behavior. For information about
the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

11.2.2.6.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
End Batch The agent itself does not emit End Batch, however it can trigger the collector to do
so by calling the hintEndBatch method. See the sections about beginBatch and
endBatch in the APL Reference Guide for information about the hintEndBatch
method.
Cancel Batch The agent itself does not emit Cancel Batch however, it can trigger the collector to
do so by calling the cancelBatch method. See the section about cancelBatch in
the APL Reference Guide for further information.
Hint End Batch If the code contains a call to the method hintEndBatch this will make the agent
emit a Hint End Batch.

Note! Not all collectors can act upon a call on a hintEndBatch request. Please
refer to the user's guide for the respective Collection agent for information.
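
As a hedged illustration of the Hint End Batch case above, the APL code in the agent might request a batch split as follows. The splitting condition and the counter are illustrative only, and a no-argument hintEndBatch call is assumed; see the APL Reference Guide for the exact signature.

int udrCount = 0;

consume {
    udrRoute(input);

    // Illustrative condition: ask the collector to end the current batch
    // after every 10000 routed UDRs. Not all collectors act on the request.
    udrCount = udrCount + 1;
    if (udrCount >= 10000) {
        hintEndBatch();
        udrCount = 0;
    }
}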

11.2.2.6.2. Retrieves

The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.

Command Description
Begin Batch When a Begin Batch message is received, the agent calls the beginBatch function
block, if present in the code.
Cancel Batch When a Cancel Batch message is received, the agent calls the cancelBatch function
block, if present in the code.
End Batch When an End Batch message is received, the agent calls the drain and endBatch
function blocks, if present in the code.
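
As a hedged illustration of how the function blocks listed above might be laid out in the Analysis agent code area (the block bodies are placeholders only; see the APL Reference Guide for the exact semantics of each block):

beginBatch {
    // Called when a Begin Batch message is received.
}

consume {
    // Called for the data routed to the agent.
    udrRoute(input);
}

drain {
    // Called when an End Batch message is received, before endBatch.
}

endBatch {
    // Called when an End Batch message is received, after drain.
}

cancelBatch {
    // Called when a Cancel Batch message is received.
}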

11.2.2.7. Introspection
The introspection is the type of data an agent expects and delivers.

Produced types are dependent on input type and the APL code. The agent consumes byte arrays and
any UDR type selected from the UDR Types list.

11.2.2.8. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

The agent does not publish or access any additional MIM parameters. However, MIM parameters can
be produced and accessed through APL. For further information about available functions, see the
section MIM Related Functions in the APL Reference Guide.

11.2.2.9. Agent Message Events


The agent does not itself produce any message events. However by configuring the Analysis agent
with suitable APL code there are possibilities of producing message events. See the section Log Related
Functions in the APL Reference Guide for further information about available functions.

11.2.2.10. Debug Events


The agent does not itself produce any debug events. However, by configuring the Analysis agent with
suitable APL code there are possibilities of producing debug events. See the section Log Related
Functions in the APL Reference Guide for further information about available functions.


11.3. Categorized Grouping Agent


11.3.1. Introduction
This section describes the Categorized Grouping agent. This is a standard agent on the DigitalRoute®
MediationZone® Platform.

11.3.1.1. Prerequisites
The reader of this information has to be familiar with:

• The MediationZone® Platform

• Analysis Programming Language

• UDR structure and contents

11.3.1.2. User Documentation


The Analysis Programming Language description and syntax is listed in the MediationZone® APL
Reference Guide.

The Ultra Format Definition Language is described in the MediationZone® Ultra Reference Guide.

11.3.2. Categorized Grouping Agent


The Categorized Grouping agent is a processing agent designed to divide incoming data into categories.
Each category can contain data from one or more files. When a category closing condition is met, the
data collected in a category can be grouped with an external script and the result will be emitted into
the workflow.

11.3.2.1. Overview
11.3.2.1.1. Categorizing

Incoming data is divided into categories by the agent. All data assigned the same categoryId will
be accumulated in one category. Categorization is performed according to conditions set in APL in the
Analysis agent that usually precedes the Categorized Grouping agent.

11.3.2.1.2. Grouping

The incoming data accumulated in a category can be grouped into one or several files. Each categorized
set of data sent to the agent will have an associated filename, set either by default or in the APL con-
figuration. If no filename is configured in the preceding APL agent a DEFAULT_FILENAME will be
automatically set by the agent for each category. The default file will always be situated in the top
directory of its category.

11.3.2.1.3. Closing Conditions

A category will be closed as soon as one of the configured closing conditions is met. There are four
different closing conditions available to configure in the agent. It is also possible to use APL to add a
closing condition. To close a category from APL, it is enough that one UDR sets the closing condition
to true.

When a closing condition for a category is met, an external script is executed generating a file containing
all the data of one category. If the Grouping feature is not enabled, the filename associated with incoming
data is ignored and only one file is created for each category. The resulting file is emitted upon a
closing condition. This is useful when splitting is desired and grouping is not needed. The external
script will not be used.


11.3.2.1.4. Categorized Grouping Related UDR Types

The UDR types created by default in the Categorized Grouping agent can be viewed in the UDR In-
ternal Format Browser in the CatGroup folder. To open the browser, open an APL Editor, right-click
in the editing area and select UDR Assistance...; the browser opens.

11.3.2.2. Categorized Grouping Profile


Configurations concerning the Cat_Group agent are made in the Categorized Grouping profile.

The Categorized Grouping profile is loaded when you start a workflow that depends on it. Changes
to the profile become effective when you restart the workflow.

To open the editor, click the New Configuration button in the upper left part of the MediationZone®
Desktop window, and then select Categorized Grouping Profile from the menu.

Figure 336. Categorized Grouping Profile Editor

Working Directory Absolute path to the working directory to be used by the agent.

It will be used to keep data over multiple input transaction boundaries. The same
root directory can be used for several agents over several workflows. To enable
concatenation and grouping over several activations of the workflow, all Execu-
tion Contexts where the workflow can be activated, must be able to access the
same global root directory. Each agent will create and work in its own unique
subdirectory.

An agent may leave persistent data if closing conditions are not met, the workflow
aborts or Cancel Batch occurs in the last transaction.
Abort on Inconsistent Working Directory This setting controls the agent's behavior if its storage is
not in its expected state, that is, if the agent discovers that the persistent directory does not
have the expected contents.

• Not selected - Warn about the condition and continue from a clean state. Any
old data will be moved to a subdirectory.

• Selected - Abort the workflow. If the workflow aborts, manual intervention
may be needed. Regardless of this setting, warnings will be logged.

Activate Use of Grouping Activating this option causes the agent to spawn an external process running
the script identified by Script Path, described next. If the option is not used, the category
data will be concatenated.


Script Path Specifies the external script to be used for grouping operations.
Script Arguments Used to state the order and contents of arguments and flags in the user defined
script. The reserved words %1 and %2 state the position in the call where the
arguments are expected.

%1 This reserved word will during execution be replaced with the target file
the script should create (including absolute path)
%2 An absolute path to a directory which contains the files (and potential sub-
directories) that should be grouped. The agent guarantees that this directory
does not contain any other files or directories except for those that are subject
to grouping.

Example 80. Type Script

#!/bin/sh
cd $2          # Go to the directory stated in %2
tar cf $1 *    # Tar the file contents of %2
gzip -9 $1     # Gzip the tarred file
mv $1.gz $1    # Rename back to the previous filename
exit 0         # Exit

Byte Count Specifies the byte count closing condition for the agent. This field will have to
be set to a value larger than zero.
File Count Specifies the file count closing condition for the agent. This value is optional.
Closing Interval Specifies the closing interval in seconds for the agent. This value is optional.

After each timeout, all categories will be closed and the timer will be moved
forward according to the timeout interval.
Close on Deactivation Setting this option will cause the agent to emit its data when the last file has
been finished by the workflow. If a workflow aborts before all categories have been committed and
this check box is enabled, the agent will try to log a warning in the System Log stating that data
remains in the persistent storage.

If more than one Closing Condition is reached, the condition with the highest number will be
reported.

0 - Timeout
1 - This is the last transaction during this activation
2 - APL code requested closing
3 - The input file count limit is reached
4 - The input file size limit for this category is reached or exceeded

11.3.2.3. Configuration
The Categorized Grouping agent configuration window is displayed when you right-click the agent in a
workflow and select Configuration..., or when you double-click the agent.


Figure 337. Categorized Grouping agent, Grouping tab.

Browse... Select the Browse... button to open the Configuration Selection dialog. Browse
for and select the preferred Profile to be added to the agent.
Force Single UDR If this is disabled, the output files will automatically be divided into multiple
UDRs per file, in suitable block sizes.

11.3.2.4. Transaction Behavior


11.3.2.4.1. Emits

The agent does not emit any commands.

11.3.2.4.2. Retrieves

Cancel Batch If a cancelBatch is emitted by any agent in the workflow, all data in the current trans-
action will be disregarded. No closing conditions will be applied.

11.3.2.5. Introspection
The agent receives UDRs of CGAgentInUDR type and emits UDRs of CGAgentOutUDR type.

11.3.2.6. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

11.3.2.6.1. Publishes

Using Clean Storage This MIM parameter is used if an inconsistency is detected in the persistent storage
and the agent is configured to continue with a clean state. The MIM can be true
during the first transaction of an activation but will always be false on the
subsequent transactions within the same activation.

Using Clean Storage is of the boolean type and is defined as a header MIM context type.

11.3.2.6.2. Accesses

Source File Count Received from the collector. This is accessed if Close on Deactivation is enabled.

APL offers the possibility of both publishing and accessing MIM resources and values. For a
listing of general MediationZone® MIM parameters, see Section 2.2.10, “Meta Information
Model”.


11.3.2.7. Debug Events


Debug messages are dispatched when debug is used. During execution, the messages are shown in the
Workflow Monitor and can also be stated according to the configuration done in the Event Notification
Editor.

The agent does not itself produce any debug events. However, APL offers the possibility of producing
events.

11.3.3. Example with Categorized Grouping Agent


In a workflow, the Cat_Group agent is usually connected to one Analysis agent before and one Analysis
agent after it.

Figure 338. Typical Workflow Containing a Categorized Grouping Agent

11.3.3.1. Analysis_1
In the first Analysis agent a UDR of the type CGAgentInUDR is created and populated.

The following example shows a possible Analysis_1 agent configuration:


Example 81.

consume {

    //Create CGAgentInUDR.

    CatGroup.CGAgentInUDR udr = udrCreate(CatGroup.CGAgentInUDR);

    //Set categoryId, data and filename UDR values.

    debug(input.categoryID);
    udr.categoryID = (string)input.categoryID;
    udr.data = input.OriginalData;

    //When "Activate Use of Grouping" is enabled in the Cat_Group profile,
    //this file name will be used for the grouped data in the tar-file.
    udr.fileName = "IncomingFile";

    //When closeGroup is set to "true" the category can be closed from APL,
    //else settings in the Cat_Group profile will be used.
    udr.closeGroup = false;

    //Route UDR
    udrRoute(udr);
}

When the CGAgentInUDR is created the field structure can be viewed from the UDR Internal Format
Browser. For further information, see Section 11.3.2.1.4, “Categorized Grouping Related UDR Types”.

11.3.3.2. Cat_Group_1
The Cat_Group agent is configured via the GUI. For configuration instructions see Section 11.3.2.2,
“Categorized Grouping Profile”.

Figure 339. Typical Configuration of a Categorized Grouping Profile

When configured correctly, incoming UDRs with different categoryIDs are collected and handled
by the agent, which returns UDRs where each one contains data for a single categoryID.


The outgoing data is collected in a CGAgentOutUDR with the following structure.

Example 82.

internal CGAgentOutUDR {
    string categoryID;
    int closingCondition;  //Indicates the closing condition that emits the file.
    bytearray data;
    boolean isLastPartial; //True if last UDR of the input file.
    int partialNumber;     //Sequence number of the UDR in the file. 1 for the
                           //first, 2 for the second, and so on.
};

11.3.3.3. Analysis_2
In the Analysis_2 agent the CGAgentOutUDR will be processed in the way the agent is configured.
In this example one directory, one directory delimiter and one file name are created. The
CGAgentOutUDR information is thereafter put in a MultiForwardingUDR and finally routed to the
Disk_2 agent.


Example 83.

persistent int counter = 1;

consume {
    //Create a fntUDR
    FNT.FNTUDR fntudr = udrCreate(FNT.FNTUDR);

    //Create a directory name
    fntAddString(fntudr, "CG_Directory");

    //Add a directory delimiter.
    fntAddDirDelimiter(fntudr);

    //Create a filename.
    fntAddString(fntudr, "File_" + (string)counter);

    //Create a MultiForwardingUDR.
    FNT.MultiForwardingUDR multiUDR = udrCreate(FNT.MultiForwardingUDR);

    //Add the fntUDR created above containing the directory,
    //directory delimiter and the file name.
    multiUDR.fntSpecification = fntudr;

    //Add the data from the CGAgentOutUDR to the MultiForwardingUDR.
    multiUDR.content = input.data;

    //Print closingCondition:
    //0 = timeout,
    //1 = close on deactivation,
    //2 = APL requested closure,
    //3 = input file count limit is reached,
    //4 = the input file size limit is reached
    debug("Closing condition= " + input.closingCondition);

    udrRoute(multiUDR);
    counter = counter + 1;
}

11.4. Compression Agents


11.4.1. Introduction
This section describes the Decompressor and Compressor agents. These agents are standard agents on
the DigitalRoute® MediationZone® platform.


The functions and components available in your installation depend upon the features in your chosen
license package. As such, certain features and functions described in this document may not be available
to you. Please consult your system administrator for further information.

11.4.1.1. Prerequisites
The reader of this User's Guide should be familiar with:

• The MediationZone® Platform

• APL

11.4.2. Overview
The Decompressor agent receives compressed data batches in Gzip format, extracts them, and routes
the decompressed data forward in the workflow. An empty or corrupt batch is handled by the agent
according to your configuration.

The Compressor agent receives data batches, compresses the data to Gzip format and routes the com-
pressed data forward in the workflow.

11.4.3. Decompressor Agent


11.4.3.1. Configuration
The Decompressor agent configuration view is opened from the Workflow Editor. Either double-click
on the agent in the workflow template, or right-click on the agent and select Configuration....

Figure 340. The Decompressor Agent Configuration View

Compression Select decompression algorithm:

• No Compression: The agent will not decompress the files.

• Gzip: The agent will decompress the files by using gzip (Default).

Error Hand- Select how you want to handle errors for files that cannot be decompressed:
ling
• Cancel Batch: The agent will cancel the batch when a file cannot be decompressed
(Default). The default setting for Cancel Batch is to abort the workflow immediately,
but you can also configure the workflow to abort after a certain number of consec-
utive Cancel Batches, or to never abort the workflow on Cancel Batch. See Sec-
tion 4.1.8, “Workflow Properties” for further information about Workflow Proper-
ties.

• Ignore: The agent will ignore an input batch when a file cannot be decompressed,
and a log message will be generated in the System Log, see Section 7.11, “System
Log”.


Note! If you select the Ignore option, data will continue to be sent until
an error occurs in a batch, which means that erroneous data might be routed
from the Decompressor agent.

11.4.3.2. Transaction Behavior


This section includes information about the Decompressor agent's transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

11.4.3.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Cancel Batch Emitted if a file cannot be decompressed. If you have configured the workflow to
abort after a certain number of consecutive Cancel Batches, or never to abort on Cancel
Batch, in the Workflow Properties, the collection agent will send the file to ECS along
with a message describing the error. See Section 4.1.8.2, “Error Tab” and Section 16.1,
“Error Correction System” for further information.

11.4.3.2.2. Retrieves

The agent does not retrieve anything.

11.4.3.3. Introspection
Introspection is the type of data that an agent both recognizes and delivers.

The Decompressor agent consumes and delivers bytearray types.

11.4.3.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

11.4.3.4.1. Publishes

The agent does not publish any MIM resources.

11.4.3.4.2. Accesses

MIM Parameter Description


Source Pathname This parameter is read from the workflow's collecting agent, in order to create
accurate Agent Message Events.
Source Filename This parameter is read from the workflow's collecting agent, in order to create
accurate Agent Message Events.

11.4.3.5. Agent Message Events


The agent generates event messages according to your configuration in the Event Notification editor.

• Ignored batch

OR


Ignored batch. The file <filepath/filename> could not be decompressed.

Reported when a batch cannot be decompressed and Error Handling is configured with ignore,
see Section 11.4.3.1, “Configuration”.

Note! The event message includes the file name if the appropriate MIM parameter is provided
by the collecting agent. For example, if data is collected from a database, no MIM parameter
is provided.

11.4.3.6. Debug Events


There are no debug events for this agent.

11.4.4. Compressor Agent


11.4.4.1. Configuration
The Compressor agent configuration view is opened from the Workflow Editor. Either double-click
on the agent in the workflow template, or right-click on the agent and select Configuration....

Figure 341. The Compressor Agent Configuration View

Compression Select Compression algorithm:

• No Compression: The agent will not compress the files.

• Gzip: The agent will compress the files by using gzip (Default).

Compression Level Select the Compression level. The speed of compression is regulated using
a level, where "1" indicates the fastest compression method (less compres-
sion) and "9" indicates the slowest compression method (best compression).
The default compression level is "6".
Produce Empty Check this to make sure that an archive is produced and routed forward
Archives even if it has no content.

11.4.4.2. Transaction Behavior


This section includes information about the Compressor agent's transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

11.4.4.2.1. Emits

The Compressor agent does not emit any commands.

11.4.4.2.2. Retrieves

The Compressor agent does not retrieve any commands.


11.4.4.3. Introspection
Introspection is the type of data that an agent both recognizes and delivers.

The Compressor agent consumes and delivers bytearray types.

11.4.4.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

11.4.4.4.1. Publishes

The Compressor agent does not publish any MIM resources.

11.4.4.4.2. Accesses

The Compressor agent does not access any MIM resources.

11.4.4.5. Agent Message Events


The Compressor agent does not generate any message events.

11.4.4.6. Debug Events


There are no debug events for the Compressor agent.

11.5. Decoder Agent


The Decoder agent converts raw data to internal UDRs, based on information from a UFDL decoder
definition or MZ format tagged UDRs (the MediationZone® specific general format).

11.5.1. Configuration
The Decoder configuration window is displayed when you right-click on a Decoder agent and select
the Configuration... option or when you double-click on the agent.

Figure 342. Decoder configuration window, Decoder tab.

Decoder List of available decoders introduced via the Ultra Format Editor, as well as the default
built-in decoder for the MediationZone® internal format (MZ format tagged UDRs).
If the compressed format is used, the decoder will automatically detect this.


If the decoder for the MZ format tagged UDRs format is chosen, the Tagged UDR
type list is enabled.
Tagged UDR type List of the available internal UDR formats, stored in the Ultra and Code servers.

On Error Options to control how to react upon decoding errors.

• Cancel Batch - The entire batch is cancelled. This is the standard behavior.

• Route Raw Data - Route the remaining, undecodable, data as raw data. This option
is useful if you want to implement special error handling for batches that are partially
processed.

Full Decode If enabled, the UDR will be fully decoded before output from the decoder agent. This
action may have a negative impact on performance, since not all fields may be accessed
in the workflow, making decoding of all fields in the UDR unnecessary. If it is important
that all decoding errors are detected, this option must be enabled.

If this option is disabled (default), the amount of work needed for decoding is minimized
using "lazy" decoding of field content. This means that the actual decoding work might
not be done until later in the workflow, when the field values are accessed for the first
time. Corrupt data (that is, data for which decoding fails) might not be detected during
the decoding stage, but can however cause a workflow to abort at a later processing
stage.

Note! The use and settings of private threads for an agent, enabling multi-threading within a
workflow, are configured in the Thread Buffer tab. For further information, see Section 4.1.6.2.1,
“Thread Buffer Tab”.

11.5.2. Transaction Behavior


This section includes information about the Decoder agent transaction behavior. For information about
the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

11.5.2.1. Emits
The agent emits commands that change the state of the file currently processed.

Command Description
Cancel Batch Emitted on failure to decode the received data.


11.5.2.2. Retrieves
The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.

Command Description
End Batch Unless this has been generated by a Hint End Batch message, the decoder verifies
that all the data in the batch was decoded. When using a constructed or blocked decoder,
the decoder does additional validation of the structural integrity of the batch.

11.5.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces one or more UDR types depending on configuration and consumes bytearray
type.

11.5.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

11.5.5. Agent Message Events


There are no message events for this agent.

11.5.6. Debug Events


Debug messages are dispatched when debug is used. During execution, the messages are shown in the
workflow monitor and can also be stated according to the configuration done in the event configurations.

For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.

• Splitting batch

Emitted when the Hint End Batch event occurs.

11.6. Duplicate Batch Agent


11.6.1. Introduction
This section describes the Duplicate Batch Detection agent. This is a standard agent on the Digital-
Route® MediationZone® platform.

11.6.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform


11.6.2. Duplicate Batch Detection Agent


The Duplicate Batch Detection agent allows duplication control on passed data batches. Each data
batch will be tested against already stored meta data from previous batches, to see if it is considered
to be a duplicate.

Meta data for data batches are kept for a configurable number of days. If the meta data of a batch has
been removed, a duplicate of this batch can no longer be detected.

If any duplicates are detected, a message is logged to the System Log, and the duplicate batch is can-
celled, which may cause the workflow to abort. For further information, see Section 7.11, “System
Log”.

To monitor the duplicate batches, the Duplicate Batch Inspector may be used. For further information,
see Section 11.6.3, “Duplicate Batch Inspector”.

It is only appropriate to use the agent after agents that may create duplicate batches. Normally this is
a file based collection agent. Several workflows may utilize the same Duplicate Batch profile. In this
case, their batches will be mutually compared.

11.6.2.1. Profile
The Duplicate Batch Detection profile is loaded when you start a workflow that depends on it. Changes
to the profile become effective when you restart the workflow.

To configure a Duplicate Batch Detection profile, click the New Configuration button in the upper
left part of the MediationZone® Desktop window, and then select Duplicate Batch Profile from the
menu.

Figure 343. Duplicate Batch Profile Configuration.

If the Detection Method is modified after a Duplicate Batch Detection agent has been executed,
the already stored information will not match any records processed with the new profile version.

Max Cache Age Enter the number of days you want to keep the batch information in the
(Days) database.
Use CRC Check to create a checksum from the batch file data. The checksum is compared
with the checksums of other batch files when searching for duplicate batch
files.
Use Byte Count Check to compare the stored byte count with the number of bytes in the batch file.


Use MIM Value Check to use a MIM value for duplicate detection.

A MIM name defined in the Named MIMs table is compared with a MIM
Resource that can be connected both with batches and workflows.
Named MIMs Use the Add button to create a list of user defined MIM names.

When the Duplicate Batch Detection agent is configured, each MIM name is
assigned to one MIM Resource that detection will be applied for.

Within the same workflow configuration the profiles configured to Use MIM
Value must map to the same MIM names.

If a detected batch file is empty, information about it is stored in the database.

11.6.2.2. Configuration
The Duplicate Batch Detection agent configuration window is displayed when a Duplicate Batch De-
tection agent is double-clicked or right-clicked, selecting Configuration...

Figure 344. Duplicate Batch Detection agent configuration window .

Profile A list of all defined Duplicate Batch profiles.

All workflows in the same workflow configuration can use separate Duplicate
Batch profiles, however it is not possible to map MIM Values with different names
via different profiles. The mapping of MIM values regarding Duplicate Batch agent
is done in the agent for the entire workflow configuration.

In order to appoint different workflow profiles, the Field Settings found in the
Workflow Properties dialog must be set to Default. When this is done each
workflow in the Workflow Table can be appointed the correct profile.
Named MIMs A list of user defined MIMs as defined in the profile.
MIM Resource A list of existing MIM values to be mapped against the user defined Named MIMs.
Logged MIMs Selected MIM values to be used in duplicate detection message.


11.6.2.3. Transaction Behavior


This section includes information about the Duplicate Batch Detection agent transaction behavior. For
information about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transac-
tions”.

11.6.2.3.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Cancel Batch Emitted if a duplicate is found.

11.6.2.3.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Begin Batch Removes all timed out detection data from the database cache.
End Batch Compares the incoming batch against the ones existing in the database. If a duplicate
is found, a cancel mark is emitted and an error message is written in the System Log.
If no duplicate is found, the data batch information for the current batch is stored in
the database.

11.6.2.4. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces and consumes bytearray types.

11.6.2.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

11.6.2.5.1. Publishes

The agent does not itself publish any MIM resources.

11.6.2.5.2. Accesses

MIM Parameter Description


User selected This MIM parameter contains values from selected MIM resources in the Logged
MIMs and Named MIMs lists.

Read at Start Batch.

11.6.2.6. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Duplicate batch detected

Reported when a duplicate batch has been identified.


11.6.2.7. Debug Events


There are no debug events for this agent.

11.6.3. Duplicate Batch Inspector


The Duplicate Batch Inspector is used for viewing the meta data cache used for duplicate checking.
To open the Duplicate Batch Inspector, click the Tools button in the upper left part of the Medi-
ationZone® Desktop window, and then select Duplicate Batch Inspector from the menu.

Initially, the window is empty. To populate it, search criteria needs to be specified in the Search Du-
plicate Batches dialog. Select Search... from the Edit menu to access the dialog.

Figure 345. Search Duplicate Batches dialog.

Profile Select the profile which corresponds to the data of interest.


Creation Period Option to search for data created during a certain time period.

Select OK to view the matching data of the search.

Figure 346. Duplicate Batch Inspector window.

Edit menu Delete... Removes selected entry from the list. If no entry is selected, all entries
are deleted.
Edit menu Search... Displays the Search Duplicate Batches dialog, where search criteria
may be modified.
Edit menu Show MIM Val- Shows all MIM values for the selected duplicate batch.
ues
Show Batches  Matching entries are bundled into groups of 500. This list shows which
group, out of how many, is currently displayed. An operation targeting
all matching entries will affect all groups.


Note that there is a limit of 100 000 entries per match. If the match exceeds
this limit, any bulk operation (deletion, etc.) must be repeated for each
multiple of 100 000.
ID The index of the batch in the search results.
Txn ID The transaction ID of the batch.
Creation Time The time when the transaction was created.
MIM Values The MIM data stored for the batch.

11.7. Duplicate UDR Detection Agent


11.7.1. Introduction
This section describes the Duplicate UDR Detection agent. This is a standard agent on the DigitalRoute®
MediationZone® Platform.

11.7.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• UDR structure and content

11.7.2. Duplicate UDR Detection Agent


The Duplicate UDR Detection agent provides duplication control on incoming UDRs. Each new UDR
is compared with the UDRs that are already stored, to evaluate if it is a duplicate.

If a duplicate is found, a message is automatically logged in the System Log, and the UDR is marked
as erroneous and routed on a user defined route, for instance to ECS. If the UDR is routed to ECS, an
automatically generated ECS Error Code, DUPLICATE_UDR, is assigned to the UDR, which enables
searching for duplicate UDRs in ECS.

Duplication comparison is not based on the content of a complete UDR but on the content of the fields
selected by the user.

Note! If the same file happens to be reprocessed, all UDRs will be considered as being duplicates,
unless the cache is full, in which case a part of the cache will be cleared and the corresponding
amount of UDRs will be considered as non-duplicates. If the file contains a considerable number
of UDRs, the process of inserting all of them in ECS may be time-consuming.

Having a Duplicate Batch agent prior to the Duplicate UDR Detection agent will only make the
problem worse. The Duplicate Batch agent will not detect that the batch is a duplicate until the
end of the batch. At that point all UDRs have already passed the Duplicate UDR Detection agent
and are inserted, as duplicates, into ECS. Since the Duplicate Batch agent will flag for a duplicate
batch, the batch is removed from the stream forcing the Duplicate UDR Detection agent to also
remove all UDRs from ECS.

11.7.2.1. Profile
A Duplicate UDR Detection agent is configured in two steps. First a profile has to be defined, then
the regular configurations of the agent are made.


The Duplicate UDR profile is loaded when you start a workflow that depends on it. Changes to the
profile become effective when you restart the workflow.

To create a new Duplicate UDR profile configuration, click the New Configuration button in the upper
left part of the MediationZone® Desktop window, and then select Duplicate UDR Profile from the
menu.

Figure 347. Duplicate UDR Profile Configuration

Storage Host In the drop down menu, the preferred storage host, where duplicate UDRs are to be
stored, can be selected. The choice for storage of duplicate repositories is either on
a specific Execution Context or Automatic. If Automatic is selected, the same Ex-
ecution Context used by the running workflow will be selected, or when the Duplicate
UDR Inspector is used, the Execution Context will be selected automatically.

Note! The workflow must be running on the same Execution Context as its
storage resides, otherwise the Duplicate UDR Detection Agent will refuse to
run. If the storage is configured to be Automatic, its corresponding directory
must be a file system shared between all the Execution Contexts.

Directory An absolute path to the directory on the selected storage host, in which to store the
duplicate cache.
Max Cache Age (days)  The maximum number of days to keep duplicated UDRs in the cache. The age
of a UDR stored in cache is either calculated from the Indexing Field (timestamp) of a UDR in the
latest processed batch file, or from the system time, depending on whether Based on system arrival
time is selected or not.

If the Date Field option, below, is not selected, this field will be deactivated and ignored, and the
cache size may only be configured using the Max Cache Size setting. The default is 30 days.

Note! If the UDRs are out of range, this will be logged in the System Log.


Based on system arrival time  If selected (it is unselected by default), the calculation of a UDR's age
will be based on the time when the UDR arrived in the system. In case of a longer system idle time,
this might have consequences, as described in Section 11.7.2.1.3, “Using Indexing Field Instead of
System Time”.

If not selected, the UDR age calculation will instead be made towards the latest Indexing Field
(timestamp) of a UDR that is included in the previously processed batch files.

See Figure 348, “UDR Removed from Cache based on Indexing Field or System Time” to get an
overview of the difference when calculating UDR age using timestamp and system time.

This option is used in combination with Date Field and Indexing Field.

Max Cache Size (thousands)  The maximum number of UDRs to store in the duplicate cache. The
value must be in the range 100-9999999 (thousands), default is 5000 (thousands). The cache is made
up of containers covering 50 seconds each, and for every incoming UDR it is determined in which
cache container the UDR will be stored.

During the initialization phase, the agent checks whether the cache is full or not. If the check indicates
that there will be less than 10% of the cache available, cache containers will start to be cleared until
10% free cache is reached, starting with the oldest container. Depending on how many UDRs are stored
in each container, this means that different amounts of UDRs may be cleared depending on the setup.
If the index field happens to have the same value in all the UDRs, all of the UDRs in the cache will be
cleared. See the worked example after this table.

Note! If you have a very large cache size, it may be a good idea to split the
workflows in order to preserve performance.

Type The UDR type the agent will process.


Indexing Field The UDR field used as an index in the duplicate comparison. Fields of type long
and date are valid for selection.

For performance reasons, this field should preferably be either an increasing sequence
number, or a timestamp with good locality. This field will always be implicitly
evaluated.
Date Field  If selected (default), the indexing field will be treated as a timestamp instead of a sequence
number. This option has to be selected to be able to set the maximum age of UDRs to keep in the cache
in the Max Cache Age (days) field above.

Note! If the selected indexing field is a timestamp that is 24 h or more ahead of the system
time, the workflow will abort.

Checked Fields The fields to use for the duplication evaluation, when deciding whether or not a UDR
is a duplicate.

If the Checked Fields or Indexing Field are modified after an agent is ex-
ecuted, the already stored information will be considered useless the next time
the workflow is activated. Hence, duplicates will never be found amongst the
old information since other type of meta data has replaced them.
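
As a worked example of the Max Cache Size behaviour (the figures are hypothetical): with Max Cache
Size set to 1000 (thousands), that is 1,000,000 UDRs, clearing starts when the initialization check finds
fewer than 100,000 (10%) free slots, and the oldest 50-second containers are then cleared, one at a
time, until at least 100,000 slots are free again.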


11.7.2.1.1. Duplicate UDR Profile Menu

The main menu changes depending on which configuration type has been opened in the currently
active tab. There is a set of standard menu items that are visible for all configurations, and these are
described in Section 3.1.1, “Configuration Menus”.

There is one menu item that is specific for Duplicate UDR profile configurations, and it is described
in the coming section:

11.7.2.1.1.1. The Edit Menu

Item Description
External References To Enable External References in an agent profile field. Please refer to Sec-
tion 9.5.3, “Enabling External References in an Agent Profile Field” for further
information.

11.7.2.1.2. Duplicate UDR Profile Buttons

The toolbar changes depending on which Configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all Configurations and these buttons are described
in Section 3.1.2, “Configuration Buttons”.

There are no additional buttons for Duplicate UDR profile.

11.7.2.1.3. Using Indexing Field Instead of System Time

The "cache time window" (see Figure 348, “UDR Removed from Cache based on Indexing Field or
System Time”) decides whether a UDR shall be removed from the cache or not. The maximum number
of days to store a UDR in cache is retrieved from the Max Cache Age configuration, and each time a
new batch file is processed (and the age of duplicate UDRs is calculated) the "cache time window"
will be moved forward and old UDRs will be removed.

Calculation of the UDR age can be done in two ways:

• Using the latest indexing field (timestamp) of a UDR that is included in the previously processed
batch files.

• Using system time.

The following figure illustrates the difference:


Figure 348. UDR Removed from Cache based on Indexing Field or System Time

If the system has been idle for an extended period of time, there will be a "delay" in time. So when a
new batch file is processed, and if system time is used for UDR age calculation, the "cache time window"
will be moved forward with the delay included, and this might result in all UDRs being removed from
the cache, as shown in Figure 348, “UDR Removed from Cache based on Indexing Field or System
Time”. The consequence of this is that the improperly removed UDRs will be considered as non-du-
plicates and, hence, might be handled even though they still are duplicates.

If the indexing field is used instead, a more proper calculation will be done, since the "system delay
time" will be excluded. In this case only UDR 1 and UDR 2 will be removed.

11.7.2.1.4. Enabling External Referencing

External References can be used with the fields:

• Directory

• Max Cache Age

• Max Cache Size

External Referencing of profile fields is enabled from the profile view's main menu. For detailed
instructions, see Section 9.5.3, “Enabling External References in an Agent Profile Field”.

11.7.2.2. Configuration
The Duplicate UDR Detection configuration window is displayed when double-clicking on a Duplicate
UDR Detection agent or right-clicking the agent and selecting Configuration....


11.7.2.2.1. Dup UDR Tab

Figure 349. Duplicate UDR Detection configuration window, Dup UDR tab.

Profile  Select the Duplicate UDR profile you want the agent to use.

All workflows in the same workflow configuration can use separate Duplicate UDR profiles, if that is
preferred. In order to do that, the profile must be set to Default in the Workflow Table tab found in
the Workflow Properties dialog. After that, each workflow in the Workflow Table can be assigned the
correct profile.

Duplicate Route  Indicates on which route to send detected duplicates.

The list is not populated with output routes until the routes have been created and the dialog is reopened.

Batch Source Information  A list of MIM values, used when creating the error information for the ECS
(if routed to an ECS Forwarding agent). To display it from ECS, double-click the Error Code for the
UDR (that is, DUPLICATE_UDR for all duplicates, regardless of which workflow or profile they
originate from). Also, note that MIM values are selected from the original UDR, not the duplicate.

The use and settings of private threads for an agent, enabling multi-threading
within a workflow, is configured in the Thread Buffer tab. For further information,
see Section 4.1.6.2.1, “Thread Buffer Tab”.

11.7.2.3. Transaction Behavior


11.7.2.3.1. Emits

This agent does not emit any commands.

11.7.2.3.2. Retrieves

This agent does not retrieve any commands.

11.7.2.4. Introspection
The agent produces and consumes UDR types selected from the UDR Type list.

11.7.2.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

11.7.2.5.1. Publishes

Detected Duplicates  This MIM parameter contains the number of detected duplicates in the current
batch.

Detected Duplicates is of the int type and is defined as a trailer MIM context type.

11.7.2.5.2. Accesses

User selected The agent accesses user selected values to log in ECS.

11.7.2.6. Agent Message Events


• File Repository Initialized

Reported when the agent has successfully opened its duplicate detection repository (cache).

• Number of number UDRs were duplicates (number were too old)

Reported after each processed batch. The last number denotes UDRs that were too old to be compared
against, that is, they were older than the configured maximum age.

11.7.2.7. Agent Debug Events


There are no debug events for this agent.

11.7.3. Duplicate UDR Inspector


To open the Duplicate UDR Inspector, click the Tools button in the upper left part of the Medi-
ationZone® Desktop window, and then select Duplicate UDR Inspector from the menu.

Note! Ensure that the Read Only check box is selected unless you need to delete batches from
the cache. If not selected, the profile will be locked and workflows using the profile will not be
able to write to the cache.

11.7.3.1. Search for Duplicate UDR Batches


Initially, the inspector window is empty. To populate the window, select Search... from the Edit menu
to display the Search Duplicate UDR Batches dialog.

Figure 350. The Duplicate UDR Inspector, Search Batches window.

Profile Select the profile that corresponds to the data of interest.


Processed Period Select to search for batches processed during a certain time period.
Content Period Select to search with respect to time span of the indexing field in batches. This
option is only available if the selected profile has a timestamp indexing field.
MIM Criteria Select to use a regular expression to search for a selected MIM resource value.
Sort Order Select to specify the sort order when displaying the list of batches.
Lock Handling Disable Read Only if batches need to be deleted from the cache. Exclusive access
to the cache is required for deleting batches, meaning that if a currently running
workflow is using the selected profile, the workflow needs to be stopped to be
able to get exclusive access.

11.7.3.2. Inspect Duplicate UDR Batches


Once the search criteria has been specified, the Duplicate UDR Inspector window is populated with
matching batches.

Figure 351. The Duplicate UDR Inspector.

File

Menu option Description


Save Deletes batches flagged for deletion.

Edit

Menu option Description


Delete Flags selected batches for deletion. Select Save to commit deletions.
Search Refer to Section 11.7.3.1, “Search for Duplicate UDR Batches”.

View

Menu option Description


Refresh Refresh search with previous search parameters.

Result List

Show Batches If a search results in a large number of batches, this enables switching between
different batches in the result list.
ID The index of the batch in the search results.
Txn ID The transaction ID of the batch.
Processed Date The date when the batch was processed.
MIM Values The MIM data stored for the batch. Double-click this field to view all MIM
values.


Content Start / End The Duplicate UDR Detection agent stores batches in date segments. The
columns show the date range of the actual data that was duplication checked
during transaction. If the transaction contains dates older than the Max Cache
Age, configured in the Duplicate UDR profile, Outside range is displayed.

If both Start and End show Outside range, all dates in the transaction were
older than Max Cache Age. UDRs that are outside range are always routed
as non-duplicates since there is no duplicate data to compare them to.

These columns are only visible if Date Field is enabled in the Duplicate UDR
profile.
Records The number of records (UDRs) processed for a given batch.
Duplicates The number of duplicates found in the batch.

11.8. Encoder Agent


The Encoder agent converts internal UDRs to raw data. It can be used both in batch and real-time
workflows. The main difference between these two modes of operation is that within a batch workflow
the Encoder agent is capable of adding headers and trailers.

11.8.1. Configuration - Batch Workflow


The Encoder configuration window is displayed when you right-click on an Encoder agent and select
the Configuration... option, or if you double-click on the agent.

Figure 352. Encoder configuration window, Encoder tab.

Suppress Encoding  If enabled, the agent will not encode the incoming data. It expects a raw byte array
as the input type and will pass it through untouched. This mode is used when only a header and/or a
trailer is added to a data batch.

Encoder  List of available encoders introduced via the Ultra Format Editor, as well as the default
built-in encoders for the MediationZone® internal formats; MZ format tagged UDRs and MZ format
tagged UDRs (compressed). Using the compressed format will reduce the size of the UDRs significantly.
However, since compression will require more CPU, you should consider the trade-off between I/O
and CPU when choosing an encoder.

Note! The Header and Trailer tabs are described in Section 11.8.8, “Agent Services - Batch
Workflow”. The use and setting of private threads for an agent, enabling multi-threading within
a workflow, is configured in the Thread Buffer tab. For further information, see Section 4.1.6.2.1,
“Thread Buffer Tab”.

11.8.2. Configuration - Real-time Workflow


The Encoder configuration window is displayed when you right-click on an Encoder agent and select
the Configuration... option, or if you double click on the agent.


Figure 353. Encoder configuration window - Realtime workflow, Encoder tab.

Encoder List of available encoders introduced via the Ultra Format Editor, as well as the default
built-in encoders for the MediationZone® internal formats; MZ format tagged UDRs
and MZ format tagged UDRs (compressed). Using the compressed format will reduce
the size of the UDRs significantly. However, since compression will require more CPU,
you should consider the trade-off between I/O and CPU when choosing an encoder.

11.8.3. Transaction Behavior - Batch workflow


This section includes information about the Encoder agent transaction behavior. For information about
the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

11.8.3.1. Emits
This agent does not emit anything.

11.8.3.2. Retrieves
The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Begin Batch Possible headers defined in the Header tab are created and dispatched on all outgoing
routes before the first UDR is encoded.
End Batch Possible trailers defined in the Trailer tab are created and dispatched on all outgoing
routes after the last UDR has been encoded.

11.8.4. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray type and consumes the UDR types corresponding to the selected Encoder.
If Suppress Encoding is enabled bytearray type is consumed.

11.8.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

This agent does not publish nor access any MIM parameters.

11.8.6. Agent Message Events


There are no message events for this agent.

11.8.7. Debug Events


There are no debug events for this agent.


11.8.8. Agent Services - Batch Workflow


The agent utilizes two specialized services allowing the user to add header and trailer information into
each data batch.

They both offer the possibility of using MIM values, constants and user defined values in the header
or trailer. When selecting MIM resources, note that MIM values used in the data batch header are
gathered when a new batch begins, while MIM values used in the data batch trailer are gathered when
a batch ends. Thus, the numbers of outbound bytes, or UDRs, for any agent will always be zero if they
are referred to in data batch headers.

The windows for both header and trailer configuration are identical.

Figure 354. Encoder configuration window - Header tab.

Suppress On No Data Indicates if the header/trailer will be added to the batch even if the batch
does not contain any data (UDRs or byte arrays).
Value Click on the Add button to populate the columns with items to the header
or trailer of the file. They will be added in the order they are specified.

Figure 355. Add Header/Trailer Content dialog.

MIM Defined If enabled, a MIM value will be part of the header. Size and Padding must be entered
as well.

For data batch headers, the MIM values are gathered at beginBatch.

User Defined  If enabled, a user defined constant must be entered. If Size is empty or less than the
number of characters in the constant, Size is set to the number of characters in the constant. If Size is
greater than the length of the constant, Padding must be entered as well.
Pad Only If enabled, a string is added according to the value entered for Size, filled with
Padding characters.
Size Size must always be entered to give the item a fixed length. It can only be omitted
if User Defined is selected, in which case it will be calculated automatically.
Padding Character to pad any remaining space with. Either a user defined character can be
entered, or one of the four predefined/special characters can be selected (Carriage
return, Line feed, Space, Tabulator).
Alignment Left or right alignment within the allocated field size.
Date Format Enabled when a MIM of type date is selected. A Date Format Chooser dialog is
opened, where a date format may be entered.
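
For example (the values are hypothetical): a User Defined constant "HDR" with Size 8, Padding set
to Space and Alignment set to Left produces the 8-character header item "HDR" followed by five
spaces, while Alignment set to Right produces five spaces followed by "HDR".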

11.9. SQL Loader Agent


11.9.1. Introduction
This section describes the SQL Loader agent. This is a standard agent on the DigitalRoute® Medi-
ationZone® Platform.

11.9.1.1. Prerequisites
The reader of this information has to be familiar with:

• The MediationZone® Platform

• Structured Query Language (SQL)

• UDR structure and contents

11.9.1.2. User Documentation


The Ultra Format Definition Language and UDR format are described in the MediationZone® Ultra
Reference Guide.

11.9.2. Overview
The SQL Loader agent is a batch processing agent designed to populate the database with data from
existing files, either residing in a local directory or on the server filesystem of the database.

Data can be collected using either the Disk, FTP, or SFTP agents, and supported databases are; MySQL,
Sybase IQ, Netezza and SAP HANA.

11.9.2.1. Workflow Configuration

Figure 356. Workflow with the SQL Loader agent

The Disk, FTP and SFTP agents now have an additional check box called Route FileReferenceUDR
in their configuration dialogs:


This check box should be selected when using the SQL Loader agent.

The SQL Loader agent then forwards an SQLLoaderResultUDR containing information about loaded
file, number of inserted rows, execution time and any error messages, for logging purposes.

11.9.2.2. UDR Types


The UDR types used in combination with the SQL Loader agent are FileReferenceUDR and SQLLoad-
erResultUDR:

11.9.2.2.1. FileReferenceUDR

The FileReferenceUDR is the UDR format used to send data from the collection agent to the SQL
Loader agent.

The following fields are included in the FileReferenceUDR:

Field Description
directory (string) This field states the name of the directory the data file is located in.
filename (string) This field states the name of the file data should be collected from.
fullpath (string) This field states the full path to the data file.
OriginalData (long) This field contains the original data in byte array format.

11.9.2.2.2. SQLLoaderResultUDR

The SQLLoaderResultUDR is the UDR that the SQL Loader agent sends out after having loaded the
data into the database. This UDR contains information for logging purposes that should be handled
by another agent in the workflow.

The following fields are included in the SQLLoaderResultUDR:

Field Description
errorMessage (string) This field contains any error message that might have been returned
during the loading of data.
executionTime (long) This field indicates the time it took to load the data into the database.
filename (string) This field contains the name of the file from which data was uploaded.
rowsAffected (long) This field indicates the number of affected rows.
OriginalData (long) This field contains the original data in byte array format.
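
For logging purposes, the SQLLoaderResultUDR can, for example, be inspected in a subsequent
Analysis agent. The following is a minimal APL sketch, not taken from the product documentation;
the route names "error" and "ok", the unqualified UDR type name, and the assumption that errorMessage
is only set when the load failed, all need to be adapted to the actual workflow and UDR package in use.

consume {
    // The SQL Loader agent emits one SQLLoaderResultUDR per loaded file.
    SQLLoaderResultUDR result = (SQLLoaderResultUDR)input;
    if (result.errorMessage != null) {
        // The load reported an error; route the result for error handling/logging.
        udrRoute(result, "error");
    } else {
        // Successful load; filename, rowsAffected and executionTime describe the outcome.
        udrRoute(result, "ok");
    }
}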

11.9.3. Configuration
The configuration dialog for the SQL Loader agent is opened either by double clicking on the agent,
or right clicking and selecting the Configuration... option.

Figure 357. The SQL Loader Agent Configuration View


The SQL Loader tab contains configurations related to the SQL query used for populating the database
with data from external files, as well as error handling.

Database Profile name of the database that the agent will connect to and forward data to.
MySQL, SybaseIQ, Netezza and SAP HANA profiles are supported.
SQL Statement In this field you enter the SQL statement to be used for stating where the files
containing the data are located, into which table in the database the data should
be inserted, as well as the formatting of the data.

See Section 11.9.8, “SQL Statements” for information about how to write the
statements.
Abort if exception Select this check box if you want the workflow to abort in case of an exception.

11.9.4. Transaction Behavior


11.9.4.1. Emits
The agent does not emit any commands.

11.9.4.2. Retrieves

Cancel Batch If a cancelBatch is emitted by any agent in the workflow, all data in the current trans-
action will be disregarded. No closing conditions will be applied.

11.9.5. Introspection
The agent receives UDRs of FileReferenceUDR type and emits UDRs of SQLLoaderResultUDR
type.

11.9.6. Meta Information Model


The agent does not publish or access any MIM values. For information about the MediationZone®
MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

11.9.7. Debug Events


Debug messages are dispatched when debug is used. During execution, the messages are shown in the
Workflow Monitor and can also be stated according to the configuration done in the Event Notification
Editor.

The agent does not itself produce any debug events. However, APL offers the possibility of producing
events.

11.9.8. SQL Statements


The format of the SQL statements differs depending on which database type you are using.

11.9.8.1. MySQL
For remote loading (the file resides in a local directory)

LOAD DATA LOCAL INFILE '<filepath>'
INTO TABLE TABLENAME
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

For server side loading (file resides in the server filesystem of the database)


LOAD DATA INFILE '<filepath>'
INTO TABLE TABLENAME
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

11.9.8.2. Sybase IQ
For server side loading (file resides in the server filesystem of the database)

LOAD TABLE TABLENAME (COLUMNNAME, COLUMNNAME2)
USING FILE '<filepath>'
FORMAT BCP
ESCAPES OFF
DELIMITED BY ','

The Sybase JConnect driver does not support remote file loading.

11.9.8.3. Netezza
For remote loading (the file resides in a local directory)

INSERT INTO TABLENAME
SELECT * FROM EXTERNAL '<filepath>'
USING (delim ',' REMOTESOURCE 'JDBC')

For server side loading (file resides in the server filesystem of the database)

INSERT INTO TABLENAME
SELECT * FROM EXTERNAL '<filepath>'
USING (delim ',')

11.9.8.4. SAP HANA


For server side loading (file resides in the server filesystem of the database)

1. Create a control file containing the code below (in this example, the control file is named abc.ctl
and abc.txt is the CSV file):

import data
into table SYSTEM."TEST_LOADER"
from 'abc.txt'
record delimited by '\n'
fields delimited by ','
optionally enclosed by '"'
error log 'abc.err'

2. Run the workflow with the following command:

import from '/<path>/abc.ctl'

SAP HANA does not support remote file loading.


11.10. PSI Agent


11.10.1. Introduction
This section describes the Comverse Payment Server Interface (PSI) agent of the DigitalRoute® Me-
diationZone® Platform.

11.10.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• Comverse Open Services Access version 4.6

• UDR structure and contents

• Comverse Real-Time Billing Solutions proprietary protocol for Payment Server Interface

11.10.2. PSI Agent


The PSI agent is a real-time processing agent designed to support online charging in the Comverse
Realtime Billing System (RTBS) 4.6.

11.10.2.1. Overview
The PSI agent is a processing agent designed to support the Payment Server Interface provided by the
Comverse Realtime Billing System. The functionality exposed by these services are mapped to the
MediationZone® type system. The agent thereby emits and accepts a set of UDRs that represent the
requests that can be made and their corresponding responses and acknowledgements.

Figure 358. Example of Charge Flow

11.10.2.1.1. Request/Response Mapping

The PSI agent contains a number of different UDRs. The UDRs in turn contain a set of fields corres-
ponding to the fields required by the PSI application. The UDRs also contain a few internal fields that
can be used by the workflow logic.

11.10.2.1.2. PSI Related UDR Types

In the UDR Internal Format Browser a detailed view of the available fields is displayed. To open
the browser, double click or right click on the Analysis Agent and select Configuration.... You then
right-click in the editing area and select the option UDR Assistance....


Figure 359. UDR Internal Format Browser showing a UDR with fields

The following requests and responses are provided by the agent and shown in the psi folder.

Note! Not all of the PSI messages are supported, just those included in the UDRs below.

11.10.2.1.2.1. ApplyTariffRequest

The ApplyTariffRequest UDR sends the parameters to the PSI agent to charge a subscriber using the
RTB tariff.

Field  Description

bearerCapability (string)  Refer to 'Bearer Capability' in the Comverse proprietary protocol for Payment
Server Interface for the Apply Tariff Request specifications.

discount (string)  Refer to 'Discount' in the Comverse proprietary protocol for Payment Server Interface
for the Apply Tariff Request specifications.

originatingCallerId (string)  Refer to 'Originating Caller ID' in the Comverse proprietary protocol for
Payment Server Interface for the Apply Tariff Request specifications.

originatingSubscriberMSCAddress (string)  Refer to 'Originating Subscriber MSC Address' in the
Comverse proprietary protocol for Payment Server Interface for the Apply Tariff Request specifications.

subscriberType (int)  Refer to 'Subscriber Type' in the Comverse proprietary protocol for Payment
Server Interface for the Apply Tariff Request specifications.

terminatingCallerId (string)  Refer to 'Terminating Caller ID' in the Comverse proprietary protocol
for Payment Server Interface for the Apply Tariff Request specifications.

terminatingSubscriberMSCAddress (string)  Refer to 'Terminating Subscriber MSC Address' in the
Comverse proprietary protocol for Payment Server Interface for the Apply Tariff Request specifications.

11.10.2.1.2.2. ApplyTariffResponse

This UDR sends the response message from the PSI agent.

Field  Description

statusID (int)  Refer to 'Status ID' in the Comverse proprietary protocol for Payment Server Interface
for the Apply Tariff Response specifications.

transactionID (bytearray)  Refer to 'Transaction ID' in the Comverse proprietary protocol for Payment
Server Interface for the Apply Tariff Response specifications.

11.10.2.1.2.3. PSICycleUDR

This UDR contains all the relevant UDRs for the entire message cycle between a SLU and the PSI
Agent, including errors and contexts. This is the only UDR that the PSI agent accepts and emits.

Field  Description

ackUDR (PSISessionAckUDR)  Refer to Section 11.10.2.1.2.4, “TransactionIdAcknowledge”.

associatedNumber (long)  A value generated by the PSI Agent to uniquely identify a request. This
should not be set by the calling agent.

context (any)  This is an internal working field that can be used in the workflow configuration to keep
track of and use internal workflow information related to the request, when processing the answer.

errors (list<string>)  This list contains the errors sent from the PSI Agent.

hasErrors (boolean)  If there are errors, this is set to true. The errors are listed in errors (list<string>).

reqUDR (PSISessionReqUDR)  Refer to Section 11.10.2.1.2.1, “ApplyTariffRequest”.

respUDR (PSISessionRespUDR)  Refer to Section 11.10.2.1.2.2, “ApplyTariffResponse”.

SLUIndex (int)  This field indicates the specific SLU to which the session is bound. You do not modify
this value.

11.10.2.1.2.4. TransactionIdAcknowledge

This UDR acknowledges the receipt of the response message sent from the PSI agent to a SLU.


Field Description
statusID (int) Refer to the 'Status ID' in the Comverse proprietary protocol for Payment
Server Interface, in the Apply Tariff section for the Transaction ID Acknow-
ledgement specifications.

11.10.2.1.3. Error Management

The agent will, to the extent possible, manage errors without aborting the workflow. Errors related to
the communication between the agent and the PSI will be sent to the System Log and from the PSI
agent via the cycleUDR (see errors and hasErrors in Section 11.10.2.1.2.3, “PSI-
CycleUDR”).

11.10.2.2. Configuration
The PSI agent configuration window is opened by right clicking on the node in a realtime workflow,
and selecting the Configuration... option, or by double clicking on the node.

Figure 360. The PSI Agent Configuration View - SLU list tab

In the SLU list tab, use the Add button to add the SLU (Service Logic Unit) Host and corresponding
Server Ports. The PSI requests are equally distributed by round robin to all the PSI SLU servers added.

Figure 361. The PSI Agent Configuration View - Connection tab


In the Connection tab, you configure the heartbeat messages which are sent between MediationZone®
and the Payment Server to keep the session active when there is no other message activity from Medi-
ationZone® .

Heartbeat interval (s) Specifies the interval period in seconds for sending heartbeat messages. The
default value is 60, the maximum value is 1800.
Request timeout (ms) Specifies the timeout period for responses from a SLU.
Request retries  Specifies the number of SLUs to which sending a request is retried on a
timeout. Entering a value of 1 means that an attempt is made to send the request
to 2 SLUs.

Figure 362. The PSI Agent Configuration View - Advanced properties tab

You can modify properties in the Advanced properties tab.

One example of properties that can be configured under the advanced tab is reconnectInterval,
which specifies the time interval after which the agent will try to reconnect.

See the text in the Properties field for further information about the properties.

11.10.2.3. Introspection
The agent receives and emits UDR types as defined in Section 11.10.2.1.1, “Request/Response Mapping”
section.

11.10.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section
2.2.10, “Meta Information Model”.

The agent does not publish or access any additional MIM parameters. However, MIM parameters can
be produced and accessed through APL. For further information about available functions, see the
section MIM Related Functions in the APL Reference Guide.

11.10.2.5. Agent Message Events


There are no agent message events for this agent.


For information about the agent message event type, see Section 5.5.12, “Agent Event”.

11.10.2.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Connected to SLU: <slu>

This message is displayed when connected to a SLU.

• Connecting to SLU: <slu>

This message is displayed when connecting to a SLU.

• Disconnected from SLU: <slu>

This message is displayed when disconnected from a SLU.

• Disconnecting from SLU: <slu>

This message is displayed when disconnecting from a SLU.

• Encode error in {ack|req}: <error info>

This message is displayed when an encoding error occurs.

• PSI.TransactionIdAcknowledge

This message is displayed when the PSI agent attempts to write a TransactionID acknowledgement
message.

• PSI.ApplyTariffRequest

This message is displayed when an attempt is made to write an ApplyTariffRequest.

• Invalid Cycle UDR: <error info>

This message is displayed when a request or response is invalid.

• Invalid response from SLU: <slu>, <associated number>

This message is displayed when the SLU does not understand the message sent.

• No connected SLU

This message is displayed when there is no connection to any of the SLUs.

• No response from last heartbeat to SLU: <slu>

This message is displayed when none of the SLUs send a heartbeat response.

• Unable to write to SLU: <slu>

This message is displayed when it is not possible to write to a SLU.

• Unexpected heartbeat from SLU: <slu>


This message is displayed when an unexpected heartbeat response is received from a SLU.

• Unknown message from SLU: <slu>, <associated number>

This message is displayed when a message received from a SLU is not recognized.

• Unsolicited response from SLU: <slu>, <associated number>

This message is displayed when a response is received from a SLU for which a request has not been
sent.

11.10.3. Exception Messages


When exceptions occur, the PSI agent generates messages and routes them to the subsequent agent in
the workflow. The exceptions are written to the $MZ_HOME/log, the System Log and the Workflow
Monitor.

The exceptions which may occur are the following:

• Connected to SLU: <slu>

This message is displayed when connected to a SLU.

• Connecting to SLU: <slu>

This message is displayed when connecting to a SLU.

• Disconnected from SLU: <slu>

This message is displayed when disconnected from a SLU.

• Disconnecting from SLU: <slu>

This message is displayed when disconnecting from a SLU.

• Encode error in {ack|req}: <error info>

This message is displayed when an encoding error occurs.

• Invalid Cycle UDR: <error info>

This message is logged when a request or response is invalid.

• Invalid response from SLU: <slu>, <associated number>

This message is logged when the SLU does not understand the message sent.

• No connected SLU

This message is logged when there is no connection to any of the SLUs.

• No response from last heartbeat to SLU: <slu>

This message is logged when none of the SLUs send a heartbeat response.

• Unable to write to SLU: <slu>

This message is logged when it is not possible to write to a SLU.

• Unexpected heartbeat from SLU: <slu>

This message is logged when an unexpected heartbeat response is received from a SLU.


• Unknown message from SLU: <slu>, <associated number>

This message is logged when a message received from a SLU is not recognized.

• Unsolicited response from SLU: <slu>, <associated number>

This message is logged when a response is received from a SLU for which a request has not been
sent.

11.10.4. Example
This example shows a workflow configured to, via Diameter_Stack, receive requests and, through the
Analysis agent, sends messages to and receives messages from the PSI agent.

Figure 363. Workflow Example

An example for the Analysis agent can be as follows:

consume {
    if (instanceOf(input, Diameter.RequestCycleUDR)) {
        Diameter.RequestCycleUDR diameterUDR = (Diameter.RequestCycleUDR)input;
        Credit_Control_Request ccr = (Credit_Control_Request)diameterUDR.Request;

        // Create and populate PSI.ApplyTariffRequest
        PSI.ApplyTariffRequest applyTariffReq = udrCreate(PSI.ApplyTariffRequest);
        applyTariffReq.originatingCallerId = ccr.Subscription_Id.Subscription_Id_Data;
        applyTariffReq.terminatingSubscriberMSCAddress = "10.0.17.42";
        applyTariffReq.subscriberType = 1;
        applyTariffReq.bearerCapability = "2";
        applyTariffReq.discount = "0";
        // etc ...

        // Create and populate PSICycleUDR
        PSI.PSICycleUDR cycle = udrCreate(PSI.PSICycleUDR);
        cycle.context = ccr;
        cycle.reqUDR = applyTariffReq;
        udrRoute(cycle, "to_psi");
    } else if (instanceOf(input, PSI.PSICycleUDR)) {
        // Create and route Credit_Control_Answer
        udrRoute(createCCA((PSI.PSICycleUDR)input), "to_diameter");

        // Create transactionId ack
        PSI.PSICycleUDR cycle = (PSI.PSICycleUDR)input;
        if (cycle.hasErrors) {
            for (int i = 0; i < listSize(cycle.errors); i++) {
                string error = listGet(cycle.errors, i);
                // handle errors
                debug(error);
            }
        } else {
            if (instanceOf(cycle.respUDR, PSI.ApplyTariffResponse)) {
                PSI.TransactionIdAcknowledge trIdAck = udrCreate(PSI.TransactionIdAcknowledge);
                trIdAck.statusId = 0;
                cycle.ackUDR = trIdAck;
                udrRoute(cycle, "to_psi");
            }
        }
    }
}

11.11. RTBS Agent


11.11.1. Introduction
This section describes the Comverse Real Time Billing System (RTBS) agent of the DigitalRoute®
MediationZone® Platform.

11.11.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• Comverse Open Services Access version 4.6

• UDR structure and contents

11.11.2. RTBS Agent


The RTBS agent is a processing agent designed to support the Charging Parlay Service provided by
the Comverse Realtime Billing System. The functionality exposed by these services are mapped to
the MediationZone® type system. The agent thereby emits a set of UDRs that represent the requests
that can be made and their corresponding responses.

Note! The agent is compatible with the services defined in the Comverse Interface Control
Document for Open Services Access release 4.6.

The agent communicates with the billing system using SOAP over HTTP. In order to retrieve responses
from the billing system, a separate service needs to be configured for the Execution Context. For further
information, see Section 11.11.2.1, “Preparations”. This service is shared by all RTBS agents running
on the same Execution Context.

11.11.2.1. Preparations
When installing the RTBS agent for the first time, properties must be added to configure the Execution
Contexts.

1. For each Execution Context that the RTBS workflows will execute on, two properties for callback
has to be added in the executioncontext.xml file found in $MZ_HOME/etc:.

• Callbackhost, defines the name or IP address of the interface, on the machine hosting the
Execution Context, that should receive responses.

<property name="rtbs.callbackhost" value="10.0.10.15"/>

• Callbackport, defines the port that responses should be sent to.


<property name="rtbs.callbackport" value="9595"/>

2. Startup the Execution Contexts:

$> mzsh startup ec1

$> mzsh startup ec2

11.11.2.2. Overview
11.11.2.2.1. Asynchronous Request

The RTBS agent (Parlay) uses both synchronous and asynchronous requests. It is important to understand
the differences between them, since it affects the requirements on the business logic in the workflow.

Usually, when routing a UDR to a subsequent agent, the agent performing the route can trust that the
subsequent agents have completed their tasks before it continues with any post activity. For asynchron-
ous requests, this is not the case.

The RTBS agent can not assume that the request has been successful until the corresponding response
comes back from the agent. This means that the agent following the RTBS agent must manage any
operations that are supposed to take place when the response comes back.

Note! When a response connected to an asynchronous request returns, the workflow is driven
by the response and not by the collection agent as usual. Therefore, the call path for the workflow
is different and the workflow logic needs to manage this. This is typically done by using the
Aggregation agent to store variables and states that are needed to pick up and handle a response
from an asynchronous agent.

For a list of requests and the category that each of them belongs to, see Section 11.11.2.2.2, “Request/Re-
sponse Mapping”. Basically, UDRs ending with Req represent asynchronous requests and UDRs
starting with Get represent synchronous requests.
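
As an illustration of how the response-driven part of the workflow may be handled, the following is a
minimal APL sketch for an agent that receives the output of the RTBS agent. It is not taken from the
product documentation: the route names "to_network" and "error" are assumptions, and a real workflow
would also look up the stored session state (for example in an Aggregation agent) before answering
the network element.

import ultra.rtbs;

consume {
    if (instanceOf(input, ReserveUnitRes)) {
        // The reservation was accepted by the billing system; build and route
        // the answer towards the network element.
        udrRoute(input, "to_network");
    } else if (instanceOf(input, ReserveUnitErr)) {
        // The reservation was rejected; answer the network element accordingly.
        udrRoute(input, "to_network");
    } else if (instanceOf(input, RequestException)) {
        // Communication or validation error; details are available in the
        // errorMessage and errorDetails fields.
        RequestException ex = (RequestException)input;
        debug(ex.errorMessage);
        udrRoute(ex, "error");
    }
}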

11.11.2.2.2. Request/Response Mapping

The RTBS agent contains a number of different UDRs. The UDRs in turn contain a set of fields cor-
responding to the fields required by the billing system. The UDRs also contain a few internal fields
that can be used by the workflow logic.

There are also additional types that the agent provides, which are used in order to populate the request
and response. These data types are prefixed Tp.

11.11.2.2.3. RTBS Related UDR Types

In the UDR Internal Format Browser a detailed view of the available fields is displayed. The browser
is opened by clicking on the Configuration menu and selecting the option APL Code... and then right-
clicking in the editing area and selecting the option UDR Assistance....


Figure 364. UDR Internal Format Browser showing a UDR with fields

The following requests and responses are provided by the agent and shown in the rtbs folder.

Asynchronous Request Response or Error


ReserveUnitReq ReserveUnitRes or ReserveUnitErr
DebitUnitReq DebitUnitRes or DebitUnitErr
CreditUnitReq CreditUnitRes or CreditUnitErr
DirectDebitUnitReq DirectDebitUnitRes or DirectDebitUnitErr
ReserveAmountReq ReserveAmountRes or ReserveAmountErr
DebitAmountReq DebitAmountRes or DebitAmountErr
CreditAmountReq CreditAmountRes or CreditAmountErr
DirectDebitAmountReq DirectDebitAmountRes or DirectDebitAmountErr
DirectCreditAmountReq DirectCreditAmountRes or DirectCreditAmountErr
ExtendLifeTimeReq ExtendLifeTimeRes or ExtendLifeTimeErr

Synchronous Request Response


GetAmountLeftReq GetAmountLeftRes
GetUnitLeftReq GetUnitLeftRes
GetLifeTimeLeftReq GetLifeTimeLeftRes

Other Request  Other Error

ReleaseSession  N/A
Allows the workflow to tell the agent that an ongoing RTBS session should be released. This is normally
not needed, since the billing system will manage this.

Any request  RequestException
Error that can be sent from the agent whenever an abnormal error has occurred.

Any request  SessionAborted
Possible response from the billing system in the case an ongoing session must be aborted, for instance
when a session has been idle for some time.

11.11.2.2.4. Error Management

The agent will, to the extent possible, manage errors without aborting the workflow. Errors related to
the communication between the agent and the billing system will be sent to the system log and routed
into the workflow via the RequestException UDR where the two fields errorMessage and
errorDetails contain the details of the error. The RequestException type will also be used
for invalid requests detected by the agent. Other errors will only be written to the System Log.

11.11.2.3. Configuration
The RTBS agent configuration window is opened by right clicking on the node in a realtime workflow,
and selecting the Configuration... option, or by double clicking on the node.

Figure 365. The RTBS Agent Configuration View - RTBS tab

Host  Specifies the name or IP address of the machine where the Charging Manager service is located.

Port  Specifies the port that the Charging Manager listens to.

Path  Specifies the path to the Charging Manager service at the supplied host and port above.

Max Blocking Threads  Enter the maximum number of simultaneous threads that can call upon a service.
Any workflow thread that attempts an I/O request on this specific service after the max quota has been
reached will be blocked and will cause the RTBS agent to generate an exception.

Note! The service is identified by its host and port numbers. For example:
http://127.0.0.1:8080/axis/services/IPChargingManager/12345 is identified as 127.0.0.1:8080.
This value is also used when determining the number of simultaneous threads that are sent to
the IpChargingManager.

Blacklist Timeout  Enter the number of seconds during which a service should remain blacklisted.

A service blacklist is triggered when a thread receives a SocketTimeoutException. A blacklisted service
cannot be called upon. Any attempt to call it will result in an exception that is routed into the workflow.

HTTP Timeout  Enter the maximum number of milliseconds during which a service can be blocked.
If a call does not receive a reply within this period, an exception is generated. This, in turn, triggers
the service blacklist.

Debug  Specifies if the agent should provide latency and throughput measures as debug events. The
latency and throughput of every asynchronous request will be measured. It will send this information
as a Debug Event every ten seconds, if this option is enabled. For further information about debug
events, see Section 11.11.2.7, “Debug Events”.

11.11.2.4. Introspection
The agent receives and emits UDR types as defined in Section 11.11.2.2.2, “Request/Response Mapping”
section.

11.11.2.5. Meta Information Model


The RTBS agent does not publish or access any MIM parameters.

11.11.2.6. Agent Message Events


There are no agent message events for this agent.

11.11.2.7. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Charging manager reference: reference

This event is reported during workflow initialize. It shows the reference to the Charging Man-
ager.

• Launching RTBS web server

This event is reported during workflow initialize in order to tell the user that the centralized web
server in the Execution Context is about to be initialized.

• Active: sessions, Throughput throughput, Latency latencyAverage / latencyMax, Requests requests

This event is reported in case Debug has also been activated as part of the agent's configuration
dialog. It reports statistics over the measured number of simultaneous RTBS sessions, the average
throughput, latency (average and max) and number of requests.

11.11.3. Event and Exception Messages


When exceptions occur, the RTBS agent generates messages and routes them to the subsequent agent
in the workflow.

• NODE BUSY ERROR. Maximum number of blocking threads to <hostname:port>:

The service is blocked by too many simultaneous workflow threads.

• NODE BUSY ERROR. <hostname:port> is unreachable:


The service is blacklisted.

• Node <hostname:port> does not respond and will be blacklisted:

This message will also appear in the System Log.

• NODE BUSY ERROR. Charging Manager is blocked by maximum number of threads:

The IpChargingManager is blocked by a maximum number of workflow threads.

• NODE BUSY ERROR. Timeout when connecting to the Charging Manager:

Timeout occurred when connecting to the IpChargingManager.

11.11.4. WFCommands
The RTBS relevant wfcommands:

• help: Lists all the available commands.

• listBlackList: Lists all the OSA service nodes that are currently blacklisted. The list also includes
the time when the OSA service nodes were blacklisted.

Example 84.

MZ>>wfcommand 8102.test.workflow_1 RTBS_1 listBlackList
8102.test.workflow_1 (14):
10.46.20.142:8082 18:39:22
.

• remFromBlackList <host:port>: Removes a specified service from the blacklist.

Example 85.

$> mzsh mzadmin/dr wfcommand 8102.test.workflow_1 RTBS_1 \
   remFromBlackList 10.46.20.142:8082
8102.test.workflow_1 (14):

11.11.5. APL
Unlike other MediationZone® agents, the RTBS agent emits RequestException with a message string
that begins with "NODE BUSY ERROR". In order to handle such errors correctly, you need to adjust
your APL code as follows: When an error occurs, the request is dropped by MediationZone® .
Therefore, for Diameter compliance, the workflow should send the message "DIAMETER_UN-
ABLE_TO_DELIVER" to the GGSN node.

11.11.6. Example
This example shows a workflow configured to, via TCPIP, receive requests and, through the Request
(Analysis) node, make a reservation in the RTBS node. The session state between network and RTBS
is stored in the State (Aggregation) node whilst the network sends a new request for an ongoing session.
The Request node checks if there is an ongoing session in the State node before a new request is sent
to the RTBS node.


Before passing a response back to the network element, the response returning from the RTBS agent
may need to be enriched with data required for the response to the network element using the state of
the ongoing session, kept in the State agent.

Figure 366. Workflow Example

An example of a reservation in the Request node can be as follows:

import ultra.rtbs;

consume {
// Initial reservation request
ReserveUnitReq req = udrCreate(ReserveUnitReq);
req.sessionDescription = "";
req.merchantAccount = udrCreate(TpMerchantAccountID);
req.merchantAccount.AccountID = 1732380001;
req.merchantAccount.MerchantID = "OSA Merchant-001 (ET-NJ)";
req.correlationID = udrCreate(TpCorrelationID);
req.correlationID.CorrelationID = 10;
req.correlationID.CorrelationType = 1;
req.user = udrCreate(TpAddress);
req.user.AddrString = "8082340211";
req.user.Name = "Name";
req.user.Plan = "P_ADDRESS_PLAN_E164";
req.user.Presentation = "P_ADDRESS_PRESENTATION_UNDEFINED";
req.user.Screening = "P_ADDRESS_SCREENING_UNDEFINED";
req.user.SubAddressString = "Subaddresssss";

// Application Description
req.applicationDescription = udrCreate(TpApplicationDescription);
req.applicationDescription.Text = "description";
req.applicationDescription.AppInformation = listCreate
(TpAppInformation);
TpAppInformation appInfo = udrCreate(TpAppInformation);
appInfo.Timestamp = "2008-06-01 00:00";
listAdd(req.applicationDescription.AppInformation, appInfo);

// Charging Parameters
req.chargingParameters = listCreate(TpChargingParameter);
TpChargingParameter param = udrCreate(TpChargingParameter);
param.ParameterID = 2;
param.ParameterValue = udrCreate(TpChargingParameterValue);
param.ParameterValue.StringValue = "OSA";
listAdd(req.chargingParameters, param);

// Volume
req.volumes = listCreate(TpVolume);
TpVolume volume = udrCreate(TpVolume);
volume.Unit = 3;
volume.Amount = udrCreate(TpAmount);
volume.Amount.Exponent = 1;
volume.Amount.Number = 6;
listAdd(req.volumes, volume);

// User Object to be associated to response
req.UserObject = input;

// Pass to RTBS
udrRoute(req);
}


12. Appendix IV - Forwarding agents

12.1. Archiving
12.1.1. Introduction
This section describes the Archiving agents. These are standard agents on the DigitalRoute® Medi-
ationZone® platform.

12.1.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

12.1.2. Overview
With the Archiving agents, MediationZone® offers the possibility to archive data batches for a config-
urable period of time. There are two agents:

• The Archiving agent (also referred to as the Global Archiving agent), which stores the data on the
platform machine.

• The Local Archiving agent, which stores the data locally on the Execution Context machine. Note
that locally archived data cannot be exported.

The Archiving agents can be configured to archive all received data batches. Each data batch is saved
as a file in a user specified repository. The Global Archiving agent also saves a corresponding reference
in the database, enabling the Archive Inspector to browse and purge the data batch files.

Depending on the selected profile, the Archive services are responsible for naming and storing each
file and for purging outdated files on a regular basis. Utilizing the Directory Templates and base
directories specified in the Archive profile, directory structures are built dynamically when files are stored.

The system administrator defines which structure is suitable for each profile. For instance, the directory
structure can be set to change with respect to the collecting agent name on a daily basis. The Archive
services automatically create all directories needed in the base directory or directories.

12.1.3. Configuration
You configure an archiving agent in three steps:

• Define an Archive profile.

• Configure the agent.

• Set MultiForwardingUDR input.

12.1.3.1. Archive Profile


Storage, naming scheme and lifetime for targeted files are configured in the Archive profile. Several
workflows may be configured to use the same profile; however, only one of the workflows may be
active at a time.

The full path of each file to store in the archive is constructed dynamically by the Archive File
Naming Algorithm. The name is determined by three parts:


AAA/BBB/CCC

Where:

AAA Represents one of the base directories specified in the Base Directory list in the Archive profile
configuration. If several base directories exist, this value will change according to the frequency
selected from the Switch Policy list.

The system automatically appends a directory delimiter after this name.


BBB This part is constructed from the Directory Template. If the template contains one or several
Directory delimiters this part will enclose one or several directory levels itself.

For instance, if the template contains Month, Directory delimiter, Day this will yield new
directories every day, named 03/01, 03/02 ... 03/31, 04/01, 04/02 ... 04/30 and so
on. In this example, files are stored in a directory structure containing all months, which in
turn contains directories for all days (which in turn will contain all files from that day).

The system automatically appends a directory delimiter after this name.


CCC This is the name the file will get. It is defined on each archiving agent using configurations
from the Filename Template tab in the Archiving agent configuration window.
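
As a purely illustrative example (the directory and file names below are hypothetical), assume that the
Base Directory list contains /archive/disk1, that the Directory Template consists of the tokens Month,
Directory delimiter, Day, and that the Filename Template produces file_001. A file archived on 31 March
would then be stored as:

AAA = /archive/disk1   (from the Base Directory list)
BBB = 03/31            (from the Directory Template)
CCC = file_001         (from the Filename Template tab)

Full path: /archive/disk1/03/31/file_001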

Figure 367. Archive Profile Configuration

The Archive profile is loaded when you start a workflow that depends on it. Changes to the profile
become effective when you restart the workflow.

To open the configuration, click the New Configuration button in the upper left part of the Medi-
ationZone® Desktop window, and then select Archive Profile from the menu.

Switch Policy If several base directories are configured, the switch policy determines for how
long the Archive services will populate each base directory before starting to
populate the next one (daily, weekly, or monthly). After the last base directory
has been populated, the archiving wraps to the first directory again.
Base Directory One or several base directories that can be used for archiving of files. For consid-
erable amounts of data to be archived, several base directories located on different
disk partitions might be needed.
Directory Template List of tokens that, at run-time, build subdirectory names appended to one of the
base directories. The tokens can be either special tokens or user-defined values.
Subdirectories on any level can be constructed by using the special token
Directory delimiter.
Remove Entries (days) If enabled, files older than the entered value will be deleted from the archive.
Depending on the agent using the profile, the removal occurs differently:

• For the Local Archiving agent the cleanup of outdated files is mastered by the
workflow. It removes the file from its archive directory.

• For the Archiving agent the cleanup of outdated files is mastered by the Archive
Cleaner task. It removes the reference in MediationZone® , as well as the file
itself from its archive directory. Consequently, the data storage is also dependent
on the setup of the task scheduling criteria.

Keep Files Files will not be deleted.

If Keep Files and Remove Entries (days) are combined, only the references in the
database are removed while the files remain on disk. (Not valid for the Local
Archiving agent.)

12.1.3.1.1. Enabling External Referencing

You enable External Referencing of profile fields from Archive Profile in the Edit menu. For detailed
instructions, see Section 9.5.3, “Enabling External References in an Agent Profile Field”.

When you apply External Referencing to profile fields, the following profile parameters are affected:

Base Directory The directory paths that you add to this list are included
in the properties file that contains the External Refer-
ences.

Example 86.

For example:

myBaseDirectoryKey =
/mypath/no1, /mypath/no2

Remove Entries (days) The value with which you set this entry is included in
the properties file and interpreted as follows:

myRemoveEntriesKey = 1
#! Remove after 1 day
myRemoveEntriesKey = 365
#! Remove after 365 days
myRemoveEntriesKey = -1
#! Do not remove.
#! This value is equal to clearing the
#! check-box.

Keep Files In the properties file, a checked entry is interpreted as
true or yes, and a cleared entry as false or no.


12.1.3.1.2. Directory Template

Figure 368. Add Directory Template

Special token Tokens to be used as part of the directory name.
• Year - Inserts four digits representing the year the file was archived.

• Month - Inserts two digits representing the month the file was archived.

• Day - Inserts two digits representing the day of the month the file was archived.

• Hour - Inserts two digits representing the hour (24) of the day the file was archived.

• Agent directory name - Inserts the MIM value(s) defined in the Agent Directory
Name list in the Archiving agent configuration window.

• Day index - Inserts a day index between zero and the value entered in the Remove Entries
(days) field. This number is increased by one every day until (Remove Entries (days)
number - 1) is reached. It then wraps back to zero. Day index may not be used in the
template if Remove Entries (days) is disabled.

• Directory delimiter - Inserts the standard directory delimiter for the operating system
it distributes files to. This way, a sub-directory is created.

Text If enabled, the token is entered from the text field. When disabled, the token is instead
selected from the Special token list.

12.1.3.1.3. Archive Profile Menu

The main menu changes depending on which Configuration type has been opened in the currently
active tab. There is a set of standard menu items that are visible for all Configurations, and these are
described in Section 3.1.1, “Configuration Menus”.

There is one menu item that is specific to Archive profile configurations, and it is described in the
following section:

12.1.3.1.3.1. The Edit Menu

Item Description
External References To Enable External References in an agent profile field. Please refer to Sec-
tion 12.1.3.1.1, “Enabling External Referencing” for further information.

12.1.3.1.4. Archive Profile Buttons

The toolbar changes depending on which Configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all Configurations, and these buttons are described
in Section 3.1.2, “Configuration Buttons”.

There are no additional buttons for Archive profile.

12.1.3.2. Archiving Agent


The Archiving agent configuration window is displayed when right-clicking the agent and selecting
Configuration..., or when the agent is double-clicked.


Figure 369. Archiving agent configuration, Archiving tab

The following options are available in the Archiving agent configuration:

Profile Name of the Archive profile to be used when determining the attributes of the
target files.
Input Type The agent can act on two input types. The behavior varies depending on the
input type that you configure the agent with.

The default input type is bytearray. For information about the agent
behavior with the MultiForwardingUDR input type, see Section 12.1.3.4,
“MultiForwardingUDR Input”.
Compression Compression type of the target files. Determines if the agent will compress the
files before storage or not.

• No Compression - the agent will not compress the files.

• Gzip - the agent will compress the files using gzip.

No extra extension will be appended to the target filenames, even if
compression is selected. The configuration of the filenames is managed
in the Filename Template tab only.

Agent Directory Name Possibility to select one or more MIM resources to be used when naming a
subdirectory in which the archived files will be stored. If more than one MIM
resource is selected, the values making up the directory name will automatically
be separated with a dot.

If at least one Agent Directory Name is selected in the Directory Template
in the Archive profile, this directory field is used.

Logged MIM Data MIM values to be logged as meta data along with the file. This is used for
identification of the files. The meta data is viewed in the Archive Inspector.


Produce Empty Files If enabled, files are produced even if they contain no data.

The names of the created files are determined by the settings in the Filename Template tab.
For further information about the Filename Template service, see Section 4.1.6.2.4, “Filename
Template Tab”.

12.1.3.3. Local Archiving Agent


The Archiving_Local agent configuration window is displayed when right-clicking the agent and se-
lecting Configuration... or when the agent is double-clicked.

Figure 370. The Archiving_Local Agent Configuration - Archiving Local Tab

The following options are available in the Archiving_Local agent configuration:

Profile Name of the Archive profile to be used when determining the attributes of the
target files.

All workflows in the same workflow configuration using the Archiving_Local
agent can use separate archiving profiles, if that is preferred. In order to do that,
the profile must be set to Default in the Workflow Table tab found in the
Workflow Properties dialog. After that, each workflow in the table can be assigned
the correct profile.
Input Type The agent can act on two input types. The behavior varies depending on the input
type that you configure the agent with.

The default input type is bytearray. For information about the agent behavior
with the MultiForwardingUDR input type, see Section 12.1.3.4,
“MultiForwardingUDR Input”.
Compression Compression type of the target files. Determines if the agent will compress the
files before storage or not.

• No Compression - the agent will not compress the files.

• Gzip - the agent will compress the files using gzip.

No extra extension will be appended to the target filenames, even if
compression is selected. The configuration of the filenames is managed in the
Filename Template tab only.


Agent Directory Name Possibility to select one or more MIM resources to be used when naming a
subdirectory in which the archived files will be stored.

If more than one MIM resource is selected, the values making up the directory
name will automatically be separated with a dot.

If at least one Agent Directory Name is selected in the Directory Template
in the Archive profile, this directory field is used.

Produce Empty Files If enabled, files are produced even if they contain no data.

The names of the created files are determined by the settings in the Filename Template tab.
For further information about the Filename Template service, see Section 4.1.6.2.4, “Filename
Template Tab”.

12.1.3.4. MultiForwardingUDR Input


When the agent is set to use MultiForwardingUDR input, it accepts input of the UDR type MultiFor-
wardingUDR declared in the package FNT. The declaration follows:

internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};

Every received MultiForwardingUDR ends up in the file that matches its target filename. The output
filename and path are specified by the fntSpecification field. When the files are received, they are written
to temporary files in the DR_TMP_DIR directory situated in the root output folder. The files are moved to
their final destination when an end batch message is received. A runtime error will occur if any of the
fields has a null value or if the path is invalid on the target file system.

A UDR of the type MultiForwardingUDR whose target filename is not identical to that of its predecessor
is saved in a new output file.

After a target filename that is not identical to its predecessor is saved, you cannot use the first
filename again. For example: saving filename B after saving filename A prevents you from using
A again. Instead, you should first save all the A filenames, then all the B filenames, and so forth.


12.1.3.4.1. Global Archiving Example

Example 87.

This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of type MultiForwardingUDR. In this example, the data is buffered in
the consume block. This makes it possible to route a complete batch to multiple files from the
drain block. Note that the Execution Context needs enough available memory to buffer the whole file.

import ultra.FNT;

bytearray data;

MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent) {

//Create the FNTUDR
FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir);
fntAddDirDelimiter(fntudr);//Add a directory
fntAddString(fntudr, file);//Add a file

MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;

return multiForwardingUDR;
}

beginBatch {
data = baCreate(0);
}

consume {
data = baAppend(data, input);
}

drain {
//Send MultiForwardingUDRs to the forwarding agent
udrRoute(createMultiForwardingUDR("dir1", "file1", data));
udrRoute(createMultiForwardingUDR("dir2", "file2", data));
}


12.1.3.4.2. Local Archiving Example

Example 88.

This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of type MultiForwardingUDRs.

import ultra.FNT;

MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent){

//Create the FNTUDR
FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir);
fntAddDirDelimiter(fntudr);//Add a directory
fntAddString(fntudr, file);//Add a file

MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;

return multiForwardingUDR;
}

consume {

bytearray file1Content;
strToBA (file1Content, "file nr 1 content");

bytearray file2Content;
strToBA (file2Content, "file nr 2 content");

//Send MultiForwardingUDRs to the forwarding agent
udrRoute(createMultiForwardingUDR
("dir1", "file1", file1Content));
udrRoute(createMultiForwardingUDR
("dir2", "file2", file2Content));
}

The Analysis agent in the example above will send two MultiForwardingUDRs
to the forwarding agent. Two files with different contents will be placed in two separate
subfolders in the root directory.

12.1.4. Archive Inspector

Note that this section is only valid for the Global Archiving agent.

To locate files in the archive, the Archive Inspector is used. The access group user is permitted to
launch the Archive Inspector and to purge these files. Once a file is located, it can be treated as a regular
UNIX file, using regular UNIX commands to view or copy it.


Altering or removing a file from the archive using UNIX commands is discouraged. If altering
is desired, make a copy of the file. If removal is desired, use the Archive Inspector.

To open the Archive Inspector, click the Tools button in the upper left part of the MediationZone®
Desktop window, and then select Archive Inspector from the menu.

Figure 371. The Archive Inspector

Initially, the window is empty and must be populated with data using the corresponding Search Archive
dialog. For further information, see Section 12.1.4.1, “Searching the Archive”. Each row represents
information about a data batch (file).

Edit menu Search... Displays the Search Archive dialog where search criteria may be defined
to limit the entries in the list. For further information about setting the filter
for this dialog, see Section 12.1.4.1, “Searching the Archive”.
Edit menu Delete... If Keep Files is disabled in the Archive profile, all selected files are removed
from the archive, including their corresponding references in the database.
If Keep Files is enabled, only the references are removed while the files
shown in Archive Inspector are still kept on disk.
View menu View data... Shows the raw data content for the selected file.
Show Archives If the query resulted in a match larger than the Archive page size, this list
toggles between the result sets.
ID Holds the index of the rows in the archive.
Workflow Name of the archiving workflow.
Agent Name of the archiving agent.
Filename Full pathname to the file as stored on disk.
Timestamp Time when the entry was inserted in the archive.
MIM Values When double-clicking a cell in the MIM Values column, a dialog is dis-
played where values for the adherent MIM resources are displayed. Adherent
MIM resources are defined as Logged MIM Data in the Archiving agent
configuration window. For further information, see Figure 372, “MIM Re-
sources Dialog”.

Figure 372. MIM Resources Dialog

Profile Name of the profile used to archive the file.


12.1.4.1. Searching the Archive


The Search Archive dialog allows the user to limit the number of rows displayed in the Archive In-
spector. Search conditions may be defined to display either all entries, a specific entry or entries
archived between two dates.

The Search Archive dialog is displayed when Search... is selected from the Edit menu.

Figure 373. The Search Archive dialog

Profile Select the profile that corresponds to the data of interest. If no profile is selected, archive
entries for all profiles will be shown.
Workflow Option to narrow the search with respect to which workflow archived the file.
Agent Option to narrow the search with respect to which agent archived the file.
Period Option to search for data archived during a certain period.

12.1.5. Maintaining Archives


The Archive Cleaner task removes outdated archives that have expired according to the purge
criteria in the Archive profile. The Archive Cleaner task is accessed from the Execution Manager, which
is opened by clicking the Tools button in the upper left part of the MediationZone® Desktop window.
The Archive Cleaner is only configurable with respect to scheduling criteria. The cleanup of outdated
files is performed by removing the reference in MediationZone® as well as the file itself from the archive
directory.

For further information, see also Section 4.1.1.4, “System Task Workflows”.

12.1.6. Transaction Behavior


This section includes information about the Archiving agents' transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

12.1.6.1. Emits
The agents do not emit any commands.

12.1.6.2. Retrieves
The agents retrieve commands from other agents and based on them generate a state change of the file
currently processed.

Command Description
Begin Batch When a Begin Batch message is received, if the temporary directory DR_TMP_DIR
is not already in the base directory, the agent creates it. Then, the agent creates a target
file in the temporary directory.
End Batch When an End Batch message is received, the target file in DR_TMP_DIR is closed.
Finally, the file is moved from the temporary directory to the target directory.


Cancel Batch If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.

12.1.7. Introspection
The introspection is the type of data an agent expects and delivers.

The Archiving agent consumes bytearray types.

The Local Archiving agent consumes either bytearray or MultiForwardingUDR types.

12.1.8. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

12.1.8.1. Publishes
The agents publish the following MIMs:

MIM Value Description


Target Filename This MIM parameter contains the name of the target filename, as defined in
Filename Template.

Target Filename is of the string type and is defined as a trailer MIM
context type.
MultiForwardingUDR's FNTUDR This MIM parameter is set only when the agent expects input of the
MultiForwardingUDR type. The MIM value is a string that represents the sub-path
from the output root directory on the target file system. The path is specified
by the fntSpecification field of the last received MultiForwardingUDR.
For further information about the MultiForwardingUDR type, see
Section 12.1.3.4, “MultiForwardingUDR Input”.

This MIM parameter is of the string type and is defined as a batch MIM
context type.

12.1.8.2. Accesses
Various MIM resources are accessed depending on the MIM value selection in the Agent Directory
Name and Logged MIM Data lists. The MIM values are read at End Batch.

12.1.9. Agent Message Events


An information message from the agent, reported according to the configuration made in the Event
Notification Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: name

A message event is reported along with target filename each time a file is archived.

12.1.10. Debug Events


The agents do not produce any debug events.


13. Appendix V - Collection and Processing


Agents

13.1. Radius Agents


13.1.1. Introduction
This section describes the Radius Server and Radius Client agents. These are extension agents of the
DigitalRoute® MediationZone® Platform.

13.1.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• RADIUS (RFC 2865, http://www.ietf.org/rfc/rfc2865.txt)

• RADIUS Accounting (RFC 2866, http://www.ietf.org/rfc/rfc2866.txt)

13.1.2. Radius Server Agent


Radius is a real-time server agent. It receives accounting data (UDP packets) from one or several Network
Access Servers (NAS).

13.1.2.1. Overview
The Radius accounting data contains information about the last client that logged in, the log-in
time, the duration of the session, and so on. Apart from collecting such data, the Radius agent may act as
an extension to the NAS, creating accounting data itself. For instance, when receiving a packet containing
a login request, it may reply with an accept or reject packet. The reply logic is handled through APL
code (an Analysis or Aggregation node).

Figure 374. A Typical Radius Workflow

Note the absence of a Decoder. For Realtime workflows, field decoding is handled via APL commands.
The Radius format is included when a Radius bundle is committed into the system. The format
contains record identification information on the first level (code, identifier, length, authenticator and
attributes) to be used by the Radius agent. Hence, the agent is responsible for recognizing the type of
data, while the Analysis node does the actual decoding of the contents (the attributes). A UFDL format
needs to be defined for this purpose.

When activated, the agent will bind to the configured port and wait for incoming UDP packets from
NASes. Each received UDP packet will be converted to a UDR and forwarded into the workflow. If fields
are missing in a UDP packet, the agent will still create a UDR, filling in all found fields. If the data in the
UDP packet is corrupt, or if data arrives from a host not present in the configuration window of the node,
a message will be sent to the System Log and the data will be discarded.


Since NASes do not offer the possibility of requesting historic data, the agent will lose all data that is
delivered from the NAS while the agent is not executing.

13.1.2.2. Configuration
The Radius agent configuration window is displayed when the agent in a workflow is double-clicked,
or right-clicked and Configuration... is selected.

13.1.2.2.1. NAS Tab

The list on the NAS tab can be dynamically updated.

Figure 375. The Radius Server Agent Configuration View - NAS Tab

In the NAS tab, all NASes the agent will collect information from are specified.

IP Address The IP address of the NAS that sends the packets.
Secret Key Key used for authentication of a received packet. This key must be identical to the one
defined in the NAS.

13.1.2.2.2. Miscellaneous Tab

Figure 376. The Radius Server Agent Configuration View - Miscellaneous Tab

Port The port number where the Radius agent will listen for packets from the NAS(es).

Note! Since the NASes will be configured to communicate with a specific
host on this port, it is important that the workflow containing the Radius
agent is configured to execute on the associated EC for that host.

Two Radius agents may not be configured to listen on the same port, on
the same host.


PDU Lifetime (millisec) If set to a value larger than 0 (zero), duplicate checking is activated. The buffer
saved for comparison contains the packets collected during the set time frame.
Skip MD5 Calculation If enabled, the check for MD5 signatures is excluded. This is necessary if the
Radius client does not send MD5 signatures along with the packets, in which
case they would be discarded by the Radius agent.

Note! When Skip MD5 Calculation is enabled, the authenticator field in
all response messages will be 0 (zero).

Duplicate Check Checking for duplicate packets can be based on:

• Radius Standard - the identifier within the packet (byte number 2).

• CRC32 - a checksum for the complete packet.

When a duplicate is detected, it is silently discarded (no message is logged)
and the Radius agent responds as if a normal packet was received.

13.1.2.3. Introspection
The agent emits and retrieves UDRs of the Radius UDR type. For further information, see Section 13.1.5,
“The Radius Format”.

13.1.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

13.1.2.4.1. Publishes

MIM Parameter Description


Number of messages
Number of duplicate messages/type
Number of rejected messages/type

13.1.2.4.2. Accesses

The agent does not access any MIM resources.

13.1.2.5. Agent Message Events


There are no message events for this agent.

13.1.2.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Access request with invalid passwd/secret from ipaddress

Indicates an invalid password was entered or is missing for the connection.


• Accounting request with invalid signature from ipaddress

Indicates that the calculated sum, based on the secret specified for the agent, is not equal to the secret in the
incoming packet.

• Incoming invalid code xx from ipaddress

Indicates the incoming request is not ACCESS_REQUEST or ACCOUNTING_REQUEST.

13.1.2.7. Supervision Service


If you want to reject certain messages when the load gets too heavy, you can use the Supervision Service.
With this service you can select one of the following overload protection strategies:

• Radius_AccessRequest - For rejecting requests of type AccessRequest

• Radius_AccountingIntermediate - For rejecting AccountingIntermediate requests

• Radius_AccountingStart - For rejecting AccountingStart requests

• Radius_AccountingStop - For rejecting AccountingStop requests

For each strategy you can select whether you want to reject 25, 50, or 100% of the requests.

See Section 4.1.8.5.2, “Supervision Service” for further information.

13.1.3. Radius Client Agent


The Radius client agent is used in real-time workflows. The Radius client agent sends Radius requests
to one or many Radius servers. You can also enable throttling, which prevents more than the specified
number of requests (UDRs) per second from being forwarded. The throttling functionality uses the
token bucket algorithm.

13.1.3.1. Overview
You include the Radius_Client agent in a workflow in order to transmit requests from the workflow.
During runtime a Radius UDR that includes a request field that is assigned with a value is routed into
the Radius_Client agent. When the answer field is assigned, the Radius UDR is routed back.

When combined with the Radius_Server agent, MediationZone® operates as a Radius Proxy.

13.1.3.2. Configuration
The Radius agent configuration window is displayed when the agent in a workflow is double-clicked,
or right-clicked and Configuration... is selected.


13.1.3.2.1. Radius Servers Tab

Figure 377. The Radius Client Agent Configuration View - Radius Servers Tab

The Radius Servers tab enables you to configure an IP address and a secret key for every RADIUS
server that the agent communicates with.

IP Address The IP address of the RADIUS server.


Secret Key The shared secret key is used to sign RADIUS transactions between the
server and its client, as well as to encrypt user-password attributes.
Throughput Threshold If throttling has been enabled for the host, this field will show the configured
threshold for when requests (UDRs) should be throttled. Throttled UDRs
will be routed back into the workflow.

For example: 1.000 (which means a maximum of 1.000 requests/second will
be forwarded).

13.1.3.2.1.1. To Add a Server

1. In the configuration for the Radius Client agent, click on the Add button.

The Add Radius Server dialog opens.

Figure 378. The Add Radius Server dialog

2. Enter the IP address and secret key for the server in the IP Address and Secret Key fields.

3. If you want to enable throttling for the host, select the Enable Throttling check box, and then enter
the maximum number of UDRs (requests) per second that you want the agent to forward in the
Throughput Threshold (UDR/s) field.

Note! Ensure that you handle the throttled UDRs in your APL code in the workflow so that
you do not lose any UDRs.


4. Click on the Add button to add the server to the Radius Servers table, and then click on the Close
button to close the dialog when you have finished adding hosts.

13.1.3.2.2. Miscellaneous Tab

Figure 379. The Radius Client Agent Configuration View - Miscellaneous Tab

Host Enter either the IP address or the hostname through which the agent will bind
with the Radius servers.

Note! Since the Radius servers are configured to communicate with a
specific host on this port, it is important that the workflow that includes
the Radius agent is configured to execute on the associated EC for that
specific host, and not on a random one.

Two Radius agents should not be configured to listen through the same
port, on the same host.

Source Port Enter the local port through which the agent will bind with the Radius servers.
Additional Ports In case you want to use a range of ports, enter the number of consecutive ports
in this field.

For example, if you enter 2000 in the Source Port field and 10 in the Additional
Ports field, the ports 2000-2010 will be used.
Retry Count The maximum number of attempts to send. A new attempt to send occurs if a
response is not received within the Retry Interval time.
Retry Interval Enter the time interval, in seconds, between repeated attempts to send.
Skip MD5 Calculation Check to exclude the use of the MD5 hashing algorithm.

Note! When Skip MD5 Calculation is checked, the authenticator field
in all the request messages is set to 0 (zero).

Identifier Calculation Select this check box if you want an identifier to be calculated and appended
to the requests automatically. This identifier will be used for correlating requests
with answers. As the maximum number of pending requests to a specific port
is 256, the identifier range will be 0-255.


13.1.3.3. Introspection
The agent emits and retrieves UDRs of the Radius UDR type. For further information see Section 13.1.5,
“The Radius Format”.

13.1.3.4. Meta Information Model


The agent neither publishes nor accesses any MIM resources.

For a list of general MediationZone® MIM parameters, see Section 2.2.10, “Meta Information Model”.

13.1.3.5. Message Events


There are no message events for this agent.

13.1.3.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Access request with invalid passwd/secret from <ipaddress>

Indicates an invalid password was entered or is missing for the connection.

• Accounting request with invalid signature from <ipaddress>

Indicates that the calculated sum, based on the secret specified for the agent, is not equal to the secret in the
incoming packet.

• Incoming invalid code <xx> from <ipaddress>

Indicates the incoming request is not ACCESS_REQUEST or ACCOUNTING_REQUEST.

• Maximum number of resends (<x>) reached for packet sent to <ipaddress>
on port <port>.

Indicates that the maximum number of retries has been reached and no more attempts will be made
to resend the message.

• Request message has remote IP set to <ipaddress> and there is no
entry for this address in the Radius Servers table. Rejecting message.

Indicates that the request destination that is set in the workflow has not been configured for the
agent.

• Access response with invalid passwd/secret from <ipaddress>.

Indicates an incorrect Response Authenticator.

• Accounting response with invalid signature from <ipaddress>.

Indicates an incorrect Response Authenticator.

• Incoming invalid code(<Code>) from <ipaddress>.


Indicates that the incoming response code is not the expected one.

• Maximum number of resends reached for Radius Request
[packet id ='ID'] to host : <ipaddress>, port : <port>.

Indicates that the maximum number of repeated attempts has been reached and that no more attempts
will be made to send the message.

• Radius Request [packet id =<ID>] to host : <ipaddress>,
port : <port> has been resent. Number of resends for this request: <NUM>.

Indicates that another attempt is being transmitted.

13.1.4. Radius Related UDR Types


The UDR types created by default in the Radius agents can be viewed in the UDR Internal Format
Browser, in the radius folder. To open the browser, open an APL Editor, right-click in the editing area,
and select UDR Assistance...; the browser opens.

13.1.5. The Radius Format


Each Radius UDR will be packed containing both the request and response UDRs for each case, alongside
the general NAS information, such as port and IP address.

Included with the MediationZone® Radius bundle is a general Radius format, containing all possible
record types valid for Radius. The Radius agent will use this format for recognizing the type of data.
The actual decoding of the contents (requestMessage), and the encoding of the reply (responseMessage),
must be handled through a user-defined format.

attributeType (int) This field indicates the type of Radius request.


context (any) This field stores information about the context in which the opera-
tion has been invoked.
hashKey (string) Provided that duplicate checking is enabled, and CRC32 is selected
as method, this field will contain the calculated check sum for the
packet. For all other cases it will be set to NULL.
remoteIP (ipaddress) The IP address of the NAS.
remotePort (int) The port used for communication with the NAS.
requestMessage (bytearray) The field containing a request UDR. Depending on the settings on
the NAS, it can be any of the available standards, for instance,
Access-Request or Accounting-Request.

An Ultra Format Definition must be designed to handle decoding
of this field.
responseMessage (bytearray) The field containing a response UDR. Depending on the settings
on the NAS, it can be any of the available standards, for instance,
Access-Accept or Access-Reject.

An Ultra Format Definition must be designed to handle decoding
of this field.
sourceIP (ipaddress) The source IP address.
sourcePort (int) The source port.
statusMessage (string) The status message field is set to communicate status while a mes-
sage is being sent or received.


throttled (boolean) This flag indicates whether the UDR has been throttled or not. Default
is false, and if the UDR has been throttled, it will be set to true.
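
The following APL fragment is a minimal sketch of how throttled UDRs could be handled in an Analysis
agent placed after the Radius_Client agent. It assumes that throttled UDRs come back on the agent's input
together with answered UDRs, and that the route names "Retry" and "Answered" exist in your workflow;
adapt the routing and any retry logic to your own configuration.

consume {
    radius.Radius r = (radius.Radius) input;

    if (r.throttled) {
        // The request was not forwarded by the Radius_Client agent.
        // Route it on a separate output (the route name "Retry" is an
        // assumption) so that it can be re-sent or handled otherwise.
        udrRoute(r, "Retry");
    } else {
        // Normal case: the responseMessage has been populated.
        udrRoute(r, "Answered");
    }
}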

13.1.6. An Example
A Radius agent can act as an extension to a NAS, and an example is introduced to illustrate such a
scenario. In the example, an Analysis agent is used to validate the content of the received UDP packet,
and depending on the outcome, a reply is sent back (also in the form of a UDP packet). Valid UDRs
are routed to the subsequent agent, while invalid UDRs are deleted. Schematically, the workflow will
perform the following:

1. Decode the data into a UDR. Discard and continue with the next packet upon failure.

2. Validate the UDR. If it is an Access_Request_Int, a comparison with a subscriber table must
be performed to make sure the user is authorized (that is, exists in the table). All other UDR types
must be deleted.

3. If the user was found in the table, send the UDR to the next agent and a reply UDR of type
Access_Accept_Int back to the Radius agent. If the user was not found, delete the UDR and
send a reply UDR of type Access_Reject_Int to the Radius agent. Both reply UDRs must
have the Identifier field updated first.

To keep the example as simple as possible, valid records are not processed. Usually, no reply
is sent back until the UDRs are fully validated and manipulated. The focus of the example is on
the MediationZone® specific issues, such as decoding, validation and reply handling.

13.1.6.1. The Workflow Setup


A Radius agent must always be connected to an Analysis or Aggregation agent, since it is dependent
on receiving replies which are issued through APL commands.

Figure 380. The Analysis agent handles all reply logic.

The Radius agent will forward all received packets. The actual discarding and validation of the data is
handled in the Analysis agent.

13.1.6.2. The Ultra Format Definition


To simplify the example, only one type of request UDRs is accepted, and one of two types of reply
UDRs is sent back. Hence, the requestMessage and responseMessage fields of the Radius
UDR will be populated with any of these UDR types:

requestMessage • Access_Request UDR - A login request from a supposed user which
must be authenticated against a subscriber database.

responseMessage • Access_Accept UDR - Sent back in case the authentication succeeded.

• Access_Reject UDR - Sent back in case the authentication failed.


The full Ultra Format Definition for the example is not shown, since it is beyond the scope of this
manual to handle packet content or UFDL syntax.

13.1.6.2.1. Shortened Ultra Format Definition

The format definition is here stored in the Default directory with the name extendedRadius.

external Access_Request_Ext sequential :
identified_by(Code == 1),
dynamic_size(RecLength) {
int Code: static_size(1);
int Identifier: static_size(1);
int RecLength: static_size(2),
encode_value(udr_size);
bytearray Authenticator: static_size(16);
switched_set( Type ) {
int Type: static_size(1);
int Length: static_size(1),
encode_value(case_size);
case(1) {
ascii User_Name: dynamic_size( Length - 2 );
};
case(2) {
ascii User_Password: dynamic_size( Length - 2 ),
terminated_by(0);
};
// ...
// Further field definitions.
// ...
};
};

external Access_Accept_Ext sequential :
identified_by(Code == 2),
dynamic_size(RecLength) {
// ...
// Further field definitions.
// ...
};

external Access_Reject_Ext sequential :
identified_by(Code == 3),
dynamic_size(RecLength) {
// ...
// Further field definitions.
// ...
};

internal Vendor_Specific_Int {
int Type;
int Length;
int VendorID;
int SubAttrID;
int VendorLength;
int InfoCode;
string Data;
};


in_map Access_Request_Map : external(Access_Request_Ext),
target_internal(Access_Request_Int) {
automatic {
Vendor_Specific_Ext: internal( Vendor_Specific_Int ),
target_internal( Vendor_Specific_TI1 );
};
};

in_map Access_Accept_InMap: external(Access_Accept_Ext),
target_internal( Access_Accept_Int ) {
automatic {
Vendor_Specific_Ext: internal( Vendor_Specific_Int );
};
};

out_map Access_Accept_Map : external(Access_Accept_Ext),
internal(Access_Accept_Int) {
automatic;
};

in_map Access_Reject_InMap: external(Access_Reject_Ext),
target_internal( Access_Reject_Int ) {
automatic {
Vendor_Specific_Ext: internal( Vendor_Specific_Int );
};
};

out_map Access_Reject_Map : external(Access_Reject_Ext),
internal(Access_Reject_Int) {
automatic;
};

decoder Request_Dec: in_map(Access_Request_Map);

encoder Response_Enc: out_map(Access_Accept_Map),
out_map(Access_Reject_Map);

13.1.6.3. The Analysis Agent


All decoding, validation and manipulation is performed from the Analysis agent. The code logic is as
follows:

1. Each time the workflow is activated, a subscriber table is read into memory. To keep the example
simple, the table content is assumed to be static. For a real implementation, it is recommended to
re-read the table on a regular basis.

2. Decode the UDP packet. Consider only UDRs of type
Default.extendedRadius.Access_Request_Int.

3. Perform a lookup against the subscriber table, and create a reply of type
Default.extendedRadius.Access_Accept_Int or Default.extendedRadius.Access_Reject_Int,
depending on whether the subscriber was found in the table.

4. Route the reply back to the Radius agent.

table tmp_tab;

initialize {
tmp_tab = tableCreate("select SUBSCRIBER from VALID_SUBSCRIBERS");
}

consume {
list<drudr> reqList = listCreate(drudr);
radius.Radius r = (radius.Radius) input;
string err = udrDecode("Radius.Request_Dec",
reqList, r.requestMessage, true);

if ( (err != null) || (listSize(reqList) != 1) ) {
debug("Decoding error: " + err);
return;
}

drudr elem = (drudr) listGet(reqList, 0);

if (instanceOf(elem, Default.extendedRadius.Access_Request_Int)) {
Default.extendedRadius.Access_Request_Int req =
(Default.extendedRadius.Access_Request_Int) elem;
table rowFound = tableLookup( tmp_tab,
"SUBSCRIBER", "=", req.User_Name );

if (tableRowCount(rowFound) > 0) {
Default.extendedRadius.Access_Accept_Int resp =
udrCreate(Default.extendedRadius.Access_Accept_Int);
resp.Identifier = req.Identifier;
r.responseMessage =
udrEncode("Default.extendedRadius.Response_Enc", resp);
udrRoute( r );
} else {
Default.extendedRadius.Access_Reject_Int resp =
udrCreate(Default.extendedRadius.Access_Reject_Int);
resp.Identifier = req.Identifier;
r.responseMessage =
udrEncode("Default.extendedRadius.Response_Enc", resp);
udrRoute( r, "Response" );
}

} else {
debug("Invalid request type");
}
}

13.2. Diameter Agents


13.2.1. Introduction
This section describes the Diameter Stack and Request agents. These agents are real time extension
agents and are available on the MediationZone® Platform.

13.2.1.1. Prerequisites
The user of this information should be familiar with:

• The MediationZone® Platform


• Diameter Base Protocol, RFC 6733 (http://www.faqs.org/rfcs/rfc6733.html), which obsoletes RFC 3588

13.2.2. Overview
The Diameter agents enable you to configure MediationZone® to act as a Diameter server, a Diameter
client, or as a Diameter Proxy agent, by applying the Diameter Base Protocol.

This section covers information about the MediationZone® application of:

• Diameter Base Protocol

• Diameter Transport Security

13.2.2.1. The Diameter Base Protocol


The Diameter Base Protocol is a successor of the Radius Protocol. It is designed to provide an extensible
framework for any services that require the support of authentication, authorization, and accounting
(AAA), across several networks. The Diameter Base Protocol, unlike the Radius Protocol, also enables
new access control features while maintaining flexibility for further extension.

According to RFC 6733, the Diameter Base Protocol alone does not offer much functionality. The
Diameter Base Protocol should be regarded as a standard transport and management interface for AAA
applications that provide a well-defined functionality subset. To increase functionality, predefined
AAA applications are added. An AAA application usually consists of new command code and AVP
definitions that map the semantics of the application. One example of a predefined application is the
Diameter Credit-Control application. For further information, see RFC 4006.

13.2.2.1.1. The Diameter Workflow Operation

There are two Diameter agents in MediationZone® :

• Diameter_Stack

• Diameter_Request

13.2.2.1.1.1. Diameter_Stack

The MediationZone® Diameter_Stack agent manages transport, decoding, and encoding of Diameter
input messages.

In order for a workflow to act as a Diameter server, you must use the Diameter_Stack agent. The
Diameter_Stack agent communicates with the workflow by using the UDR type called
RequestCycleUDR. When a request message arrives at the stack, the message is decoded, validated,
and turned into a UDR of the pre-generated UDR type, as specified in Section 13.2.3.1.2, “Commands
Tab”. This UDR is inserted into the RequestCycleUDR's Request field and routed through the
workflow. By using the APL function udrCreate, the Answer field is populated with an appropriate
answer message, and then the RequestCycleUDR is routed back to the stack agent for transmission
of the answer.
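
The following APL fragment is a minimal sketch of this request/answer cycle in an Analysis agent
connected to the Diameter_Stack agent. The import statement, the UDR type path
Diameter.RequestCycleUDR, and the answer type MyApp.My_Answer are assumptions for illustration
only; the actual answer type is one of the command UDR types generated from your Diameter
Application profile, and the AVP fields to populate depend on that application.

import ultra.Diameter; // assumed location of the Diameter UDR types

consume {
    // The stack routes in a RequestCycleUDR with a populated Request field.
    Diameter.RequestCycleUDR cycle = (Diameter.RequestCycleUDR) input;

    // Create an answer command UDR. "MyApp.My_Answer" is a hypothetical type
    // generated from a Diameter Application profile.
    MyApp.My_Answer answer = udrCreate(MyApp.My_Answer);

    // Populate the application-specific AVP fields here. Origin_Host and
    // Origin_Realm are filled in by the stack if left set to null.

    // Assign the answer and route the cycle back to the Diameter_Stack agent.
    cycle.Answer = answer;
    udrRoute(cycle);
}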

It is possible to use multiple Diameter_Stack agents in a workflow if that is required in the business
logic. However, for the best possible performance, it is recommended to use one Diameter_Stack agent
per workflow.


Figure 381. A Diameter_Stack Workflow

Note!

• Diameter Base Protocol commands, such as Capability-Exchange-Request and
Capability-Exchange-Answer, are handled internally by the Diameter_Stack agent.
You see an indication of the execution of these commands in the Workflow Monitor in debug
mode. The Diameter_Stack agent also publishes MIM values that contain counters for the
various commands.

• AVPs (Attribute-Value pairs) from the Diameter Base Protocol are static, unchangeable, and
always available to MediationZone® .

13.2.2.1.1.2. Diameter_Request

In order for a workflow to act as a Diameter client, you must use both the Diameter_Request agent
and the Diameter_Stack agent. The Diameter_Request agent simply references a Diameter_Stack agent
that is suitable for the outgoing route.

A RequestCycleUDR with a populated Request field is routed into the Diameter_Request agent.
This agent then uses the selected stack to send the message. A RequestCycleUDR containing the
original Request field and a populated Answer field is then routed back into the workflow.

Figure 382. A Diameter_Request Workflow
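
A corresponding client-side sketch is shown below: an Analysis agent builds a request command UDR,
wraps it in a RequestCycleUDR, and routes it to the Diameter_Request agent; the answered
RequestCycleUDR later comes back on the agent's input. The import statement, the type paths
(Diameter.RequestCycleUDR, MyApp.My_Request) and the AVP field names of the request command
are assumptions for illustration only.

import ultra.Diameter; // assumed location of the Diameter UDR types

consume {
    // Build a request command UDR. "MyApp.My_Request" is a hypothetical type
    // generated from a Diameter Application profile; populate its AVP fields
    // (for example a Destination_Realm field) according to that application.
    MyApp.My_Request req = udrCreate(MyApp.My_Request);

    // Wrap the request in a RequestCycleUDR and route it to the
    // Diameter_Request agent. When the answer arrives, a RequestCycleUDR with
    // both Request and Answer populated is routed back into the workflow.
    Diameter.RequestCycleUDR cycle = udrCreate(Diameter.RequestCycleUDR);
    cycle.Request = req;
    udrRoute(cycle);
}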

13.2.2.1.2. Diameter Related UDR Types

You view the UDR types that are created by default in the Diameter agents (based on RFC 6733), in
the UDR Internal Format Browser, in the Diameter folder. To open the browser, right-click in an
APL code area and select UDR Assistance.


Figure 383. The UDR Internal Format Browser

Each Command or AVP that is defined in the Diameter Application profile configuration will result
in a UDR type after it has been saved. Note that the base commands and AVPs in RFC 6733 are pre-
defined and will be included automatically.

Note! The BaseUDR and BaseCommand UDR types are internal and shall not be used in APL
code.

13.2.2.1.2.1. RequestCycleUDR

The Diameter_Stack agent and the Diameter_Request agent communicate with the workflow by using the
UDR type RequestCycleUDR. For more information, see Section 13.2.2.1.1.1, “Diameter_Stack” and
Section 13.2.2.1.1.2, “Diameter_Request”.

The following fields are included in the RequestCycleUDR:

Field Description
Answer (BaseCommand (Diameter)) This field is populated with an "answer message UDR" before being routed
back by the workflow to the Diameter_Stack agent. For the Diameter_Request
agent it works the reverse way: after the Answer field has been
populated by the agent, the RequestCycleUDR is routed to the workflow.

Note! In the answer message UDR, the fields Origin_Host and
Origin_Realm will be automatically populated by the Diameter
stack in case they are unconfigured, i.e. set to null. Otherwise, your
configured values will be used.

The field EndToEndIdentifier is available as a read-only
field and will always have the same value as in the "request
message UDR".

AnswerReceivedTime (long) A timestamp indicating when the client receives the answer, in nanoseconds.
AnswerSentTime (long) A timestamp indicating when the server sends the answer, in nanoseconds.
Context (any) This is an internal working field that can be used in the workflow
configuration to keep track of internal workflow information related to the
request and to use it when processing the answer. An example for a proxy
workflow including TCP/IP and Diameter agents: when sending the
RequestCycleUDR to the Diameter Request agent, the input TCPIPUDR is saved
in the Context field. When the response is received from the Diameter
agent, the TCPIPUDR can be read from the Context field and used
to send back the response to the TCP/IP agent.
ExcludePeers (list<string>) You can populate this field with a list of peers, identified by their
hostnames. When Round Robin is selected as the Realm Routing
Strategy, these peers will be excluded from lookups in the Realm
Routing Table.

You can use ExcludePeers to act on errors in the answers from a
Diameter server. For instance, if a peer answers with the Result-Code
AVP set to DIAMETER_TOO_BUSY 3004, you may want to exclude
this peer in consecutive requests for some time, as shown in the
sketch after this table.
IncomingRemotePeer (string) This field is populated with the Diameter identity of the sending peer.

Note! The field is read-only and will be automatically populated
by the Diameter stack.

Request (BaseCommand (Diameter)) The Diameter_Stack agent populates the Request field with the "request
message UDR" before routing the RequestCycleUDR to the workflow.
For the Diameter_Request agent it works the reverse way, and request
messages are transmitted from the workflow to the Diameter_Stack
agent by using this field.

Note! In the request message UDR, the fields Origin_Host and
Origin_Realm will be automatically populated by the Diameter
stack in case they are unconfigured, i.e. set to null. Otherwise, your
configured values will be used.

The value of the field EndToEndIdentifier can be
configured by using APL. If it is not configured, the value will be set
by the stack sending out the request.

RequestReceivedTime (long) A timestamp indicating when the server receives the request, in nanoseconds.
RequestSentTime (long) A timestamp indicating when the client sends the request, in nanoseconds.
Session_Id(string) A Diameter Session-Id value that will be read from the Request field in
the Diameter message. This is a read-only field.
Throttled(boolean) This flag indicates whether the UDR has been throttled or not. Default
is false, and if the UDR has been throttled, it will be set to true.

Note! If Throttled is true, the Answer field of the
RequestCycleUDR will be set to null.
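
The following APL fragment sketches the ExcludePeers pattern referred to above, in an Analysis agent
that sees both outgoing request cycles and the corresponding answers. The route names, the type paths
(Diameter.RequestCycleUDR, MyApp.My_Answer) and the assumption that the Result-Code AVP is
exposed as a field named Result_Code are illustrative only; a real implementation would also age out
excluded peers after some time.

import ultra.Diameter; // assumed location of the Diameter UDR types

list<string> busyPeers;

initialize {
    busyPeers = listCreate(string);
}

consume {
    Diameter.RequestCycleUDR cycle = (Diameter.RequestCycleUDR) input;

    if (cycle.Answer != null) {
        // An answered cycle: if the peer reported DIAMETER_TOO_BUSY (3004),
        // remember its identity so that it can be excluded later.
        MyApp.My_Answer ans = (MyApp.My_Answer) cycle.Answer;
        if (ans.Result_Code == 3004) {
            listAdd(busyPeers, cycle.IncomingRemotePeer);
        }
        udrRoute(cycle, "Answers");
    } else {
        // A new outgoing request cycle: exclude the peers seen as busy from
        // the Round Robin realm routing lookup.
        cycle.ExcludePeers = busyPeers;
        udrRoute(cycle, "Requests");
    }
}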

13.2.2.1.2.2. WrappedMZObject

The WrappedMZObject UDR does not map to a Diameter message but can be used to send data between
workflows. WrappedMZObject is added as a field in the RequestCycleUDR (request or answer).

Note! Since this is not a normal Diameter message, the receiver has to be another Medi-
ationZone® workflow.

The following fields are included in the WrappedMZObject UDR:

Field Description
Data (any) The data to send to another workflow.
Destination_Host (string) The server host, the destination of the message.
Destination_Realm (string) The realm where to route the message.
Is_Request (boolean) Used to indicate if the message is a request or not.
Is_Unidirectional (boolean) To be used when no reply is expected.

13.2.2.1.3. Special Error Handling

The Diameter_Stack agent produces three error answers with MediationZone® internal result codes.

• No suitable route

When there is no peer or realm in the Routing profile that matches the content of the AVPs that are
used for routing, a message with the error code 4997 is returned.

The following AVPs are used for routing:

• Destination-Host

• Destination-Realm

• Acct-Application-Id

• Auth-Application-Id

• Vendor-Specific-Application-Id


This error may also occur when all the peers of a realm are specified in the ExcludePeers field
of a RequestCycle UDR and Round Robin is the selected Realm Routing Strategy.

• Connection to peer not open

When connection with the peer that is receiving the request is not established, a message with the
error code 4998 is returned.

• Timed out waiting for answer

When a request is sent to a peer and no answer is received within a configurable timeout, a message
with the error code 4999 is returned.

13.2.2.2. SCTP
The Diameter agents support Transmission Control Protocol (TCP) and Stream Control Transmission
Protocol (SCTP) as transport protocols.

Even though there are similarities between these protocols, SCTP provides some capabilities that TCP
lacks, including multistreaming and multihoming.

TCP transmits data in a single stream and guarantees that data will be delivered in sequence. If there
is data loss, or a sequencing error, delivery must be delayed until lost data is retransmitted or an out-
of-sequence message is received. SCTP's multistreaming allows data to be delivered in multiple, inde-
pendent streams, so that if there is data loss in one stream, delivery will not be affected for the other
streams.

The multihoming feature adds further redundancy by making use of multiple network interfaces.

When a network interface of a TCP connection fails, the connection will time out as it cannot redirect
data using an alternate network interface that is available on the host. Instead, failover to another inter-
face must be handled in the application layer.

Multihoming in SCTP allows multiple IP addresses in an association. As a result, failover to an alternate interface can be handled in the transport layer. Typically, different IP addresses are bound to different networks, thus providing additional resiliency in case of failure.

The number of transmissions, timeouts and any other parameters that determine when the failover
should occur must be set in the SCTP software specific to your operating system.

Figure 384. TCP Connection


Figure 385. SCTP Connection

On a system with SCTP installed you can bind multiple IP addresses to a hostname by editing the
hosts file. The location of this file is operating system specific but it can be found under /etc on
most Linux and Unix distributions.

Example 89.

127.0.0.1 localhost
192.168.1.111 server1
192.168.1.112 server1

When a Diameter_Stack agent in MediationZone® receives a connection request from a peer over
SCTP, it is not certain that its hostname will be resolved to the IP address of a particular network in-
terface. To ensure that a specific interface is used to setup the connection, you must specify the IP
address of the interface in the Primary Host text box in the Diameter_Stack agent. This can be useful
if the peer only uses a single static IP address to connect to the agent. Once the connection is established,
failover to an alternate interface is possible.

For further information about the Diameter_Stack agent, see Section 13.2.4, “Diameter_Stack Agent”.

13.2.2.3. Diameter Transport Security


The Diameter protocol communication over TCP can be protected by using Transport Layer Security,
TLS.

13.2.2.3.1. TLS Configuration

TLS requires a keystore file that is generated by using the Java standard command keytool. For further
information about the keytool command, see the JDK product documentation.


Example 90.

1. Create a keystore:

$ keytool -genkey -keyalg RSA -keystore MZstack.jks

Keytool prompts for required information such as identity details and password. Note that
the keystore password must be the same as the key password.

2. Generate the certificate

$ keytool -export -keystore MZstack.jks -file ./MZstack.cer

The certificate file can now be distributed to the other peers.

3. Install a Diameter node certificate in the MZstack keystore

$ keytool -import -alias "peerTLS" -file peerTLS.cer -keystore MZstack.jks

4. Enter the keystore path and the keystore password in the Diameter Stack configuration.

5. From the Peer Table, in the Diameter Routing Profile configuration, select the TCP/TLS
protocol for the peer with which you want to establish a secure connection.

13.2.2.3.2. TLS Configuration Properties

You can control the handling of unrecognized certificates by setting a property in either the
common.xml file or in the executioncontext.xml file on the machine that the workflow ex-
ecutes on.

mz.diameter.tls.accept_all

If the property is set to false (default), the Diameter_Stack agent does not accept any non-trusted
certificates. If it is set to true, the Diameter_Stack agent accepts any certificate.

In either case any unrecognized certificate will be logged in an entry in the System Log (in PEM
format).

Check the certificate. If you trust it, import it into the keystore by using the Java standard keytool
command. For further information, see the standard Java documentation.
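
As a hedged illustration only: platform properties in these files are normally added as property elements, so an entry that accepts all certificates might look like the snippet below. The element format is an assumption here; verify it against the existing entries in your common.xml or executioncontext.xml before editing.

<!-- Assumed property element format; the property name itself is documented above. -->
<property name="mz.diameter.tls.accept_all" value="true"/>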

13.2.3. Diameter Profiles


The Diameter agents' configuration includes two profile definitions:

• Application Profile

• Routing Profile

13.2.3.1. Diameter Application Profile


The Diameter application profile captures a set of AVP and command code definitions that are recog-
nized by the Diameter_Stack agent during runtime.

The Diameter application profile is loaded when you start a workflow that depends on it. Changes to the profile become effective when you restart the workflow.


To open the configuration, click the New Configuration button in the upper left part of the Medi-
ationZone® Desktop window, and then select Diameter Application Profile from the menu.

Figure 386. The Diameter Application Profile

13.2.3.1.1. Diameter Menu

From the main menu at the top of the configuration view, select Diameter to display available options
for import, export and AVPs.

The Diameter application profile enables you to import and export AVP and command specifications
in two supported formats:

• ABNF (Augmented Backus-Naur Format)

• XML (eXtensible Markup Language)

From Diameter you can also clear entire specifications.

Figure 387. The Diameter Menu

13.2.3.1.1.1. To Import ABNF Specifications

1. From the Diameter menu select Import ABNF Specifications and the Select a File to Import
dialog opens.

2. Select an ABNF file and click Open to import the ABNF file to your Diameter application profile
configuration.

Note! For further information about the ABNF file, see Section 13.2.7.1, “ABNF Specification
Syntax”.


13.2.3.1.1.1.1. Handling Duplicate Specification Files

If your ABNF file contains specifications that are already included in the Diameter profile, you are
prompted to select one of the alternatives to overwrite, rename or skip importing the file specification.

Figure 388. Duplicate ABNF Dialog-Box

Overwrite The file specification replaces the existing one.

Rename The file specification is imported with a new name so that a conflict with the existing specification is avoided.

Example 91.

For example: A specification called Re-Auth-Request is already included in the profile when the first attempt to re-import it occurs. The new file name is Re-Auth-Request-1. The next attempt to re-import the same specification file will import Re-Auth-Request-2.

Skip The file specification is not imported.

On Current Definition Only Apply the selection of Overwrite, Rename, or Skip only to the current specification.
On All Upcoming Definitions Apply to all future imported specifications.

13.2.3.1.1.2. To Import XML Specifications

1. From the Diameter menu select Import XML Specifications and the Select a File to Import
dialog opens.

2. Select an XML file and click Open. The XML file is imported to your Diameter application profile
configuration.

Note!

For further information about the XML file, see Section 13.2.7.2, “XML Specification Syntax”.

For further information about handling specifications (XML or ABNF) that are already included
in the application profile, see Section 13.2.3.1.1.1.1, “Handling Duplicate Specification Files”.


13.2.3.1.1.3. To Export ABNF Specifications

1. From the Diameter menu select Export ABNF Specifications and the Select a Target File for
Export dialog opens.

2. Select an ABNF file and click Save. The ABNF file (both AVPs and commands) is saved as an
export file.

Note! For further information about the ABNF file, see Section 13.2.7.1, “ABNF Specification
Syntax”.

13.2.3.1.1.4. To Export XML Specifications

1. From the Diameter menu select Export XML Specifications and the Select a Target File for
Export dialog opens.

2. Select an XML file and click Save. The XML file (both AVPs and commands) is saved as an export
file.

Note! For further information about the XML file, see Section 13.2.7.2, “XML Specification
Syntax”.


13.2.3.1.1.5. To Clear AVP Specifications

From the Diameter menu select Clear AVP Specifications and click OK. All the AVP specifications
are deleted.

13.2.3.1.1.6. To Clear Command Specifications

From the Diameter menu select Clear Command Specifications and click OK. All the Command
specifications are deleted.

13.2.3.1.2. Commands Tab

The commands that you use in the Diameter application profile are predefined command sets of spe-
cific solutions.

The Commands tab in the Diameter application profile configuration enables you to create and edit
command sets that are customized according to your needs.


Figure 389. The Diameter Application Profile - Commands Tab

Name The command name. For example: Credit-Control-Request.
Code The unique numeric command code. For example: 272. For further information, see Show Base Commands.
Application ID The numeric representation of the Diameter Application that this command belongs to. For example: 4 - Diameter Credit-Control.
Show Base Commands Select this check box to view predefined commands, their numeric code, and Application ID. These are the commands specified in Diameter Base Protocol (RFC 6733).

13.2.3.1.2.1. To Add a Diameter Command Specification

The Add Diameter Command Specification dialog is displayed when clicking the Add icon in the
Commands tab.

Figure 390. The Diameter Commands Tab - Add Diameter Command Specification


Command Name Enter a command name.
Command Code Enter a unique numeric command code.
Application ID The numeric representation of the Diameter Application that the command belongs to.
Flags Select the Request check box to mark the command as a request message (r-bit is set in Diameter message header); clear the Request check box to mark the command as an answer message.

Select Proxiable to enable the command to support proxy, relay, or redirection (p-bit is set in Diameter message header). For more information about the Proxiable flag, see the Diameter Base Protocol (RFC 6733).

This flag is set by default when a new command is added.

Select Error to mark that the message contains a protocol error (e-bit is set in
Diameter message header), so that the message will not conform to the ABNF
described for this command. This flag is typically used for test purposes.

If you want to send an error message answer from APL, it is recommended that
you use the UDR Diameter.Base.Error_Answer_Message.
Auto-Populate Click on this button to automatically fill out the AVP Layout table with data,
based on your Flags selection. The Category of AVP data is set to Required.

Note! To manually modify the data in the table cells double-click a cell.

Selecting the Request and Proxiable check boxes will auto-populate the AVP
Layout table with the following AVPs:

• Origin-Realm

• Origin-Host

• Destination-Realm

Selecting Proxiable only will auto-populate the AVP Layout table with the fol-
lowing AVPs:

• Origin-Realm

• Origin-Host

• Result-Code

Selecting Error will prevent the AVP Layout table from being auto-populated.

AVP Layout Table

This table includes a list of all the AVPs in a specific command. From the table you can add, edit, and
remove AVPs.

To manually modify the data in the table cells double-click on a cell. Either a drop-down list button
will appear and enable you to select a different content, or the cell will become editable.

Category There are three different AVP categories:

• A Fixed AVP must be included in its predefined space in the command.


• A Required AVP must be included, but may appear anywhere in the message.

• An Optional AVP can appear anywhere in the message.

AVP Enter or modify an AVP name.
Min Enter the lowest number of AVPs that the command should contain.
Max Enter the highest number of AVPs that the command should contain.

UDR types are generated for the Diameter application profile based on the command configuration. When Max is set to 2 or higher, or to <unbounded>, the data type of the UDR field for the AVP will be list<data type>.

13.2.3.1.3. AVPs Tab

AVPs carry the data payload in all Diameter messages. While MediationZone® recognizes all the
AVPs that are defined in the Diameter Base Protocol, it also recognizes your customized AVPs. In the
AVPs tab you can define your own customized AVPs.

Figure 391. The Diameter Application Profile - Diameter AVPs Tab

Auto-Populate Click on this button to fill in missing table entries for all the user-defined AVPs of a command in the table.

Note! Does not apply to base AVPs.

Name The AVP name.
Code The numeric code that represents the AVP. This number is unique and fixed for every AVP. For further information, see the AVP specifications, for example the RFCs.
Type The AVP data format as specified in the Diameter Base Protocol (RFC 6733). UDR types are generated for the Diameter application profile based on the AVP configuration. The AVP data formats are mapped to the UDR data types as follows:

Address ipaddress
DiameterIdentity string
Enumerated int
Grouped list<type>
Float32 float
Float64 double
IPFilterRule IPFilterRuleUDR(Diameter)
OctetString bytearray
Signed32 int
Signed64 long
Time date
Unsigned32 int
Unsigned64 long
UTF8String string

Vendor The numeric Vendor ID of the AVP. The vendor ID of all the IETF standard
Diameter applications is 0 (zero).
Show Base AVPs To display all predefined AVP types, check Show Base AVPs. These are the
AVPs specified in Diameter Base Protocol (RFC 6733).
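
The mapping above, together with the Min and Max settings in the Commands tab, determines the generated UDR field types. As a minimal, hedged sketch: in the example application profile defined later in this chapter, the ServiceGroup AVP (Unsigned32, unbounded cardinality inside the grouped Service AVP) surfaces as a list<int> field and can be read in APL as follows.

import ultra.diameter_example.dia_app_prof;

consume {
    if (instanceOf(input.Request, Service_Authorization_Request)) {
        Service_Authorization_Request request =
            (Service_Authorization_Request) input.Request;
        // ServiceGroup is declared with 0* cardinality, so the generated
        // field is a list; Unsigned32 maps to int per the table above.
        list<int> groups = request.Service.ServiceGroup;
        debug(groups);
    }
}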

13.2.3.1.3.1. To Add an AVP

To open the Add Diameter AVP Specification dialog, click on the Add icon at the bottom of the
AVP tab.

Figure 392. The Add Diameter AVP Specification View

AVP Name The name of the AVP.
AVP Code The numeric id of the AVP.
Vendor ID The number that represents the vendor. The default value is 0 (zero).
AVP Type The data type of the AVP.

Note! Selecting Enumerated or Grouped reveals configuration options in the Enumeration/Group Properties table.


Mandatory ('M') Bit The M-bit allows the sender to indicate to the receiver whether or not understanding the semantics of an AVP and its content is mandatory. If the M-bit is set by the sender and the receiver does not understand the AVP or the values carried within that AVP, then a failure is generated. For further information about the M-bit, see the Diameter Base Protocol (RFC 6733).

The following applies for incoming and outgoing messages that contain the configured AVP:

• MUST: The M-bit is set to 1 in outgoing messages and must be set to 1 in incoming messages.

• MAY: The M-bit is set to 0 or 1 (configurable in the Advanced tab) in outgoing messages and may be set to 0 or 1 in incoming messages.

• SHOULD: The M-bit is set to 0 in outgoing messages and may be set to 0 or 1 in incoming messages.

• MUST NOT: The M-bit is set to 0 in outgoing messages and must be set to 0 in incoming messages.

You can change the value of the M-bit from APL if Mandatory ('M') Bit is set to MAY or SHOULD.
Protection ('P') Bit The P-bit is reserved for future usage of end-to-end security.
Enumeration/Group Properties This table is accessible for editing only when AVP Type is configured as Enumerated or as Grouped. This table enables you to add, edit, or remove AVPs or enumeration values.

For further information about the table's columns and entries, see Section 13.2.3.1.2.1, “To Add a Diameter Command Specification”.

13.2.3.1.3.2. To Edit an AVP

To open the Edit Diameter AVP Specification dialog, click on the Edit icon at the bottom of the
AVPs tab.

The Edit Diameter AVP view is identical to the Add Diameter AVP Specification view. The same
description applies for editing an AVP specification.

13.2.3.1.4. CER/CEA Tab

The identifiers in this tab define the advertised applications for the capabilities handshake. They are
used whenever the Diameter_Stack agent initiates or responds to a new transport connection, in order
to negotiate the compatible applications for the link.

For further information about Authentication and Accounting Applications, see Diameter Base Protocol
(RFC 6733).


Figure 393. The Diameter Application Profile - CER/CEA Tab

Auto-Populate Click on this button to add Application IDs that are used in any of the commands to the Application ID table.

In the Vendor Specific Applications table, available Vendor IDs are extracted from the AVPs tab into the Vendor ID column.

Auto-Populate cannot populate Vendor ID and Application ID into the Vendor Specific Applications table if a vendor specific application command is configured in the Commands tab, that is, if a command includes a Vendor-Specific-Application-Id AVP.

Application IDs Table

Application ID Numeric codes of the supported applications.
Authentication Select this check box to flag an application with Authentication.
Accounting Select this check box to flag an application with Accounting.

Vendor Specific Applications Table

Vendor ID Enter the numeric code of the vendor.
Auth App ID Enter the vendor specific authentication application ID.
Acct App ID Enter the vendor specific accounting application ID.

13.2.3.1.5. Advanced Tab

The Advanced tab contains additional settings.


Figure 394. The Diameter Application Profile - Advanced Tab

Default Outgoing 'M' Bit Flag Rule

Set to 1 When MAY Is Selected When this check box is selected and Mandatory ('M') Bit is set to MAY in the AVPs tab, the M-bit will be set to 1 in outgoing messages.

13.2.3.2. Diameter Routing Profile


The Diameter routing profile enables you to define the Peer Table and the Realm Routing Table
properties for the Diameter_Stack agent. You can also enable throttling, which allows you to prevent
more than the specified number of UDRs per second to be forwarded. The throttling functionality uses
the token bucket algorithm.

The Diameter routing profile is loaded when you start a workflow that depends on it. Changes to the
profile become effective when you restart the workflow. It is also possible to make changes effective
while a workflow is running. For more information about this, see Section 13.2.3.2.3, “To Dynamically
Update the Diameter Routing Profile”.

To define a routing profile, click on the New Configuration button in the upper left part of the Medi-
ationZone® Desktop window, and then select Diameter Routing Profile in the menu.

13.2.3.2.1. Routing Tab

Figure 395. The Diameter Routing Profile - Routing Tab


13.2.3.2.1.1. Peer Table

A Diameter_Stack agent that uses the routing profile maintains transport connections with all the hosts
that are defined in the Peer table list. Connections and handshakes of hosts that are not in this list are
rejected with the appropriate protocol errors.

Note! MediationZone® will actively try to establish connections to any hosts that are included
in this list, unless the Do Not Create Outgoing Connections option is checked in the Diamet-
er_Stack agent.

Hostname The hostname (case sensitive) or IP address of a Diameter Identity.

For example: ggsn01.vendor.com.

Note! The content of the Origin-Host AVP in the answer commands from the
specified peer should be identical to this value. If the values do not match, the
MIM values published by the Diameter_Stack agent that contain counters are
not updated correctly. This may occur, for instance, if you have specified a
hostname in this text box but the Origin-Host AVP contains an IP address. It is
recommended that you consistently use either IP addresses or hostnames when
configuring the Diameter profiles and agents.

Port The port to connect to when initiating transport connections with a peer.

For example: 3868.


Protocol The transport protocol to use when initiating a peer connection. The following settings
are available:

• TCP

• TCP/TLS

• SCTP

When TCP/TLS is selected, the Diameter_Stack requires a secure connection from this host. You configure this feature by setting the Keystore Path and the Keystore Password in the Diameter_Stack agent. For further information, see Section 13.2.4.1.3, “Advanced Tab”.

Note! SCTP must be installed on every EC host that uses the SCTP protocol. For installation instructions, see your operating system documentation.

Throughput Threshold If throttling has been enabled for the peer, this text box will show the configured threshold for when transmissions of request UDRs should be throttled. Throttled UDRs will be routed back into the workflow.

For example: 1000 (which means a maximum of 1000 UDRs/second will be transmitted).


Note! Throttling will determine if and how the workflow will limit the number
of requests and UDRs sent out from the workflow. For information regarding
how to configure the Diameter agent to reject incoming requests or UDRs to
the workflow, see Section 13.2.4.1.2, “Diameter Too Busy Tab”.

13.2.3.2.1.2. To Add a Host

1. In the Diameter Routing Profile, click on the Add button beneath the Peer Table.

The Add Host dialog opens.

Figure 396. The Diameter Routing Profile - Adding a Host

2. Enter the hostname and port for the host in the Hostname and Port text boxes.

3. Select protocol in the Protocol drop-down-list.

4. If you want to enable throttling for the peer, select the Enable Throttling check box, and then enter
the maximum number of request UDRs per second you want the Diameter_Stack agent to transmit
to the peer in the Throughput Threshold (UDR/s) text box.

Note! Ensure that you handle the throttled UDRs in your APL code in the workflow in order not to lose any UDRs.

5. Click on the Add button and the host will be added in the Peer Table, and then click on the Close
button to close the dialog when you are finished adding hosts.

13.2.3.2.1.3. Realm Routing Table

Realm-based routing is performed when the Destination-Host AVP is not set in a Diameter
message. All realm-based routing is performed based on lookups in the Realm Routing Table.

When the lookup matches more than one set of keys, the first result from the lookup will be used for
routing. For this reason, the order of the rows in the Realm Routing Table must be considered. You
can control the order of the rows by using the arrow buttons. Clicking on the table columns to change
the displayed sort order does not have any effect on the actual order of the rows in the Realm Routing
Table.

Realm Routing Strategy Diameter requests are routed to peers in the realms in accordance with the selected Realm Routing Strategy. The following settings are available:

• Failover: For each realm, Diameter requests are routed to the first specified peer
(primary) in the Hostnames cell, or the first host resolved by a DNS query. If the
connection to the first peer fails, requests to this realm are routed to the next peer
(secondary) in the cell, or next host resolved by a DNS query.


Failback to the first peer (primary) is performed when possible.

• RoundRobin: Diameter requests are evenly distributed to all the specified peers in
the Hostnames cell, or peers resolved by DNS queries. If the connection to a peer
fails, the requests are distributed to the remaining hosts. This also applies when
UDRs are throttled due to the settings in the Peer Table.

The connections to the peers are monitored through the standard Diameter watchdog
as described in RFC 6733 and RFC 3539. Possible states of the connection are: OKAY,
SUSPECT, DOWN, REOPEN, INITIAL.

The table below contains examples of how Diameter requests are routed to the peers
of a realm, with the RoundRobin strategy, depending on the peer connection state:

Peer 1 Status   Peer 2 Status   Peer 3 Status   Route Distribution
OKAY            OKAY            OKAY            Peer 1, Peer 2, or Peer 3
OKAY            OKAY            SUSPECT         Peer 1 or Peer 2
REOPEN          SUSPECT         OKAY            Peer 3
DOWN            DOWN            SUSPECT         Peer 3
DOWN            REOPEN          DOWN            Peer 2
DOWN            DOWN            DOWN            None

Diameter requests are routed to peers with status REOPEN and SUSPECT as a last resort, i.e. when there are no peers with status INITIAL or OKAY.

Diameter requests are not routed to peers that are specified in the ExcludePeers field of a RequestCycle UDR. For more information about the RequestCycle UDR, see Section 13.2.2.1.2.1, “RequestCycleUDR”.
Enable Dynamic Peer Discovery Select this checkbox when you want to use DNS queries (Dynamic Peer Discovery) to find peer hosts in realms. The queried peer host information is buffered by the Diameter_Stack agent according to the TTL (time to live) parameter in the DNS records. When the TTL has expired, the agent will attempt to refresh the information. If the refresh fails, the buffered information will be deleted.

When Enable Dynamic Peer Discovery is selected, DNS queries are performed at:

• Workflow start

• After TTL Expiration

• Dynamic update of Diameter routing profile

Note!

• To make changes to this setting effective, you must restart the workflow(s).

• If the DNS service is unavailable (server available but service down) when
starting the workflow(s), the system log entry will indicate errors in realm
lookups. In order to resume lookups in DNS, you need to dynamically update
the routing table in the Diameter_Stack agent when the DNS is available again.
For information about how to dynamically update the routing table, see Sec-
tion 13.2.3.2.3, “To Dynamically Update the Diameter Routing Profile”.

For information about how to select DNS servers, see Section 13.2.3.2.2, “DNS Tab”.


Realm The realm name (case sensitive). Realm is used as primary key in the routing table
lookup. The Diameter Stack agent compares this value with the Destination_Realm
AVP. If left empty, all the destination realms are valid for this route.
Applications The applications that this route serves. This entry is used as a secondary key field in
the routing table lookup. If left empty, all the applications are valid for this route.

For example: 3,4.


Hostnames A list of one or more peer hosts in the realm. All hostnames must be selected from the Peer Table.

When Node Discovery is set to Dynamic, you should leave this field empty.
Node Discovery The method of finding the peer hosts in the realm:
• Static - The peer hosts are specified in the Hostnames field of the Realm Routing
Table.

• Dynamic - The Diameter_Stack agent uses DNS queries (Dynamic Peer Discovery)
to find the peer hosts. These queries may resolve to multiple IP addresses or host-
names.

Note! Entries in the Realm Routing Table that have the Dynamic setting are
ignored (not matched), unless Enable Dynamic Peer Discovery is selected.

When a DNS server resolves a realm to peer hosts, it may return fully qualified DNS
domain names with a dot at the end. These trailing dots are removed by the Diamet-
er_Stack agent.

13.2.3.2.1.4. To Add a Realm

1. In the Diameter Routing Profile, click on the Add button beneath the Realm Routing Table.

The Add Route dialog opens.


Figure 397. The Diameter Routing Profile - Adding a Realm

2. Enter the realm name in the Realm text box.

3. If the realm serves specific applications, click on the Add button beneath the Applications list box
and specify the Application Id. Repeat this step for each application.

4. You should only perform this step if Node Discovery is set to Static and the peer hosts are to be specified in the Realm Routing Table. Click on the Add button beneath the Hostname list box and select a host from the drop-down list. Repeat this step for each host in the realm.

5. If you specified the peer hosts of the realm in the previous step, select Static from Node Discovery.
If you want to use Dynamic Peer Discovery instead, select Dynamic from this drop-down list.

6. Click on the Add button and the realm will be added in the Realm Routing Table, and then click
on the Close button to close the dialog when you are finished adding realms.


13.2.3.2.2. DNS Tab

Figure 398. The Diameter Routing Profile - DNS Tab

You can use the DNS tab to configure the DNS settings used for looking up peer hosts of realms.

For information about how to configure your DNS for Dynamic Peer Discovery, see the Diameter Base Protocol (RFC 6733).

Note!

• To make changes to this tab effective, you must restart the workflow(s).

• Avoid configuring the same peer host in both DNS and the Peer Table, as this may cause duplicate instances of Diameter peers.

• The hostnames and realm names in the Diameter_Stack agent are case sensitive.

Retry Interval Time (ms) Enter the time (in milliseconds) that the Diameter_Stack agent must wait before retrying a failed DNS connection.
Max Number Of Retries Enter the maximum number of times that the Diameter_Stack agent should retry to connect to the servers in the DNS Servers list before it gives up. When the agent has attempted to connect to all servers (after an initial failed attempt), it counts as one retry.
DNS Servers Enter the hostnames or IP addresses of the DNS servers that can be queried. The topmost available server will be used.

If the DNS Servers list is empty, the Diameter_Stack agent will use the file /etc/resolv.conf on the Execution Context host to select the DNS server. For information about how to configure resolv.conf, see your operating system documentation.
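
As a hedged illustration (the addresses are hypothetical and the exact contents depend on your network), a minimal /etc/resolv.conf that points the Execution Context host at two name servers could look like this:

# Hypothetical name server addresses; replace with your own.
nameserver 10.0.0.53
nameserver 10.0.1.53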

13.2.3.2.3. To Dynamically Update the Diameter Routing Profile

You can refresh the routing table of a Diameter_Stack agent while a workflow is running. When the
agent refreshes the routing table, it reads the updated Peer Table, Realm Routing Table and Realm
Routing Strategy from the selected Diameter routing profile.


The setting Enable Dynamic Peer Discovery in the Routing tab and the settings in the DNS tab are not re-read from the Diameter routing profile at refresh. To make changes to these settings effective, you must restart the workflow(s).

The routing table can be refreshed from the Workflow Monitor or from the Command Line Tool.

13.2.3.2.3.1. Workflow Monitor

1. In the Workflow Monitor, double-click the Diameter_Stack agent to open the Workflow Status
Agent configuration.

2. In the Command tab, select the Update Routing Table button to refresh the routing table.

13.2.3.2.3.2. Command Line Tool

Run the following command:

mzsh mzadmin/<password> wfcommand <workflow name> <Diameter_Stack agent name>

Example 92. Update Routing Table

mzsh mzadmin/<password> wfcommand Default.my_workflow Stack1

When Round Robin is the selected Realm Routing Strategy, you can reset the selection cycle by
running the following command:

mzsh mzadmin/<password> wfcommand <workflow name> <Diameter_Stack agent name> clearstrategystate

Example 93. Reset Round Robin Selection

mzsh mzadmin/<password> wfcommand Default.my_workflow Stack1 clearstrategystate

13.2.3.2.3.3. To Read the Realm Routing Table in APL

You can use the Diameter_Stack MIM value Realm Routing Table to read the realm routing table of a Diameter_Stack agent from APL. The MIM value is of the map<string,map<string, list<string>>> type and is defined as a global MIM context type.

The string values in the outer map contain the realm names (primary key). The string values of the
inner map contain the applications (secondary key). The lists in the map contain the hostnames of the
peers in the realm.

Figure 399. Realm Routing Table MIM

Asterisks (*) are used in the strings to denote an unspecified realm name or unspecified applications.

The values in the inner and outer maps are sorted exactly as the Realm Routing Table of the selected
Diameter routing profile.


Example 94. Realm Routing Table MIM

Assume that the following realm routing table is defined for a Diameter_Stack agent:

Realm   Application   Peers

dr                    peer1, peer2
        100, 200      peer3, peer4
                      peer5, peer6

The following APL code can be used to read the table:

initialize {
//Note the space between the angle brackets!
map<string, map<string, list<string> > > realmTable =
(map<string, map<string, list<string> > >)
mimGet("Stack1", "Realm Routing Table");
//Check the size of the table
if (mapSize(realmTable) != 2)
abort("Realm table incorrect size");
//Check that realms are included
if (mapKeys(realmTable) != listCreate(string, "dr", "*"))
abort("Wrong realms");
//Get the inner map for realm name "dr"
map<string, list<string> > drMap = mapGet(realmTable, "dr");
//Get the inner map for realm name "*" (unspecified realm)
map<string, list<string> > starMap = mapGet(realmTable, "*");
//Any Application Id
debug(mapGet(drMap, "*"));
//Application Id 100
debug(mapGet(starMap, "100"));
//Any Application Id
debug(mapGet(starMap, "*"));
}

The spaces between the angle brackets in the example above are required. If missing,
the APL will fail to compile.

Example debug output:

12:11:40: [peer1, peer2]


12:11:40: [peer3, peer4]
12:11:40: [peer5, peer6]

For more information about MIM values published by the Diameter_Stack agent, see Section 13.2.4.3,
“Meta Information Model”.

13.2.4. Diameter_Stack Agent


By including the Diameter_Stack agent in a workflow you enable MediationZone® to act as a Diameter
server for any application that follows the Diameter Base Protocol (RFC 6733).


For further information about the agent's operation, see Section 13.2.2.1.1.1, “Diameter_Stack”.

13.2.4.1. Configuration
You open the Diameter_Stack agent configuration view from the workflow configuration by either
double-clicking the agent icon, or by right-clicking it and then selecting Configuration.

13.2.4.1.1. General Tab

The General tab contains general Diameter settings that are needed for configuration of the agent.

Figure 400. The Diameter Stack Agent - General Tab

Application Profile Click Browse to select a predefined Application Profile. The profile contains
details about advertised applications, as well as supported AVPs and command
codes.

For further information, see Section 13.2.3.1, “Diameter Application Profile”.


Routing Profile Click Browse to select a predefined routing profile. The profile contains details
about supported hosts, listening ports, applications, and realms.

For further information, see Section 13.2.3.2, “Diameter Routing Profile”


Server Protocol Select the transport protocol for incoming connections.

Note! SCTP must be installed on every EC host that uses the SCTP
protocol. For installation instructions, see your operating system docu-
mentation.

Diameter Identity Select Hostname to manually enter the hostname (case sensitive) of this Dia-
meter agent. In case the Origin-Host AVP has been left unconfigured, the
Hostname value will be applied whenever a Diameter message is transmitted
from this agent.

If SCTP is configured as server protocol, all IP addresses that are resolved from
the Diameter Identity will be used as SCTP endpoints through multihoming.
Use DNS Hostname If enabled, the Diameter Identity of the local agent is automatically set by looking up the DNS hostname that is associated with the local IP address. If there is more than one network interface, the agent aborts on startup.
Realm Enter the Diameter realm (case sensitive) for this specific host. In case the
Origin-Realm AVP has been left unconfigured, the Realm value will be applied
in messages transmitted from this agent.
Listening Port Enter the port on which the Diameter agent should listen for incoming transport connections.


Primary Host When using SCTP, optionally enter the IP address of the network interface that
will be used to establish a transport connection. If left unconfigured, any IP
address that can be resolved from the Hostname will be selected.

13.2.4.1.2. Diameter Too Busy Tab

The Diameter_Stack receives, decodes, and forwards UDRs asynchronously. An internal queue in the
workflow engine acts as a backlog for the workflow. When the load of messages gets too heavy to
process, you can either use the configurations in the Diameter Too Busy tab, in order to respond to
callers, or configure the Supervision Service with actions to take.

Note! The configurations described in this section will determine if and how the Diameter agent
will reject incoming requests or UDRs to the workflow. For information regarding how to limit
the number of requests and UDRs sent out from the workflow, see Section 13.2.3.2, “Diameter
Routing Profile”.

13.2.4.1.2.1. Diameter Too Busy

The Diameter Too Busy tab enables you to configure the agent with instructions to respond to callers.

Figure 401. The Diameter Stack Agent - Diameter Too Busy Tab

Enable Diameter Too Busy Select this check box to enable the agent to automatically respond with DIAMETER_TOO_BUSY when the workflow is overloaded.
Maximum Workflow Queue Size (%) Enter the highest limit of the internal queue size. When this limit is reached the agent sends "Too Busy" responses.

This setting is measured in percent (%) of the total Workflow Queue Size that is configured in Workflow Properties.

Note! You can change this value during processing from the
Workflow Monitor.

Throughput Threshold (UDRs/s) The Throughput Threshold is also a congestion control setting. With it, you can make the agent reject some of the incoming UDRs.

When the load of requests per second exceeds the value of this property,
some of the requests will be rejected and the process sending the request
will get a Diameter Too Busy response.

Note! You can change this value during processing from the
Workflow Monitor.


Time Between Log Entries (s) This property tells the agent how often it should write messages to the System Log when it is in congestion prevention mode.

Time Between Log Entries is an integer value between 1 and 3600 seconds.

13.2.4.1.2.2. Supervision Service

If you want to reject certain messages when the load gets too heavy, you can use the Supervision Service.
With this service you can select one of the following overload protection strategies:

• Diameter_ACInterimRequest - For rejecting requests of type AccountingInterim-Request

• Diameter_CCInitialRequest - For rejecting Credit-Control Initial requests

• Diameter_CCTerminationRequest - For rejecting Credit-Control Termination requests

• Diameter_ACStartRequest - For rejecting requests of type AccountingStart-Request

• Diameter_ReAuthRequest - For rejecting ReAuthentication requests

• Diameter_AbortSessionRequest - For rejecting requests of type AbortSession

• Diameter_ACStopRequest - For rejecting requests of type AccountingStop-Request

• Diameter_CCUpdateRequest - For rejecting Credit-Control Update requests

• Diameter_CCEventRequest - For rejecting requests of type Credit-Control-EventRequest

For each strategy you can select if you want to reject 25, 50, or 100 % of the requests.

See Section 4.1.8.5.2, “Supervision Service” for further information.

13.2.4.1.3. Advanced Tab

This tab includes settings of a more advanced nature.

Figure 402. The Diameter Stack Agent - Advanced Tab

Diameter Answer Timeout (ms) Enter the period of time (in milliseconds) before an unanswered request is handled as an error, i.e. an Error Answer Message is returned. See Section 13.2.2.1.3, “Special Error Handling” for further information.


Note! The timeout is checked periodically according to the Timeout Resolution (ms) setting. The time set in this field may be added to the time period entered for Diameter Answer Timeout (ms).

Consequently, setting the timeout interval to a very small value will not be very useful, since the delay in detecting the timeouts will have a quite large effect on the actual time interval before detecting timeouts.

Examples

Setting Diameter Answer Timeout (ms) to:

• 50 will result in timeouts being detected within 50 - 149 ms

• 300 will result in timeouts being detected within 300 - 399 ms

• 750 will result in timeouts being detected within 750 - 849 ms

Timeout Resolution (ms) Enter the interval at which the Answer Timeout should be checked.
Enable Debug Events Select this check box to enable debug mode. Useful for testing purposes.
Enable Runtime Validation Select this check box to enable runtime validation of the Diameter messages against the command and AVP definitions in the Diameter application profile.

The following is validated in incoming and outgoing messages:

• Occurrences and position of AVPs

• Setting of AVP flags, i.e. M-bit and P-bit

• Setting of command flags, i.e. Proxiable (p-bit) and Error (e-bit)

When runtime validation is selected, incoming messages that fail the valida-
tion are rejected by the Diameter_Stack agent and the appropriate result code
is applied in an error answer message.

Enabling runtime validation may have some performance impact.


Do Not Create Outgo- Select this check box to prevent the agent from actively trying to connect or
ing Connections reconnect if a connection is lost, with peers. When this option is checked,
the agent is said to run in passive mode.
Keystore Path The path to a keystore file that contains the private key of the Diameter_Stack,
and any certificates needed to verify peers. This parameter is only applicable
if TLS security is used for one or more of the peers in the Diameter routing
profile. The path is relative to the MediationZone® Execution Context on
which the Diameter_stack runs.
Keystore Password Enter the password to accessing the keystore.
Watchdog (ms) Enter the watchdog timer interval TWINIT. For information about TWINIT,
see RFC 3539.
Maximum Message Enter the maximum number of bytes allowed in a single Diameter message.
Size (B)
Socket Write Timeout Enter the timeout value for writing to a socket. If the write operation is
(ms) blocked for longer than the timeout period, the peer will be disconnected.
Write blocks may occur if the receiving peer is overloaded.


Connect Timeout (ms) Enter the timeout value for peer connection attempts. This setting is only
applicable for TCP connections.
Connect Interval (ms) Enter the minimum time interval between connection attempts when routing
messages from a workflow and the peer connection is not established. An
interval timer is started at the first connection attempt; subsequent connection
attempts to the same peer are then suppressed until the timer has expired.

When realm-based routing is used, the connect interval is applied only if all
configured peers in the realm are down.

13.2.4.2. Introspection
The introspection is the type of data an agent expects and delivers.

The agent emits and receives UDRs of the type RequestCycleUDR.

13.2.4.3. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

13.2.4.3.1. Publishes

Note! In order for the MIM counters in this section to publish correct values, the hostnames
specified in the Peer Table of the selected Diameter routing profile must be consistent with the
Origin-Host AVP in answer commands. For more information about the Peer Table, see Sec-
tion 13.2.3.2, “Diameter Routing Profile”.

MIM Value Description


Bytes Received This MIM parameter contains the number of received bytes from each peer in the selected Diameter routing profile.

Bytes Received is of the map<string, long> type and is defined as a global MIM context type. The string values in the map contain the hostnames of the peers.
Bytes Transmitted This MIM parameter contains the number of transmitted bytes to each peer in the selected Diameter routing profile.

Bytes Transmitted is of the map<string, long> type and is defined as a global MIM context type. The string values in the map contain the hostnames of the peers.
CEA Count This MIM parameter contains the number of sent and received CEA (Capabilities-Exchange-Answer) commands for each peer in the selected Diameter routing profile.

CEA Count is of the map<string, long> type and is defined as a global MIM context type. The string values in the map contain the hostnames of the peers.
CER Count This MIM parameter contains the number of sent and received CER (Capabilities-Exchange-Request) commands for each peer in the selected Diameter routing profile.

CER Count is of the map<string, long> type and is defined as a global MIM context type. The string values in the map contain the hostnames of the peers.


Communication Failure Network Layer This MIM parameter contains the number of connection problems detected on network level.

Communication Failure Network Layer is of the long type and is defined as a global MIM context type.
Communication Failure Protocol Layer This MIM parameter contains the number of connection problems detected on protocol level.

Communication Failure Protocol Layer is of the long type and is defined as a global MIM context type.
Diameter Too Busy Count This MIM parameter contains the number of sent Diameter Too Busy responses. The MIM value is reset each time the MIM is read.

Diameter Too Busy Count is of the long type and is defined as a global MIM context type.
Diameter Too Busy Total Count This MIM parameter contains the number of Diameter Too Busy responses sent, since Workflow start.

Diameter Too Busy Total Count is of the long type and is defined as a global MIM context type.
DPA Count This MIM parameter contains the number of sent and received DPA (Disconnect-Peer-Answer) commands for each peer in the selected Diameter routing profile.

DPA Count is of the map<string, long> type and is defined as a global MIM context type. The string values in the map contain the hostnames of the peers.
DPR Count This MIM parameter contains the number of sent and received DPR (Disconnect-Peer-Request) commands for each peer in the selected Diameter routing profile.

DPR Count is of the map<string, long> type and is defined as a global MIM context type. The string values in the map contain the hostnames of the peers.
DWA Count This MIM parameter contains the number of sent and received DWA (Device-Watchdog-Answer) commands for each peer in the selected Diameter routing profile.

DWA Count is of the map<string, long> type and is defined as a global MIM context type. The string values in the map contain the hostnames of the peers.
DWR Count This MIM parameter contains the number of sent and received DWR (Device-Watchdog-Request) commands for each peer in the selected Diameter routing profile.

DWR Count is of the map<string, long> type and is defined as a global MIM context type. The string values in the map contain the hostnames of the peers.
Incoming Messages This MIM parameter contains the number of received messages.

Incoming Messages is of the long type and is defined as a global MIM context type.
Origin State Id This MIM parameter contains a value that is incremented in the initialize workflow execution state. It is used for populating the AVP Origin-State-Id.


Origin State Id is of the int type and is defined as a global MIM context type.
Outgoing Messages This MIM parameter contains the number of sent messages.

Outgoing Messages is of the long type and is defined as a global MIM context type.
Peer Round Trip Latency Min This MIM parameter contains the minimum round trip latency to peer since Workflow start.

Peer Round Trip Latency Min is of the long type and is defined as a global MIM context type.
Peer Round Trip Latency Max This MIM parameter contains the maximum round trip latency to peer since Workflow start.

Peer Round Trip Latency Max is of the long type and is defined as a global MIM context type.
Peer Round Trip Latency Avg This MIM parameter contains the average round trip latency to peer, calculated (in milliseconds) over the last 1000 received records.

Peer Round Trip Latency Avg is of the long type and is defined as a global MIM context type.
Peer Status This MIM parameter contains the state of a peer connection. The first string is the hostname and the second string is the state. The state can be any of the following: OKAY, SUSPECT, DOWN, REOPEN, or INITIAL. These are described in RFC 3539.

The value of Peer Status is always INITIAL in the initialize workflow execution state.

Peer Status is of the map<string,string> type and is defined as a global MIM context type.
Realm Routing Table This MIM parameter contains the realm routing table of the Diameter_Stack agent.

Realm Routing Table is of the map<string,map<string, list<string>>> type and is defined as a global MIM context type. The string values in the outer map contain the realm names (primary key). The string values of the inner map contain the applications (secondary key). The lists in the map contain the hostnames of the peers in the realm.

For an example of how to use this MIM value, see Section 13.2.3.2.3.3, “To Read the Realm Routing Table in APL”.
Records in decoder queue This MIM parameter contains the current number of records in the queue for decoding.

Records in decoder queue is of the long type and is defined as a global MIM context type.
Rejected Messages This MIM parameter contains the number of rejected messages.

Rejected Messages is of the long type and is defined as a global MIM context type.
Workflow Round Trip Latency Avg This MIM parameter contains the average workflow processing latency, calculated (in milliseconds) over the last 1000 processed records.

Workflow Round Trip Latency Avg is of the long type and is defined as a global MIM context type.


Workflow Round Trip Latency Max This MIM parameter contains the maximum workflow processing latency since Workflow start.

Workflow Round Trip Latency Max is of the long type and is defined as a global MIM context type.
Workflow Round Trip Latency Min This MIM parameter contains the minimum workflow processing latency since Workflow start.

Workflow Round Trip Latency Min is of the long type and is defined as a global MIM context type.
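
The map-typed MIM values above can be read from APL with mimGet, in the same way as the Realm Routing Table shown in Section 13.2.3.2.3.3, “To Read the Realm Routing Table in APL”. As a minimal, hedged sketch (the agent name "Stack1" and the peer hostname "dia1" are examples only):

consume {
    map<string, string> status =
        (map<string, string>) mimGet("Stack1", "Peer Status");
    // Keys are peer hostnames, values are the connection states described above.
    debug(mapKeys(status));
    debug(mapGet(status, "dia1"));
}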

13.2.4.3.2. Accesses

The agent does not access any MIM parameters.

13.2.4.4. Diameter Peer State Changed Event


When you run a Diameter workflow, the peer connections of the Diameter_Stack agents are monitored
through the standard Diameter watchdog as described in RFC 6733 and RFC 3539. Possible states of
the connection are: OKAY, SUSPECT, DOWN, REOPEN, INITIAL. The Diameter peer state changed
event is triggered whenever there is a change of peer state.

You can configure an event notification that is triggered whenever a state change occurs. For more
information about this event, see Section 5.5.19, “Diameter Peer State Changed Event”.

13.2.4.5. Diameter Dynamic Event


You can configure an event notification that is triggered in the following cases when dynamic peer
discovery is enabled:

• A Diameter workflow is started.

• The routing table of a Diameter_Stack agent is dynamically updated.

• The TTL (time to live) of a cached DNS record expires.

For more information about this event, see Section 5.5.5, “Diameter Dynamic Event”.

13.2.4.6. Agent Message Events


An information message from the agent, generated according to the configuration in the Event Noti-
fication Configuration.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Generic Diameter Server initialized for realm XXXX

The message is generated when the workflow is started.

• RFC 6733 compliance warning: Passive mode enabled

The message is generated if the workflow is started with the agent in the passive mode.

• Generic Diameter Server stopping.

The message is generated when the workflow is stopping.


13.2.4.7. Debug Events


Debug messages are dispatched when in debug mode. The messages appear in the Workflow Monitor
during execution and can also be generated according to the settings in the Event Notification Con-
figuration.

For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.

13.2.5. Diameter_Request Agent


By including both the Diameter_Stack and the Diameter_Request agents in a workflow you enable
MediationZone® to act as a Diameter client.

You apply a Diameter_Request agent to your workflow in order to transmit requests from the workflow.

13.2.5.1. Configuration
You open the Diameter_Request agent configuration view from the workflow configuration by either
double-clicking the agent icon, or by right-clicking it and then selecting Configuration.

Figure 403. The Diameter Request Agent - Diameter Request Tab

Associated Diameter_Stack

From the drop-down list that includes all the Diameter_Stack agents in the workflow, select the Dia-
meter_Stack that you want requests to be sent from.

13.2.5.2. Introspection
The introspection is the type of data that an agent expects and delivers.

The agent emits and receives UDRs of the type RequestCycleUDR.

13.2.5.3. Meta Information Model


The agent does not publish nor access any MIM parameters.

13.2.5.4. Agent Message Events


There are no agent message events for this agent.

13.2.5.5. Debug Events


There are no debug events for this agent.

13.2.6. A Diameter Example


This section describes an example based on two workflow configurations that include Diameter agents and profiles.

The example below demonstrates a call scenario where a client issues a Service-Authorization-
Request in order to authorize a user for a certain service. The Diameter_Stack agent sends back a
Service-Authorization-Answer that is assigned with the value yes (1), if authorization is
successful, or no (0), if authorization has failed.


13.2.6.1. Diameter Application Profile


You only need to create one Diameter application profile in order to run this example on one system.
This is possible since several Diameter_Stack agents can share one profile. If you wish to run the
Diameter_Stack Workflow and Diameter_Request workflow on different systems, you may create two
identical profiles.

For information about how to configure a Diameter application profile, see Section 13.2.3.1, “Diameter
Application Profile”.

The commands used in this example are specified in ABNF format below:

Service-Authorization-Request ::= < Diameter Header: 383, REQ, PXY 999>
    1 < Session-Id >
    1 { Origin-Host }
    1 { Origin-Realm }
    1 { Destination-Host }
    1 { Destination-Realm }
    1 { Accounting-Record-Type }
    1 { Accounting-Record-Number }
    1 { Auth-Application-Id }
    1 { Service }
    1 { User }

Service-Authorization-Answer ::= < Diameter Header: 383, PXY 999>
    1 < Session-Id >
    1 { Result-Code }
    1 { Origin-Realm }
    1 { Origin-Host }
    1 { Accounting-Record-Type }
    1 { Accounting-Record-Number }
    1 { Auth-Application-Id }
    1 { Authorized }

The AVPs used by the commands are specified in ABNF format below:

ServiceId ::= <AVP Header: 601 = Unsigned32>
    (M:MUST, P:MAY, Protected:NO)

ServiceGroup ::= <AVP Header: 602 = Unsigned32>
    (M:MUST, P:MAY, Protected:NO)

ServiceName ::= <AVP Header: 603 = OctetString>
    (M:MUST, P:MAY, Protected:NO)

Username ::= <AVP Header: 610 = OctetString>
    (M:MUST, P:MAY, Protected:NO)

IP ::= <AVP Header: 611 = OctetString>
    (M:MUST, P:MAY, Protected:NO)

User ::= <AVP Header: 612>
    (M:MUST, P:MAY, Protected:NO)
    1 { IP }
    0*1 [ Username ]

Service ::= <AVP Header: 604>
    (M:MUST, P:MAY, Protected:NO)
    1 { ServiceId }
    1 { ServiceName }
    0* [ ServiceGroup ]

Authorized ::= <AVP Header: 620 = Enumerated>
    (M:MUST, P:MAY, Protected:NO)
    %FALSE 0
    %TRUE 1

13.2.6.2. Diameter Routing Profile


To handle routing of Diameter requests, two tables are used. If the Destination_Host in the request
is set, it is matched directly with an entry in the Peer Table. If only Destination_Realm is set,
it is compared with the Realm Routing Table.

You only need to create one Diameter routing profile in order to run this example on one system. This is possible since several Diameter_Stack agents can share one profile. If you wish to run the Diameter_Stack Workflow and the Diameter_Request workflow on different systems, you may create two profiles.

If you only create one (shared) Diameter routing profile, you must configure all peers and realms in this profile. As it is not allowed to use duplicate IP addresses or hostnames in the peer table, you may also need to update the hosts file of your operating system in order to run two peers on the same machine. The location of this file is operating system specific but it can be found under /etc on most Linux and Unix distributions.


Example 95. Hosts file

127.0.0.1 localhost
172.16.207.1 dia1 dia2 dia3 dia4

Figure 404. The Diameter Routing Profile

For further information about the Diameter routing profile, see Section 13.2.3.2, “Diameter Routing Profile”.

13.2.6.3. Diameter_Stack Workflow

Figure 405. Diameter_Stack Workflow Configuration

13.2.6.3.1. Analysis Agent

This Analysis agent contains the code that is needed to read the Service_Authorization_Request.
Normally it would then perform the authorization by populating a request towards an external charging/rating
system, but in this code example we only create a positive Service-Authorization-Answer:

import ultra.diameter_example.dia_app_prof;

initialize {
    debug("wf started");
}

consume {
    //The Request is a subclass of RequestCycleUDR
    if (instanceOf(input.Request, Service_Authorization_Request)) {
        Service_Authorization_Request request =
            (Service_Authorization_Request)input.Request;

        //This is how you extract the AVPs.
        debug("Service Id: " + request.Service.ServiceId);
        debug("Service Name: " + baToStr(request.Service.ServiceName));
        debug("User Name: " + baToStr(request.User.Username));

        //This AVP is optional within the grouped AVP.
        if (udrIsPresent(request.User.IP)) {
            debug("IP: " + baToStr(request.User.IP));
        }

        //This is how you create the answer.
        Service_Authorization_Answer answer =
            udrCreate(Service_Authorization_Answer);

        //Set the mandatory values.
        //OriginHost and OriginRealm are set by the server.
        answer.Session_Id = request.Session_Id;
        answer.Result_Code = 2001; //Means ok
        answer.Accounting_Record_Type = request.Accounting_Record_Type;
        answer.Accounting_Record_Number = request.Accounting_Record_Number;

        //Validate if the User has access to the requested Service.
        // ... then ...
        //Set the configured AVP.
        answer.Authorized = 1; // User authorized

        //Attach the answer to the Diameter Session.
        input.Answer = answer;

        //Route the answer back to the client.
        udrRoute(input);
    }
    //Send a diameter answer error message.
    //The e-bit is set in the message header.
    else {
        Diameter.Base.Error_Answer_Message eam =
            udrCreate(Diameter.Base.Error_Answer_Message);
        eam.Result_Code = 3001;
        eam.Session_Id = input.Session_Id;
        eam.Error_Message = "DIAMETER_COMMAND_UNSUPPORTED";
        eam.Error_Reporting_Host = "dia2";
        Diameter.Base.AVP.User_Name un =
            udrCreate(Diameter.Base.AVP.User_Name);
        //Adding an optional "any AVP" into the Error Message.
        un.Value = "dr_user";
        list<any> l = listCreate(any);
        listAdd(l, un);
        eam.Additional_AVPs = l;
        input.Answer = eam;
        udrRoute(input);
    }
}

13.2.6.3.2. Diameter_Stack Agent


Configure the Diameter_Stack agent so that Diameter Identity, Port, and Realm are consistent with
the Diameter routing profile used by the stack in the Diameter_Request workflow.

13.2.6.4. Diameter_Request Workflow

Figure 406. Diameter Request Workflow Configuration

13.2.6.4.1. TCP_IP Agent

This agent receives comma-separated UDRs on the format user,id, where user is an IP address and
id is the Service ID of a requested service.

The ultra definition for the TCP UDR is as follows:

external TCPext {
    ascii user : terminated_by( "," );
    ascii id : int(base10), terminated_by( 0xD );
};

internal TCPint :
    extends_class( "com.digitalroute.wfc.tcpipcoll.TCPIPUDR" ) {
    string session_Id;
};

in_map TCP_inMap : external( TCPext ),
    internal( TCPint ),
    target_internal( TCP_TI ) { automatic; };

decoder TCP_Decoder : in_map(TCP_inMap);

13.2.6.4.2. Analysis Agent

This agent creates a Service_Authorization_Request and routes it to the Diameter_Request agent. A
unique Diameter Session ID is created using a synchronized function. Use the following code within
the Analysis agent (analysis_1):

import ultra.diameter_example.dia_app_prof;

long sessionNo;

synchronized long sessionIncr() {
    sessionNo = sessionNo + 1;
    return sessionNo;
}

consume {
    if (instanceOf(input, TCP_TI)) {
        TCP_TI tcp_req = (TCP_TI)input;
        tcp_req.session_Id = "session" + (string)sessionIncr();
        debug("User:" + tcp_req.user);
        debug("Id:" + tcp_req.id);

        //Create the Diameter request.
        //The UDR fields are based on the
        //configuration of the Diameter application profile.
        Service_Authorization_Request request =
            udrCreate(Service_Authorization_Request);

        request.Session_Id = tcp_req.session_Id;

        //Grouped types must be created before instantiation.
        request.Service = udrCreate(AVP.Service);
        request.User = udrCreate(AVP.User);

        //Set all the mandatory values,
        //or the encoding of the RequestCycleUDR will fail.
        request.Service.ServiceId = tcp_req.id;
        strToBA(request.Service.ServiceName, "Example Service");
        strToBA(request.User.IP, tcp_req.user);

        //This value should match a hostname in
        //the peer table of the routing profile.
        request.Destination_Host = "dia2";

        //Set this value and clear destination host
        //to use realm based routing.
        //request.Destination_Realm = "example_realm.com";

        //Create a RequestCycleUDR and assign the request to it.
        RequestCycleUDR diam_req = udrCreate(RequestCycleUDR);
        diam_req.Request = request;

        udrRoute(diam_req, "Diameter_req");
        debug("Routed Request");

    } else if (instanceOf(input, RequestCycleUDR)) {
        //Check if this is an answer UDR.
        if (instanceOf(((RequestCycleUDR)input).Answer,
            Service_Authorization_Answer)) {

            Service_Authorization_Answer answ =
                (Service_Authorization_Answer)((RequestCycleUDR)input).Answer;

            //Store the value of Authorized in a bytearray.
            bytearray tcp_answ;
            strToBA(tcp_answ, (string)answ.Authorized);

            //Send the value back to the TCP/IP agent.
            udrRoute(tcp_answ, "answer");
            debug("Received Answer");
        }
        else {
            //Check if it is an error answer message.
            if (instanceOf(((RequestCycleUDR)input).Answer,
                Diameter.Base.Error_Answer_Message)) {
                Diameter.Base.Error_Answer_Message eam =
                    (Diameter.Base.Error_Answer_Message)
                        ((RequestCycleUDR)input).Answer;
                debug("Received an error message answer: " +
                    eam.Error_Message);
            }
        }
    }
}

13.2.6.4.3. Diameter_Stack Agent

The Diameter_Stack agent must be included in the request workflow since the Diameter_Request agent cannot
perform actions such as handshaking according to the Diameter Base protocol.

Configure the Diameter_Stack agent so that Diameter Identity, Port, and Realm are consistent with
the Diameter routing profile used by the stack in the Diameter_Stack workflow.

The Analysis agent connected with the Diameter_Stack does nothing. However, it is required since
all agents must be connected.

13.2.7. Syntax Description


In this section you will find a detailed description of the import and export syntax formats that are
supported by MediationZone® .

13.2.7.1. ABNF Specification Syntax


MediationZone® uses the ABNF format defined in the Diameter Base Protocol (RFC 6733).


Example 96. The Diameter Command ABNF Specification - Copied from RFC 6733:

Every Command Code that is defined must include a corresponding ABNF specification that is
used to define the AVPs. The following format is used in the definition:

command-def      = command-name "::=" diameter-message

command-name     = diameter-name

diameter-name    = ALPHA *(ALPHA / DIGIT / "-")

diameter-message = header [ *fixed] [ *required]
                   [ *optional] [ *fixed]

header           = "<Diameter-Header:" command-id
                   [r-bit] [p-bit] [e-bit] [application-id] ">"

application-id   = 1*DIGIT

command-id       = 1*DIGIT
                   The Command Code assigned to the command.

r-bit            = ", REQ"
                   If present, the 'R' bit in the Command Flags is set, indicating
                   that the message is a request, as opposed to an answer.

p-bit            = ", PXY"
                   If present, the 'P' bit in the Command Flags is set, indicating
                   that the message is proxiable.

e-bit            = ", ERR"
                   If present, the 'E' bit in the Command Flags is set, indicating
                   that the answer message contains a Result-Code AVP in the
                   "protocol error" class.

fixed            = [qual] "<" avp-spec ">"
                   Defines the fixed position of an AVP.

required         = [qual] "{" avp-spec "}"
                   The AVP MUST be present and can appear anywhere in the message.

optional         = [qual] "[" avp-name "]"
                   The AVP-name in the 'optional' rule cannot evaluate to any AVP
                   Name which is included in a fixed or required rule. The AVP can
                   appear anywhere in the message.

qual             = [min] "*" [max]
                   See ABNF conventions, RFC 2234 Section 6.6. The absence of any
                   qualifiers depends on whether it precedes a fixed, required or
                   optional rule. If a fixed or required rule has no qualifier,
                   then exactly one such AVP MUST be present. If an optional rule
                   has no qualifier, then 0 or 1 such AVP may be present.

                   NOTE: "[" and "]" have a different meaning than in ABNF
                   (see the optional rule, above). These braces cannot be used
                   to express optional fixed rules (such as an optional ICV
                   at the end). To do this, the convention is '0*1fixed'.

min              = 1*DIGIT
                   The minimum number of times the element may be present.
                   The default value is zero.

max              = 1*DIGIT
                   The maximum number of times the element may be present. The
                   default value is infinity. A value of zero implies the AVP
                   MUST NOT be present.

avp-spec         = diameter-name
                   The AVP-spec has to be an AVP Name, defined in the base or
                   extended Diameter specifications.

avp-name         = avp-spec / "AVP"
                   The string "AVP" stands for *any* arbitrary AVP Name,
                   which does not conflict with the required or fixed position
                   AVPs defined in the command code definition.

The following is a definition of a fictitious command code:

Example-Request ::= < Diameter Header: 9999999, REQ, PXY >
                    { User-Name }
                  * { Origin-Host }
                  * [ AVP ]

13.2.7.2. XML Specification Syntax


This section describes the tags used in the XML file, and includes a DTD file that supports the XML
file.

13.2.7.2.1. XML

13.2.7.2.1.1. <diameter-protocol>

The XML file starts with a diameter-protocol declaration.

Example 97. diameter-protocol syntax

<diameter-protocol name='unknown'>

The name attribute is optional and specifies the Diameter protocol name.

The diameter-protocol tag can contain the following tags:

<avp>

<command>

13.2.7.2.1.2. <avp>

The avp tag defines an AVP.


Example 98. avp syntax

<avp id='55' name='Event-Timestamp' vendor='1234'>

Each AVP tag requires an ID and a name. The ID is the AVP code allocated by IANA for this AVP.
The name identifies this AVP in grouped AVPs or commands. The vendor attribute is optional and
sets the ID of the AVP vendor.

The avp tag must contain the following tag.

<flag-rules>

The avp tag must contain one of the following tags.

<simple-type>

<enumeration>

<layout>

The avp tag may contain the following tag.

<may-encrypt>

13.2.7.2.1.3. <flag-rules>

The flag-rules tag is required for the avp tag, and defines the AVP flags with a number of flag-rule
definition tags.

Example 99. flag-rules syntax

<flag-rules>
<flag-rule name='mandatory' rule='must'/>
<flag-rule name='protected' rule='must_not'/>
</flag-rules>

13.2.7.2.1.4. <flag-rule>

The flag-rule tag defines the value for one of the valid AVP flags.

Example 100. flag-rule syntax

<flag-rule name='mandatory' rule='must'/>

The flag-rule tag has two required attributes. The name attribute is the flag name, either mandatory
or protected. The rule attribute defines the flag value and can be must, may, should_not or must_not.

13.2.7.2.1.5. <may-encrypt>

The may-encrypt tag defines whether this AVP may be encrypted or not.


Example 101. may-encrypt syntax

<may-encrypt/>

13.2.7.2.1.6. <simple-type>

<simple-type> with its name attribute defines the AVP type as any of the following types:

• Unsigned32

• Unsigned64

• Signed32

• Signed64

• Float32

• Float64

• DiameterIdentity

• UTF8String

• Address

• OctetString

• Time

• DiameterURI

• IPFilterRule

Example 102. simple-type syntax

<simple-type name='Time'/>

13.2.7.2.1.7. <enumeration>

The enumeration tag defines AVPs of type Enumerated, and can have any number of <enumerator>
sub-tags.

Example 103. enumeration syntax

<enumeration>
<enumerator value='1' name='EVENT_RECORD'/>
<enumerator value='2' name='START_RECORD'/>
<enumerator value='3' name='INTERIM_RECORD'/>
<enumerator value='4' name='STOP_RECORD'/>
</enumeration>


13.2.7.2.1.8. <enumerator>

The enumerator tag defines an element in an Enumerated AVP type. It has two required attributes
called name and value. For an example of the syntax see Section 13.2.7.2.1.7, “<enumeration>”.

13.2.7.2.1.9. <layout>

The layout tag defines AVPs of type Grouped. A grouped AVP consists of a sequence of AVPs. It is
also possible to nest grouped AVPs, that is, to include a grouped AVP within a grouped AVP.

Example 104. layout syntax

<layout>
<fixed>
<avp-ref name='Session-Id' min='0'/>
</fixed>
<required>
<avp-ref name='Origin-Host'/>
<avp-ref name='Origin-Realm'/>
<avp-ref name='Result-Code'/>
</required>
<optional>
<avp-ref name='Origin-State-Id'/>
<avp-ref name='Error-Reporting-Host'/>
<avp-ref name='Error-Message'/>
<avp-ref name='Proxy-Info' max='*'/>
<any-avp/>
</optional>
</layout>

The layout tag can contain the following tags.

<fixed>

<required>

<optional>

13.2.7.2.1.10. <fixed>

The fixed tag defines the fixed AVPs included in a grouped AVP. For an example of the syntax see
Example 104, “layout syntax”.

The fixed tag can contain the following tag.

<avp-ref>

13.2.7.2.1.11. <required>

The required tag defines the required AVPs included in a grouped AVP. For an example of the syntax
see Example 104, “layout syntax”.

The required tag can contain the following tags.

<avp-ref>

<any-avp>


13.2.7.2.1.12. <optional>

The optional tag defines the optional AVPs included in a grouped AVP. For an example of the syntax
see Example 104, “layout syntax”.

The optional tag can contain the following tags.

<avp-ref>

<any-avp>

13.2.7.2.1.13. <avp-ref>

The avp-ref tag contains a reference to an AVP that should be included in a grouped AVP. The tag
has a required attribute called name. It holds the name of the referenced AVP. The optional attributes
min and max set the qualifiers for the AVP. For an example of the syntax, see Example 104, “layout
syntax”.

13.2.7.2.1.14. <any-avp>

The any-avp tag defines that the group list of a grouped AVP can contain any number of AVPs of any
kind.

13.2.7.2.1.15. <command>

The command tag defines a command.

Example 105. command syntax

<command id='257'>

The required attribute id is the command code allocated by IANA for this command. The optional
attribute application sets the command application ID.

The command tag requires one of the following tags.

<answer>

<request>

13.2.7.2.1.16. <answer>

This tag defines an answer command. The attribute name is required.

Example 106. answer syntax

<answer name='Error-Answer-Message'>

The answer tag can contain the following tags.

<header-bits>

<layout>

13.2.7.2.1.17. <header-bits>

This tag defines the header bits of a command.


Example 107. header-bits syntax

<header-bits>
<header-bit name='request' value='0'/>
<header-bit name='proxiable' value='1'/>
<header-bit name='error' value='1'/>
</header-bits>

The header-bits tag can contain the following tags.

<header-bit>

13.2.7.2.1.18. <header-bit>

This tag defines a command header bit. For an example of the syntax, see Example 107, “header-bits
syntax”.

The header-bit tag has two required attributes. name is the header bit name and can be request,
proxiable or error. value is the bit value (0 or 1). Any other value will cause the import of the
XML file to abort with an error message.
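
As a hedged illustration of how the tags described above fit together, the sketch below shows a minimal
protocol definition. It reuses names from the example in Section 13.2.6 (the Authorized AVP with code 620
and the Service-Authorization-Answer command with code 383 and application ID 999), but the layout is
abbreviated and the referenced base AVPs (Session-Id, Result-Code) are assumed to be defined elsewhere,
so treat this only as a structural example:

<?xml version='1.0' encoding='ascii'?>
<diameter-protocol name='diameter_example'>
  <avp id='620' name='Authorized'>
    <flag-rules>
      <flag-rule name='mandatory' rule='must'/>
      <flag-rule name='protected' rule='must_not'/>
    </flag-rules>
    <enumeration>
      <enumerator value='0' name='FALSE'/>
      <enumerator value='1' name='TRUE'/>
    </enumeration>
  </avp>
  <command id='383' application='999'>
    <answer name='Service-Authorization-Answer'>
      <header-bits>
        <header-bit name='request' value='0'/>
        <header-bit name='proxiable' value='1'/>
      </header-bits>
      <layout>
        <fixed>
          <avp-ref name='Session-Id'/>
        </fixed>
        <required>
          <avp-ref name='Result-Code'/>
          <avp-ref name='Authorized'/>
        </required>
      </layout>
    </answer>
  </command>
</diameter-protocol>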

13.2.7.2.2. DTD

diameter.dtd supports the XML file in Section 13.2.7.2.1, “XML”.

<?xml version='1.0' encoding='ascii'?>


<!ELEMENT diameter-protocol (avp|command)*>
<!ATTLIST diameter-protocol name CDATA #REQUIRED>
<!ELEMENT avp (flag-rules,enumeration?,layout?)>
<!ATTLIST avp id CDATA #REQUIRED>
<!ATTLIST avp name CDATA #REQUIRED>
<!ATTLIST avp datatype CDATA #IMPLIED>
<!ATTLIST avp vendor CDATA #IMPLIED>
<!ELEMENT flag-rules (flag-rule*)>
<!ELEMENT flag-rule EMPTY>
<!ATTLIST flag-rule name CDATA #REQUIRED>
<!ATTLIST flag-rule rule CDATA #REQUIRED>
<!ELEMENT enumeration (enumerator*)>
<!ELEMENT enumerator EMPTY>
<!ATTLIST enumerator value CDATA #REQUIRED>
<!ATTLIST enumerator name CDATA #REQUIRED>
<!ELEMENT layout (fixed?,required?,optional?)>
<!ELEMENT fixed (avp-ref*)>
<!ELEMENT required (avp-ref*,any-avp?)>
<!ELEMENT optional (avp-ref*,any-avp?)>
<!ELEMENT avp-ref EMPTY>
<!ATTLIST avp-ref name CDATA #REQUIRED>
<!ATTLIST avp-ref min CDATA #IMPLIED>
<!ATTLIST avp-ref max CDATA #IMPLIED>
<!ELEMENT any-avp EMPTY>
<!ELEMENT command (request?, answer?)>
<!ATTLIST command id CDATA #REQUIRED>
<!ATTLIST command application CDATA #IMPLIED>
<!ENTITY % command-children "(header-bits, layout)">
<!ELEMENT request %command-children;>
<!ATTLIST request name CDATA #REQUIRED>
<!ELEMENT answer %command-children;>


<!ATTLIST answer name CDATA #REQUIRED>


<!ELEMENT header-bits (header-bit*)>
<!ELEMENT header-bit EMPTY>
<!ATTLIST header-bit value CDATA #REQUIRED>

13.2.8. Configuration and Design Considerations


This section describes details of the Diameter Base Protocol implementation that you should take into
account when working with the Diameter agents.

13.2.8.1. Limitations
While MediationZone® provides the capabilities of a Diameter Server and a Diameter Client, it does
not provide all the capabilities of a Diameter Agent as defined in the Diameter Base Protocol (RFC
6733), chapter 1.2, Terminology.

The following limitations apply:

• The agent cannot act as a relay agent.

• Cache handling during redirect is not supported.

• DTLS over SCTP is not supported.

• Transport security (TLS) is negotiated via the Inband-Security AVP in CER/CEA exchange and
not prior to the CER/CEA exchange as recommended in RFC 6733.

13.2.8.2. Number of Decoding Threads


You can use the property mz.workflow.decoderqueue.max_threads to specify the maximum
number of threads used by the Diameter_Stack agent for decoding messages. Setting a lower value
than default (10) may enhance performance if the EC host has a low number of CPU cores and the
active workflows are complex. On the other hand, decoding may constitute a bottleneck when performing
simple processing on a host machine with a high number of CPU cores. In this case, setting a higher
value may provide better performance. This property must be set manually in
executioncontext.xml.
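
The exact structure of executioncontext.xml is installation specific; only the property name comes from
this section. As a minimal sketch, assuming the file accepts property entries of the form shown below,
the setting could look like this:

<!-- Sketch only: the surrounding element structure of executioncontext.xml is
     an assumption; 4 is an arbitrary example value for a host with few CPU cores. -->
<property name="mz.workflow.decoderqueue.max_threads" value="4"/>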

13.2.8.3. Failed-AVP
The AVP Failed-AVP is populated for the following values in the Result-Code AVP:

DIAMETER_INVALID_AVP_VALUE 5004

DIAMETER_MISSING_AVP 5005

DIAMETER_AVP_OCCURS_TOO_MANY_TIMES 5009

DIAMETER_UNABLE_TO_COMPLY 5012

DIAMETER_INVALID_AVP_LENGTH 5014

13.2.8.4. NAPTR Service Field Format


The Diameter_Stack agent uses NAPTR records in DNS for dynamic peer discovery. It is case insensitive
to the service-parms in the NAPTR service fields that are configured in the DNS server.
For more information about NAPTR, see RFC 6408.


13.2.8.5. TWCLOSE Property


The optional property TWCLOSE should be used when connecting to peers that do not send Diameter
Watchdog Requests in the REOPEN state. The property enables a timeout timer that is reset for each
received message. Add the parameter mz.diameter.watchdog.twclose to executioncontext.xml
to set the timeout. The value should be set to a higher value than Watchdog (ms) in the Advanced
tab of the Diameter application profile.
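
As with the decoder thread property above, the snippet below is only a sketch that assumes
executioncontext.xml accepts property entries in this form. The property name and its relation to the
Watchdog (ms) setting come from this section; the value and the assumption that it is expressed in
milliseconds (matching Watchdog (ms)) are illustrative only:

<!-- Sketch only: set the timeout higher than Watchdog (ms) in the Advanced tab
     of the Diameter application profile. -->
<property name="mz.diameter.watchdog.twclose" value="40000"/>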

13.3. Kafka Agents


13.3.1. Introduction
This section describes the Kafka agents. These are standard agents of the MediationZone® platform.
The agents are used to forward and collect messages using Kafka. Kafka is cluster-based software
that can be executed either embedded in MediationZone® or externally, outside of MediationZone®.
Kafka uses a high-throughput publish-subscribe messaging model. It persists all messages and by
design it decouples the publisher from the subscriber. This means that the forwarding agent keeps
writing to the log even if the collection agents terminate. The agents can be used in real-time workflows.

The Kafka Forwarding agent is listed among the processing agents in Desktop, while the Kafka Collection
agent is listed among the collection agents.

13.3.1.1. Prerequisites
The reader of this document should be familiar with:

• The MediationZone® Platform

• Apache Kafka

13.3.2. Preparations
13.3.2.1. For a Quick Start of Embedded Kafka
To use the Kafka Collection and Forwarding agents, you are required to install a Kafka cluster. To
create a cluster embedded in MediationZone® , take the following steps:

1. Start all three of the predefined Service Contexts mapped to Zookeeper (zk1, zk2, and zk3), and all
three of the predefined Service Contexts mapped to Kafka (sc1, sc2, and sc3). Then start the services
defined in $MZ_HOME/etc/custom-services.conf, in this case Zookeeper and Kafka, as
Kafka requires Zookeeper to keep track of its cluster. To do this, use the following commands:

mzsh mzadmin/dr startup zk1 zk2 zk3 sc1 sc2 sc3

mzsh mzadmin/dr service start --scope custom

2. You must create a Kafka topic and one or more partitions to write to. You use the kafka command
to do this as described in the mzsh Command Line Tool document. Refer to the example below to
create a topic named mytopic with three partitions and a replication factor of two.

mzsh mzadmin/dr kafka --service-key kafka1 --create --topic mytopic --partitions 3 --replication-factor 2

3. You can now create your Kafka profile in the MediationZone® Desktop. Refer to Section 13.3.4,
“Kafka Profile”. Enter the Kafka Topic which you have created. Select the Use Embedded Kafka
check box, and enter 'kafka1' as the Kafka Service Key. When you proceed to create the Kafka
Forwarding and Collection agents, you can then refer to the profile you have created.

Figure 407. Embedded Kafka Profile Configuration

13.3.2.2. Configuring Embedded Kafka


In addition to creating topics and partitions, there are a number of other kafka command options
available. It is important to note that all of the command options take the Kafka service key as an input
parameter. To find out which service key to use when you start your Kafka services, refer to
$MZ_HOME/etc/custom-services.conf. For further information, see the Command Line
Tool document.

By default, the topic logs are stored in $MZ_HOME/storage/kafka. The customizable property
selecting this storage path is in $MZ_HOME/common/config/templates/kafka/<version>/custom/template.conf.
In addition, other Kafka broker properties, such as log retention rules, are stored in
$MZ_HOME/common/config/templates/kafka/<version>/custom/broker-defaults.properties.
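
As a hedged illustration, an override in broker-defaults.properties could look like the line below.
log.retention.hours is a standard Kafka broker property, but using it as an example override here is an
assumption rather than something stated in this section:

# Sketch only: keep topic log segments for 7 days (the value is an arbitrary example).
log.retention.hours=168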

13.3.3. Overview
The Kafka agents enable you to configure workflows in MediationZone® with improved scalability
and fault tolerance. As part of the data collection, data is written to Kafka to secure it from being lost
if a failure occurs, and each topic can be set up to be replicated across several servers.

Figure 408. Example of Kafka Workflows


13.3.3.1. Service Context


If you choose to use embedded Kafka, you use the Kafka and Zookeeper embedded in the Service
Contexts. Service Contexts are used as a convenient way of configuring and running Kafka and
Zookeeper clusters embedded “inside” MediationZone® to provide more flexibility to manage these services
using MediationZone®. For further information on Service Contexts, see the System Administrator's
Guide.

Note! If the platform is restarted, you must also restart the Service Contexts using the following
command:

mzsh mzadmin/dr service restart

13.3.3.2. Controlled Shutdown of Embedded Kafka


If you need to shut down embedded Kafka, you must first shut down the Service Contexts used for
Kafka and then shut down the ones used by Zookeeper, as shown below. If you shut down the Zookeeper
Service Contexts first, a controlled shutdown of Kafka is not possible.

mzsh shutdown sc1 sc2 sc3


mzsh shutdown zk1 zk2 zk3

13.3.3.3. Scaling
Using Kafka provides the capability to scale as required. One of the ways to scale a Kafka cluster
is at the point where you create your Kafka configuration. It is recommended that, when creating your Kafka
configuration, you consider how many partitions you may eventually require and add more than you
currently need, as this will make it easier to scale up at a later stage. If necessary, you can add partitions
later on using the kafka --alter option, but this is a more complicated process. For information on
how to use the kafka --alter option, see the Command Line Tool document.
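
As a sketch only: assuming the --alter option follows the same pattern as the --create command shown
earlier in this section, increasing the partition count for mytopic could look like the following. The
exact flags accepted by kafka --alter are not specified here, so verify them in the Command Line Tool
document before use:

mzsh mzadmin/dr kafka --service-key kafka1 --alter --topic mytopic --partitions 6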

Figure 409. Example of Scaling a Kafka cluster

You can also refer to http://kafka.apache.org for guidance on scaling using partitions.

13.3.4. Kafka Profile


The Kafka Profile enables you to configure which topic and which embedded service key to use.


The Kafka profile is loaded when you start a workflow that depends on it. Changes to the profile become
effective when you restart the workflow.

13.3.4.1. Kafka Profile Menu


The main menu changes depending on which Configuration type has been opened in the currently
active tab. There is a set of standard menu items that are visible for all Configurations and these are
described in Section 3.1.1, “Configuration Menus”.

13.3.4.2. Kafka Profile Buttons


The toolbar changes depending on which Configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all Configurations and these buttons are described
in Section 3.1.2, “Configuration Buttons”.

There are no additional buttons for the Kafka Profile.

13.3.4.3. Profile Configuration


To open the editor, click the New Configuration button in the upper left part of the MediationZone®
Desktop window, and then select Kafka Profile from the menu.

The Kafka configuration contains two tabs: Connectivity and Advanced.

13.3.4.3.1. Connectivity tab

The Connectivity tab is displayed by default when creating or opening a Kafka profile.

Figure 410. Kafka Profile Configuration - Connectivity tab

Kafka Topic          Enter the Kafka topic that you want to use for your configuration.
                     For information on how to create a Kafka topic, refer to the Command
                     Line Tool document.

Use Embedded Kafka   If you want to use the Kafka Service, which is the Kafka embedded in
                     MediationZone®, select this check box.

Kafka Service Key    If you have selected to use Embedded Kafka, you must complete the
                     Kafka Service Key. To determine which service key to use for Kafka
                     Services, refer to $MZ_HOME/etc/custom-services.conf.

Host                 If you are using external Kafka, enter the host name for Zookeeper.

Port                 If you are using external Kafka, enter the port for Zookeeper.

Kafka Brokers        Use the Add button to enter the addresses of the Kafka Brokers that
                     you want to connect to.


13.3.4.3.2. Advanced tab

In the Advanced tab you can configure properties for optimizing the performance of the Kafka Producer
and Consumer. The Advanced tab contains two tabs: Producer and Consumer.

13.3.4.3.2.1. Producer tab

In the Producer tab, you can configure the properties of the Kafka Forwarding agent.

Figure 411. Kafka Profile Configuration - Producer tab in the Advanced tab

The property producer.abortunknown=true sets the agent to abort if the broker replies with
Unknown topic or partition. For further information on the other properties, see the text
in the Advanced producer properties field or refer to https://kafka.apache.org.

13.3.4.3.2.2. Consumer tab

In the Consumer tab, you can configure the properties of the Kafka Collection agent.

Figure 412. Kafka Profile Configuration - Consumer tab in the Advanced tab

See the text in the Advanced consumer properties field for further information about the properties.

13.3.5. Kafka Forwarding Agent


The Kafka Forwarding agent (producer) is responsible for sending data to the Kafka log.


Figure 413. The Kafka Forwarding Agent in a Real-Time Workflow

13.3.5.1. Workflow Configuration


The Kafka Forwarding agent configuration window is displayed when you right-click on the agent
and select Configuration..., or when you double-click on the agent.

Figure 414. Kafka Forwarding Agent Configuration

Profile          The name of the profile as defined in the Kafka Profile Editor (select Kafka
                 Profile after clicking the New Configuration button in the Desktop).

Route On Error   Select this check box if you want a KafkaExceptionUDR, containing the error
                 message, to be routed from the producer agent when an error occurs.

                 Note! The emission of error UDRs is under flood protection, which means
                 only one unique error message UDR is issued per second to prevent flooding
                 of identical errors.

13.3.5.1.1. Transaction Behavior

This section includes information about the Kafka Forwarding agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8.

13.3.5.1.1.1. Emits

If you select the Route On Error check box in the Kafka Forwarding agent configuration window,
the agent emits data in the KafkaExceptionUDR. For further information, refer to Section 13.3.5.1,
“Workflow Configuration”.

13.3.5.1.1.2. Retrieves
The agent retrieves data from the KafkaUDR.

13.3.5.1.2. Introspection

This section includes information about the data type that the agent expects and delivers.

The agent consumes KafkaUDR types.


13.3.5.1.3. Meta Information Model

For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.

13.3.5.1.3.1. Publishes

MIM Parameter Description


Topic (string) The name of the topic to which the UDR is sent

13.3.5.1.3.2. Accesses

The agent accesses various resources from the workflow and all its agents to configure the mapping
to the Named MIMs (that is, what MIMs to refer to the collection workflow).

13.3.5.1.4. Agent Message Events

There are no agent message events for this agent.

For information about the agent message event type, see Section 5.5.14, “Agent Event”.

13.3.5.1.5. Debug Events

There are no debug events for this agent.

13.3.6. Kafka Collection Agent


The Kafka Collection agent (consumer) consumes messages from the topic and partitions set in the
Kafka Collection agent configuration.

Figure 415. The Kafka Collection Agent in a Real-Time Workflow

13.3.6.1. Workflow Configuration


The Kafka Collection agent configuration window is displayed when you right-click on the agent and
select Configuration... or when you double-click on the agent.

Figure 416. Kafka Collection Agent Configuration


Profile              The name of the profile as defined in the Kafka Profile Editor (select
                     Kafka Profile after clicking the New Configuration button in the Desktop).

All                  If enabled, messages will be collected from all of the partitions.

Range                If enabled, messages will be collected from the range that you specify.

Specific             If enabled, messages will be collected from the specified partition(s).
                     This is a comma separated list.

Start at beginning   You must determine from which offset you want to start collecting. If
                     enabled, messages are collected from the first offset. If you select this
                     option, there is a risk that messages will be processed multiple times
                     after a restart.

Start at end         You must determine from which offset you want to start collecting. If
                     enabled, messages are collected from the last offset from when the
                     workflow was started. If you select this option, there is a risk that
                     data can be lost after a restart.

13.3.6.1.1. Transaction Behavior

This section includes information about the Kafka Collection agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

13.3.6.1.1.1. Emits

The agent emits data in the KafkaUDR.

13.3.6.1.1.2. Retrieves

The agent retrieves a message from the Kafka log and places it in a KafkaUDR.

13.3.6.1.2. Introspection

This section includes information about the data type that the agent expects and delivers.

The agent produces KafkaUDR types.

13.3.6.1.3. Meta Information Model

For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.

13.3.6.1.3.1. Publishes

MIM Parameter Description


Topic (string) The name of the topic

13.3.6.1.3.2. Accesses

The agent does not itself access any MIM resources.

13.3.6.1.4. Agent Message Events

There are no agent message events for this agent.

For information about the agent message event type, see Section 5.5.14, “Agent Event”.

13.3.6.1.5. Debug Events

There are no debug events for this agent.


13.3.7. Kafka UDR Types


The Kafka UDR types are designed to exchange data between the workflows.

The Kafka UDR types can be viewed in the UDR Internal Format Browser. To open the browser,
first open an APL Editor, and, in the editing area, right-click and select UDR Assistance.

13.3.7.1. KafkaUDR
KafkaUDR is the UDR that is populated via APL and routed to the Kafka Forwarding agent, which
in turn writes the data to the specified partition, and the topic set in the Kafka Profile. The Kafka
Collection agent consumes the data from the Kafka log, from the specified partition(s), and the topic
set in the Kafka Profile, and places it in a KafkaUDR.

The following fields are included in the KafkaUDR:

Field               Description

data (bytearray)    Producer: This field holds data to be passed to the Kafka log by the
                    Kafka Forwarding agent (producer).

                    Consumer: This field is populated with the data read from the Kafka log.

offset (long)       This is a read-only field, which is only relevant for the Kafka Collection
                    agent. This field is populated by the Kafka Collection agent and contains
                    the offset in the Kafka log from where the message was consumed.

partition (short)   Producer: This field holds the partition to which the Kafka Forwarding
                    agent (producer) writes the message. If this field is not populated, the
                    partition is chosen randomly.

                    Consumer: This field holds the partition from which the message was
                    consumed by the Kafka Collection agent (consumer).

13.3.7.2. KafkaExceptionUDR
The KafkaExceptionUDR is used to return a message if an error occurs.

Field              Description

message (string)   This field provides a message with information on the error which has
                   occurred.
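
To tie the two UDR types together, the APL sketch below shows an Analysis agent that populates a
KafkaUDR, routes it towards a Kafka Forwarding agent, and logs any KafkaExceptionUDR routed back
when Route On Error is enabled. The route name "kafka_out", the example payload, and the omission of
any import of the Kafka UDR types are assumptions made for illustration only:

consume {
    if (instanceOf(input, KafkaExceptionUDR)) {
        //Error routed back from the Kafka Forwarding agent (Route On Error).
        debug("Kafka error: " + ((KafkaExceptionUDR)input).message);
    } else {
        //Wrap data in a KafkaUDR and send it to the Kafka log.
        KafkaUDR out = udrCreate(KafkaUDR);
        strToBA(out.data, "example payload"); //data to write to the Kafka log
        //Optionally set out.partition; if it is not populated,
        //the partition is chosen randomly.
        udrRoute(out, "kafka_out");           //route name is an assumption
    }
}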

13.4. SMPP Agents


13.4.1. Introduction
This section describes the SMPP Receiver and Transmitter agents. These agents are extension agents,
available for real-time workflows on the DigitalRoute® MediationZone® Platform.

13.4.1.1. Prerequisites
The user of this information must be familiar with:

• The MediationZone® Platform

• The SMPP protocol, version 3.4, see http://www.tele2.lt/files/SMPP_protocol_v3_4.pdf


13.4.2. Overview
13.4.2.1. SMPP Protocol
The Short Message Peer to Peer (SMPP) protocol is an open, industry standard protocol designed to
provide a flexible data communications interface for transfer of short message data between a Short
Message Service Centre (SMSC), or other type of Message center, and an SMS application system.

13.4.2.2. SMPP Session Description


An SMPP session between an SMSC and an ESME (External Short Message Entity) is initiated by
the ESME first establishing a network connection with the SMSC and then issuing an SMPP Bind
request to open an SMPP session. An ESME wishing to submit and receive messages is required to
establish two network connections and two SMPP sessions (Transmitter and Receiver). During an SMPP
session, an ESME may issue a series of requests to an SMSC and shall receive the appropriate responses
to each request, from the SMSC. Likewise, the SMSC may issue SMPP requests to the ESME, which
must respond accordingly.

13.4.3. Agents
The SMPP agents, the Receiver, which is located among the collection agents, and the Transmitter,
which is located among the processing agents, can receive and submit SMs (Short Messages) using the
store and forward message mode.

Note! Outbind is not supported, which means that the agents can only connect to the SMSC,
and the SMSC cannot connect to the agents.

13.4.3.1. Configuration
The SMPP agents' configuration windows are displayed when double-clicking on the agents in a
workflow, or when right-clicking on the agents and selecting Configuration....

Both agents' configuration dialogs contain three different tabs: SMSC, ESME, and Connection.

13.4.3.1.1. SMSC Tab

The SMSC tab contains configurations related to the SMSC to/from which the agent will send/receive
data.

Figure 417. SMSC tab

Remote Host   Enter the IP address or hostname of the SMSC with which the agent will
              communicate in this field.

Remote Port   Enter the port number on the SMSC with which the agent will communicate
              in this field.


13.4.3.2. ESME Tab


The ESME tab contains configurations related to the ESME application connected to the SMSC to/from
which the agent will send/receive data.

Figure 418. SMPP Agent Configuration, ESME tab.

System ID                  Enter the ID of the ESME system requesting to bind with the SMSC
                           in this field.

Password                   Enter the password used by the SMSC to authenticate the ESME in
                           this field.

System Type                Enter the type of ESME system in this field, e.g. VMS (Voice Mail
                           System), OTA (Over-The-Air Activation System), etc.

Type of Number             Enter the type of number (TON) used in the SME address in this
                           field, e.g. International, National, Subscriber Number, etc.

Numbering Plan Indicator   Enter the numbering plan indicator (NPI) used in the SME address
                           in this field, e.g. ISDN, Data, Internet, etc.

Address Range              Enter the range of SME addresses used by the ESME in this field.

                           Note! For IP addresses, it is only possible to specify a single
                           IP address. A range of IP addresses is not allowed.

13.4.3.3. Connection Tab

Figure 419. SMPP Agent Configuration, Connection tab.

Reconnect Attempts   Enter the number of reconnect attempts you want to allow in case a
                     connection goes down in this field.

                     Note! If you use the default setting, 0, in this field, the number of
                     reconnect attempts will be infinite, i.e. maxint.

Reconnect Interval   Enter the time interval you want to pass before making a reconnect
                     attempt in this field.

Transaction Timer    Enter the time interval allowed between an SMPP request and the
                     corresponding SMPP response in this field.

Enquire Link Timer   Enter the time interval allowed between operations after which an SMPP
                     entity should interrogate whether its peer still has an active session
                     in this field. This setting determines how often the enquire_link
                     operation should be sent.

                     This timer may be active on either communicating SMPP entity (i.e. SMSC
                     or ESME).

13.4.3.4. Operations
For the Transmitter agent, the following operation pairs are supported:

• bind_transmitter - bind_transmitter_resp

• unbind - unbind_resp

• submit_sm - submit_sm_resp

• enquire_link - enquire_link_resp

For the Receiver agent, the following operation pairs are supported:

• bind_receiver - bind_receiver_resp

• unbind - unbind_resp

• deliver_sm - deliver_sm_resp

• enquire_link - enquire_link_resp

Note! Only one request - response operation pair can be handled simultaneously, which means
that a response must be sent for a pending request before the next request can be handled.

Note! The bind and unbind operations only occur when starting/stopping the workflow.

13.4.4. Introspection
The introspection is the type of data an agent expects and delivers.

The Receiver agent produces DELIVER_SM UDRs and expects DELIVER_SM_RESP UDRs.

The Transmitter agent expects SUBMIT_SM UDRs and produces SUBMIT_SM_RESP UDRs.


13.4.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.

13.4.5.1. Publishes

MIM Value Description


Session State This MIM parameter contains information about the session state.

Session State is of the string type and is defined as a global MIM context
type.

13.4.5.2. Accesses
The agent does not itself access any MIM resources.

13.4.6. Agent Message Events


There are no agent message events for these agents.

13.4.7. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Connecting retries <number of retries>

This message is displayed when the agents are trying to connect.

• Connect attempt failed <reason for failure>.

This message is displayed when a connection attempt has failed.

• Wait <time interval> millis.

This message is displayed during the time interval set in the Reconnect Interval field when a recon-
nect attempt has failed.

• Setup successful.

This message is displayed when a connection has been successfully established.

• Exceeded number of retries.

This message is displayed when the number of attempts specified in the Reconnect attempts field
has been exceeded.

• Session state <session state>.

This message is displayed when the session state changes.

• DELIVER_SM / SUBMIT_SM

These messages are displayed when DELIVER_SMs and SUBMIT_SMs are received.


13.4.8. SMPP UDRs


The SMPP Receiver agent generates DELIVER_SM UDRs and receives DELIVER_SM_RESP UDRs,
while the SMPP Transmitter agent receives SUBMIT_SM UDRs and generates SUBMIT_SM_RESP
UDRs.

13.4.8.1. DELIVER_SM UDRs


The following fields are included in the DELIVER_SM UDRs:

Field                          Description

data_coding (int)              This field defines the encoding scheme of the short message user
                               data, (3) Latin-1 (ISO-8859-1), or (8) UCS2 (UTF-16BE).

                               Note! Even though there are several other encoding schemes
                               defined, the mentioned schemes are the only ones currently
                               supported by the MediationZone® SMPP agents.

dest_addr_npi (int)            This field indicates the NPI (Numbering Plan Indicator) of the
                               destination address.

dest_addr_ton (int)            This field indicates the TON (Type Of Number) of the destination
                               address.

destination_addr (string)      This field contains the destination address.

esm_class (int)                This field is used for indicating special message attributes
                               associated with the short message.

                               Note! Currently only the messaging mode Store and Forward is
                               supported.

priority_flag (int)            This field designates the priority level of the message.

protocol_id (int)              This field contains the Protocol Identifier. This is a network
                               specific field.

registered_delivery (int)      This field indicates whether an SMSC delivery receipt or an SME
                               acknowledgement is required or not.

replace_if_present_flag (int)  This field indicates whether a submitted message should replace
                               an existing message or not.

schedule_delivery_time         This field defines when the short message is to be scheduled by
(string)                       the SMSC for delivery. Set to NULL for immediate message delivery.

sequence_number (int)          This field is used for correlating responses with requests. The
                               allowed sequence_number range is from 0x00000001 to 0x7FFFFFFF.

service_type (string)          This field can be used to indicate the SMS Application service
                               associated with the message. Set to NULL for default SMSC settings.

short_message (bytearray)      This field contains the actual SM (Short Message) which can consist
                               of up to 254 octets of user data.

                               Note! Long messages are not supported.

sm_default_msg_id (int)        If the SM is to be sent from a list of pre-defined ('canned') SMs
                               stored on the SMSC, this field indicates the ID of the SM. If not
                               using an SMSC canned message, set to NULL.

source_addr (string)           This field contains the source address.

source_addr_npi (int)          This field indicates the NPI (Numbering Plan Indicator) of the
                               source address.

source_addr_ton (int)          This field indicates the TON (Type Of Number) of the source
                               address.

validity_period (string)       This field indicates the validity period of this message. Set to
                               NULL to request the SMSC default validity period.

OriginalData (bytearray)       This field contains the original data in bytearray format.

13.4.8.2. DELIVER_SM_RESP UDRs


The following fields are included in the DELIVER_SM_RESP UDRs:

Field                      Description

command_status (int)       This field of an SMPP message response indicates the success or
                           failure of an SMPP request.

OriginalData (bytearray)   This field contains the original data in bytearray format.

13.4.8.3. SUBMIT_SM UDRs


The following fields are included in the SUBMIT_SM UDRs:

Field                          Description

data_coding (int)              This field defines the encoding scheme of the short message user
                               data, (3) Latin-1 (ISO-8859-1), or (8) UCS2 (UTF-16BE).

                               Note! Even though there are several other encoding schemes
                               defined, the mentioned schemes are the only ones currently
                               supported by the MediationZone® SMPP agents.

dest_addr_npi (int)            This field indicates the NPI (Numbering Plan Indicator) of the
                               destination address.

dest_addr_ton (int)            This field indicates the TON (Type Of Number) of the destination
                               address.

destination_addr (string)      This field contains the destination address.

esm_class (int)                This field is used for indicating special message attributes
                               associated with the short message.

                               Note! Currently only the messaging mode Store and Forward is
                               supported.

priority_flag (int)            This field designates the priority level of the message.

protocol_id (int)              This field contains the Protocol Identifier. This is a network
                               specific field.

registered_delivery (int)      This field indicates whether an SMSC delivery receipt or an SME
                               acknowledgement is required or not.

replace_if_present_flag (int)  This field indicates whether a submitted message should replace
                               an existing message or not.

schedule_delivery_time         This field defines when the short message is to be scheduled by
(string)                       the SMSC for delivery. Set to NULL for immediate message delivery.

service_type (string)          This field can be used to indicate the SMS Application service
                               associated with the message. Set to NULL for default SMSC settings.

short_message (bytearray)      This field contains the actual SM (Short Message) which can consist
                               of up to 254 octets of user data.

                               Note! Long messages are not supported.

sm_default_msg_id (int)        If the SM is to be sent from a list of pre-defined ('canned') SMs
                               stored on the SMSC, this field indicates the ID of the SM. If not
                               using an SMSC canned message, set to NULL.

source_addr (string)           This field contains the source address.

source_addr_npi (int)          This field indicates the NPI (Numbering Plan Indicator) of the
                               source address.

source_addr_ton (int)          This field indicates the TON (Type Of Number) of the source
                               address.

validity_period (string)       This field indicates the validity period of this message. Set to
                               NULL to request the SMSC default validity period.

OriginalData (bytearray)       This field contains the original data in bytearray format.

13.4.8.4. SUBMIT_SM_RESP UDRs


The following fields are included SUBMIT_SM_RESP UDRs:

Field                      Description

command_status (int)       This field of an SMPP message response indicates the success or
                           failure of an SMPP request.

message_id (string)        This field contains the unique message identifier reference assigned
                           by the SMSC to each submitted short message. It is an opaque value
                           and is set according to SMSC implementation.

submitSmUDR                This field contains the SUBMIT_SM UDR for which this
(SUBMIT_SM (SMPP))         SUBMIT_SM_RESP has been received.

OriginalData (bytearray)   This field contains the original data in bytearray format.

13.4.9. Examples
This section contains one example each for the SMPP Receiver and Transmitter agents.

13.4.9.1. Receiver agent


In this workflow example for the SMPP Receiver agent:

Figure 420. Receiver workflow example

the SMPP receiver agent sends DELIVER_SM UDRs to the Analysis agent, which contains the
following code:

consume {
DELIVER_SM_RESP deliver_sm_resp = udrCreate(DELIVER_SM_RESP);
if ((input.sequence_number % 2) == 0) {
deliver_sm_resp.command_status = 2;
} else {
deliver_sm_resp.command_status = 0;
}
udrRoute(deliver_sm_resp);
}

With this code, the Analysis agent will:

• Create a UDR of DELIVER_SM_RESP type called deliver_sm_resp.

• Check whether the sequence number in the incoming DELIVER_SM UDR is even or odd.

• If the sequence number is even, the command_status field in the deliver_sm_resp UDR will be
set to 2, and if it is odd, the field will be set to 0.

• The deliver_sm_resp UDR will then be routed back to the SMPP receiver agent.

13.4.9.2. Transmitter agent


In this workflow example for the SMPP Transmitter agent:

Figure 421. Transmitter workflow example

the TCP/IP agent sends TCP_TI UDRs into the workflow using a decoder that defines this UDR type.

The Analysis agent contains the following code:


import ultra.SMPP;

consume {

if (instanceOf(input, TCP_TI)) {
TCP_TI tcp_udr = udrCreate(TCP_TI);
tcp_udr = (TCP_TI) input;
strToBA(tcp_udr.response, "message=" + tcp_udr.message + "\r\n");
SUBMIT_SM submit_sm = udrCreate(SUBMIT_SM);
bytearray sm;
strToBA(sm, "MESSAGE", "UTF-16BE");
submit_sm.short_message = sm;
submit_sm.data_coding = 8;
submit_sm.source_addr = "555123456";
submit_sm.destination_addr = "555987654";
udrRoute(tcp_udr, "OUT_TCP");
udrRoute(submit_sm, "OUT_SMPP");
}
}

which will:

• Import the SMPP Ultra formats

• If the received UDR is of the TCP_TI type, the UDR will be named tcp_udr, and the response
field in the UDR will be populated with the text "message=<contents of the message field>" in
bytearray format.

• Create a UDR of type SUBMIT_SM called submit_sm.

• Create a bytearray object called sm, and populate this bytearray with the text "MESSAGE" in bytearray
format with UTF-16BE encoding.

• Populate the short_message field in the submit_sm UDR with the new bytearray.

• Set the data coding to 8, which equals the UTF-16BE encoding according to the specification.

• Set the source address to 555123456 and the destination address to 555987654.

• Route the submit_sm UDR to the SMPP transmitter agent, and the tcp_udr UDR to the TCP/IP
agent.

The SMPP transmitter agent will then send SUBMIT_SM_RESP UDRs back to the Analysis agent
when receiving the corresponding SUBMIT_SM_RESP UDRs from the SMSC. The
SUBMIT_SM_RESP UDRs contain the original SUBMIT_SM for which the SMSC has responded.

13.5. Web Service Agents


13.5.1. Introduction
This section describes the Web Service agents. These are real-time extension agents on the
DigitalRoute® MediationZone® Platform.

13.5.1.1. Prerequisites
The reader of this information should be familiar with:


• The MediationZone® Platform

• APL

• Web Service

• WSDL

13.5.2. Overview
Web Service is a software system that supports interaction between computers over a network.

The MediationZone® Web Service agents communicate through SOAP in XML syntax, and use WSDL
files.

MediationZone® supports the following:

• Web Service Interoperability Organization Basic Profile 1.1

• WSDL 1.1

• XML 1.0

• SOAP 1.1

• Partial support of Web Service Security 1.1

• HTTP 1.1

• HTTP Basic Access Authentication

• HTTPS

You enable Web Service transactions in MediationZone® by defining a WS profile, or profiles, and
including the Web Service agents and their configurations in a workflow.

13.5.2.1. WS Profile
In WS profile you specify a WSDL file that mainly includes the following parts of a Web Service
definition:

• XML Schema: Defines information about the service either directly or via an XSD-file

• WSDL: Communication relevant information

• Binding elements: MediationZone® supports only SOAP bindings

The WS profile can include more than one WSDL file reference.

The WS profile is loaded when you start a workflow that depends on it. Changes to the profile become
effective when you restart the workflow.

When you save a WS profile that has a WSDL file assigned, the data types that are specified in the WSDL
Schema section are mapped to UDR types for the MediationZone® workflow. For further information, see
Section 13.5.4, “UDR Type Structure”.

13.5.2.2. The Web Service Agents


There are two Web Service agents that you can include in real-time workflows in the Workflow
Editor:


• The Web Service Collection agent

• The Web Service Processing agent

13.5.2.2.1. Web Service Collection Agent

The collection agent works in the same way as a Service Provider, or server, in the sense that it receives
requests from a client, or clients, and transfers the requests to a MediationZone® workflow.

In a synchronous operation, when the collection agent receives a reply back from the workflow, it
delivers the response to the requesting client.

In an asynchronous operation the collection agent does not receive any reply, and therefore does not
respond to the client.

Figure 422. The Web Service Provider - Synchronous Operation

Figure 423. The Web Service Provider - Asynchronous Operation

13.5.2.2.2. Web Service Processing Agent

The processing agent works in the same way as a Service Requester, or a client, that sends a request
to a server, where a certain service is available.

In a synchronous operation, when the processing agent receives a reply, it delivers the reply to its
configured output.

In an asynchronous operation, the requester does not receive any reply and does not deliver one, either.

Figure 424. The Web Service Requester


Figure 425. The Web Service Requester - Asynchronous Operation

13.5.3. WS Profile Configuration


The WS profile enables you to define a web service. One profile defines one web service.

To open the editor, click the New Configuration button in the upper left part of the MediationZone®
Desktop window, and then select WS Profile from the menu.

Note! Any restrictions on the WSDL format will be ignored by the outgoing web service.

13.5.3.1. WS Profile Menu


The main menu changes depending on which configuration type has been opened in the currently
active tab. There is a set of standard menu items that are visible for all configurations and these are
described in Section 3.1.1, “Configuration Menus”.

There is one menu that is specific for WS profile configurations, and it is described in the coming
section:

13.5.3.1.1. The Export Menu

Item                              Description

Export WSDL...                    Exports the original WSDL file to a directory on the local
                                  workstation. Please refer to Section 13.5.3.3, “Configuration
                                  Tab” for further information.

Export Transport Level Security   Exports the original Keystore file to a directory on the local
Keystore...                       workstation. Please refer to Section 13.5.3.5, “Security Tab”
                                  for further information.

Export Web Service Security       Exports the original Keystore file to a directory on the local
Settings Keystore...              workstation. Please refer to Section 13.5.3.5, “Security Tab”
                                  for further information.

13.5.3.2. WS Profile Buttons


The toolbar changes depending on which configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all configurations and these buttons are described
in Section 3.1.2, “Configuration Buttons”.

There are no additional buttons for WS profile.


13.5.3.3. Configuration Tab

Figure 426. The Web Service Profile - Configuration Tab

Note! The Web Service Security settings are not populated automatically in accordance with
the policy you hold. You must go to the Security tab in the WS profile and complete the relevant
settings. The security settings completed in the Security tab determine your Web Service Security
settings. If you do not enter your settings, no Web Service Security is enabled. For further information,
refer to Section 13.5.3.5, “Security Tab”.

Transport Protocol

Select the protocol over which the web service will communicate: HTTP or HTTPS.

Create Configuration From

The WS Profile configuration can handle either a single WSDL file or several WSDL files, which can
all be concatenated using the concatenate WSDL file functionality.

Single WSDL File    The Import WSDL button is used to browse for, and import, a selected
                    WSDL file. If the WSDL file is linked to adherent xsd files, all included
                    files must be stored in the same directory as the imported WSDL file. If
                    present, they will be imported at the same time as the WSDL file. If not,
                    a validation error will occur.

                    Basic validation of the WSDL file is performed before the file is imported.
                    After the file is imported, the content of the WSDL file and adherent files
                    can be viewed in the View WSDL Content tab.

                    Full validation of the WSDL file is performed when the profile is saved.

                    If you configure the Web Service agents with any value that contradicts
                    the WSDL file specifications, your configuration will override the WSDL
                    file.

Concatenated        Used when parts of several WSDL files are required. The functionality is
WSDL File           only useful if operations defined in bindings in several WSDL files shall
                    be published at the same endpoint.

                    To add files to the Files list, click on the Add button and browse for the
                    correct WSDL files in the Add WSDL File dialog box.

                    The list of WSDL files will be concatenated when the profile is saved. The
                    concatenation functionality concatenates and arranges all operations defined
                    in the WSDL binding element of several WSDL files and everything else that
                    is needed for the result to be a valid WSDL file.

The original WSDL file can be exported to a directory on the local workstation. Click on the Export
menu and select the Export WSDL option.

XML Binding

Enable JAXB Simple Binding Mode
    When selected, XSD Schemas of loaded WSDL files are compiled into Java code using the
    experimental "Simple and better binding mode". This is necessary for some complex types,
    for instance when duplicate element names are used within the same complex type.

Disable JAXWS Wrapper Style Mode
    When this check box is selected, the cycleUDR will contain request and response parameters
    that wrap all the arguments in request and response UDRs.

Enable Processing of Implicit SOAP Headers
    When selected, any SOAP headers defined in the binding section of WSDL files are compiled
    into Java code. This is necessary in order to manipulate the SOAP headers in outgoing Web
    Service requests.

Service Port Definition

This drop-down list consists of the service port definitions that are included in the WSDL file. By se-
lecting a port you set the binding address.

• If the WSDL file consists of several concatenated files, only the first WSDL file service port
is applicable.

• The Service Port Definition entries appear in the following format:

XXX:YYY(ZZZ).

• XXX: Currently only SOAP ports are supported.

• YYY: The name of the port.

• ZZZ: The binding that the port is connected to.


13.5.3.4. View WSDL Content Tab


The View WSDL Content tab is used to display the content of imported files, both WSDL and adherent
files.

Figure 427. The Web Service Profile - View WSDL Content Tab

WSDL Definition
    When a WSDL file is successfully imported to the Web Service profile, the WSDL filename
    will be stated here. The View button will open a read-only view of the file contents.

Included Files
    If the imported WSDL Definition contains references to other WSDL or xsd files included
    in the configuration, they will be listed here.

View Selected
    If one of the files in the Included Files list is selected, this button will open a
    read-only view of its imported content.

13.5.3.5. Security Tab


Web Services can be secured by using various combinations of Security configurations:

• Transport Level Security with the option of enabling a Timestamp

• Transport Level Security with Web Service Security standard with the option of enabling a
Timestamp

• Transport Level Security with Username Token and/or Addressing with the option of enabling
a Timestamp

• Transport Level Security with Web Service Security standard combined with Username Token
and/or Addressing with the option of enabling a Timestamp

• Web Service Security standard with the option of enabling a Timestamp


• Web Service Security standard with Username Token and/or Addressing with the option of enabling
a Timestamp

• Username Token and/or Addressing with the option of enabling a Timestamp

To apply Transport Level Security, select the transfer protocol HTTPS in the Configuration tab.
The Web Service agents provide Web Service Security by supporting XML-signature and encryption.
A TimeStamp records the time of messages. Username Token uses authentication tokens and Ad-
dressing provides unique message IDs.

Figure 428. The Web Service Profile - Security Tab

Transport Level Security
    Applicable only when HTTPS is selected in the Configuration tab.

Keystore
    Click on the Import Keystore button and select the keystore JKS-file that contains the
    private keys that you want to apply.

    To export the original Keystore file, select Export from the main menu of the Web Service
    profile configuration, and then select Export Web Service Security Settings Keystore.

Keystore Password
    Enter the password that protects the keystore file.

Web Service Security Settings
    Applicable for any selected protocol in the Configuration tab.

Enable Web Service Security For This Profile
    When selected, Web Service security is used, and the other text boxes in the dialog are
    highlighted and must be completed. The Web Service Security Settings and Username Token
    and Addressing check boxes are also enabled for you to configure your Security settings.
    If you do not select any other check boxes in this tab, no Web Service Security is enabled.

Keystore Alias
    The alias of the keystore entry that should be used.

Key Password
    Enter the password that is used to protect the private key that is associated with the
    Keystore alias.

Enable Encryption
    When selected, messages will be encrypted. If you select this option, you must complete
    the text boxes in the Web Service Security Settings dialog.

Enable Signing
    When selected, messages will be signed. If you select this option, you must complete the
    text boxes in the Web Service Security Settings dialog.

Enable TimeStamp
    When selected, messages will be recorded with the date and time.

Enable Username Token and Addressing
    When selected, Username Token authentication is used, and the text boxes WS Token Username
    and WS Token Password are enabled and must be completed.

Enable Addressing
    When selected, messages will be sent with a unique ID.

13.5.3.6. Saving a WS Profile


When saving a WS profile, the profile will be saved in the folder selected in the Save As dialog with
the name entered.

When the WS profile is saved, a number of UDRs are generated. They will be saved in a folder structure
based on the WS profile name. Therefore it is important to make sure the WS profile is saved in the
appropriate place with a suitable name. The UDR folder structure will not automatically be adjusted
and saved along with the WS profile if a user decides to rename or move the profile to a new folder.

If at all possible, avoid renaming. In the event you must rename the profile, it must be saved
again in its new location, regenerating the UDRs there. For further information about viewing
the UDR type structure, see Section 13.5.4.1, “The Folder Structure of the UDR Types”.

13.5.4. UDR Type Structure


When you save a WS profile, a number of UDR types are created and mapped according to specifications
in the WSDL file. To see a structured list of these UDR types, open the APL Code Editor, right-click
on the text pad, and select UDR Assistance. Scroll down the UDR types list to the WS folder.

The UDR types that are created once you save a WS profile are:

• Abstract[port type name]WSCycle: is created for every WS profile

• WSCycleUDR(s): is generated for every WSDL operation

The UDR type that might be created once you save a WS profile is:

• UDR type: describes the complex types that are defined in the XML Schema

The UDR types that are stored in WS folder by default are:

• AbstractWSCycle: The type of the input UDR

• ws.QName: This UDR type matches a qname data type in an XML Schema. There can only be one
ws.QName UDR type under ws.

• XML Element: A wrapper type that is defined as "nillable" in the XML Schema.


13.5.4.1. The Folder Structure of the UDR Types


UDR types can be viewed in the UDR Internal Format Browser. It is accessed either by right-
clicking on the text pad in the Analysis or Aggregation agents and selecting the UDR Assistance...
option, or by clicking on the New Configuration button in MediationZone® Desktop and selecting
APL Code from the menu, which opens the APL Code Editor, where you right-click on the text pad
and select the UDR Assistance... option.

Figure 429. The UDR Assistance Menu in the APL Code Editor

Figure 430. The UDR Folder Structure

The AbstractWSCycle UDRs and WSCycleUDR are created and saved in a folder, created and
named according to the following structure:
WS.[Directory Name].[WSProfile Name].cycles.[WSCycleUDR Names]


The UDR types related to the XML-schema are created and saved in a folder, created and named ac-
cording to the following structure:
WS.[Directory Name].[WSProfile Name].[alias].[complexTypeUDRs]

The alias is replaced with the name of the target namespace (tns). The name is set in the XML
Schema part of the WSDL file. If the target namespace has no alias, the UDR will be saved in
the [WSProfile Name] folder.

When concatenating WSDL files, the aliases in the files can be identical, and if this occurs the
structure will be changed to avoid a name conflict. If the names include invalid characters, they
will be replaced with underscore characters.

The structure can be as follows:


WS.[Directory Name].[WSProfile Name].[WSDL Filename].[alias]

If the WSDL filename includes invalid name characters these will be replaced with underscore
characters in the WSDL Filename.
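
To use the generated UDR types in APL code, the folder structure translates into import statements.
The sketch below is only an illustration; the package names mydir and myprofile, and the alias x1,
are placeholders that must be replaced with the names shown for your own profile in the UDR Internal
Format Browser (compare the worked example in Section 13.5.7):

    // Placeholder names - replace mydir, myprofile and x1 with the folder,
    // profile and target namespace alias of your own WS profile.
    import ultra.ws.mydir.myprofile.cycles;   // the generated WSCycleUDR types
    import ultra.ws.mydir.myprofile.x1;       // the UDR types for the XML Schema complex types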

13.5.4.2. The AbstractWSCycle UDR Type


All the WSCycleUDRs that belong to a specific WS profile inherit the same AbstractWSCycle.
By checking the AbstractWSCycle you can tell which WS profile is applied on certain UDRs.

The AbstractWSCycle UDR is used as a marker to connect all WSCycleUDRs belonging to the same
WS profile. It consists of the following parts:

• context

• errorMessage

• operation


Figure 431. AbstractWSCycle UDR fields part of WSCycleUDR

context (any)
    This field is used to store information about the context in which the operation has been
    invoked, when it is needed.

errorMessage (string, optional)
    The error message field is set if an error occurred during sending or receiving of a
    message, or if a SOAP exception occurred at the communication endpoint.

operation (string, constant)
    This is a constant string with the name of the operation as value. If the operation
    corresponding to this WSCycleUDR is operationName, the field will be operationName.
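
As an illustration, the sketch below shows how these fields can be inspected in an Analysis agent.
It assumes the Example profile from Section 13.5.7, so the import path and the WSCycle_charge type
are placeholders for the types generated by your own profile:

    import ultra.ws.example.charge.cycles;

    consume {
        if (instanceOf(input, WSCycle_charge)) {
            WSCycle_charge udr = (WSCycle_charge) input;
            // The constant operation field identifies the WSDL operation.
            debug("Operation: " + udr.operation);
            // errorMessage is only set if something went wrong during
            // sending or receiving of the message.
            if (udr.errorMessage != null) {
                debug("Request failed: " + udr.errorMessage);
            }
            udrRoute(udr);
        }
    }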

13.5.4.3. The WSCycle UDR Type


The WSCycle UDR represents an operation that is specified in the WSDL file. The naming structure
is WSCycle_[operationName].

The agent using a WS profile will have as many input and output types as there are operations in the
web service defined by the WS profile.

The number of fields and field types in a WSCycle UDR will be set based on how the Web Service
operation is defined in the WSDL file. Each WSCycle UDR contains the number of fields necessary
to hold the information needed when sending or receiving a request or response.

For example, an operation with nothing in its input and output messages and no declared fault types
will have no fields other than the ones from the AbstractWSCycle UDR.


Figure 432. WSCycleUDR

The structure of the WSCycleUDR_operationName depends on the definition of the operation in the
WSDL file. The following table presents more information about the possible fields.

param (type corresponding to the input type)
    This field exists only if the operation has at least one request message type in its input
    message declaration.

    The param field type corresponds to the type defined in the XML Schema, simple or complex
    type. If it is a complex type, a UDR containing fields corresponding to the types in the
    complex type will be created.

    If the name of a complex input type in the XML Schema starts with a lower case character,
    the input type might be represented by several fields. The name of each of the fields is
    the name of the input type, prefixed by param_.

response (type corresponding to the output message of the operation)
    This field exists only if the operation has a response message that is not empty.

    The field shall be set before the WSCycleUDR is routed back to a Web Service Provider agent
    to send back a response message to the requester.

    This field is set to a value corresponding to the response message when the WSCycle UDR
    comes from a Web Service Request agent.

fault_FaultTypeName (FaultTypeName is optional)
    It is possible to declare fault types for an operation. Messages of the fault types can be
    sent back from a Web Service instead of an ordinary response message. To send back a fault
    message instead of an ordinary response message, you have to set one of the fault fields.

    The FaultTypeName part of the field name will be the name of the declared fault type.

13.5.5. Web Service Provider Agent


The Web Service Provider agent resembles a server and allows Web Service requests to be collected
and inserted into a workflow.

When a request arrives at the Web Service Provider, the agent first decodes and validates it into a
pre-generated UDR type, the WSCycle UDR. The WSCycleUDR is then routed through the workflow with the
param field set to the incoming message. If the client expects a response message, the workflow is
responsible for populating the response field with an appropriate answer message (through the
udrCreate APL function). The WSCycleUDR must then be routed back to the Web Service Provider agent
to transmit the answer.


Configurations made in the agent always override settings originating from the WSDL file.

13.5.5.1. Configuration
The Web Service Provider agent configuration dialog is displayed when right clicking on the Web
Service Provider agent and selecting the Configuration option, or when double-clicking on the agent
in the Workflow Editor.

Figure 433. The Web Service Provider Agent Configuration View

Web Service Profile
    Click on the Browse button and select the appropriate user defined WS profile.

Workflow Response Timeout (ms)
    Determines the number of milliseconds the Web Service Provider agent will wait for a
    response from the workflow before timeout.

    If a timeout occurs in the provider agent, an error message will be logged in the System
    Log and no response message will be sent to the requesting client.

13.5.5.1.1. HTTP

This tab is highlighted when the selected WS Profile is configured with either HTTP or HTTPS as
the transfer protocol.

Extract Profile Settings
    Click on this button to automatically fill in the settings from the Service Port Definition
    in the profile.

HTTP Address
    Enter the complete URL address, including port, for the web service used to connect to the
    information requesting client.

Enable Basic Access Authentication
    Select this check box to enable Basic Access Authentication.

Username
    Enter the username that should be provided by the requesting client when using Basic Access
    Authentication.

Password
    Enter the password that should be provided by the requesting client when using Basic Access
    Authentication.


When Basic Access Authentication is enabled, the client program has to provide credentials, that
is a username and password, in order to perform a request. Otherwise, an HTTP 401 status code
will be returned.

13.5.5.2. Introspection
The agent emits and retrieves UDRs of the WSCycle_[operation name] UDR type. For further
information, see Section 13.5.4.3, “The WSCycle UDR Type”.

13.5.5.3. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

13.5.5.4. Agent Message Events


There are no message events for this agent.

13.5.6. Web Service Request Agent


The Web Service Request agent can be compared to a client and allows Web Service requests to be
sent. The response, if any, can then be routed into the workflow.

13.5.6.1. Configuration
The Web Service Request agent configuration dialog is displayed when right-clicking on the Web
Service Request agent and selecting the Configuration option, or when double-clicking on the agent
in Workflow Editor.

Figure 434. The Web Service Request Agent Configuration View

Web Service Profile
    Click on the Browse button and select a predefined Web Service profile.

Response Timeout (ms)
    The timeout value specifies the maximum allowed response time in milliseconds back to the
    Request agent after a request has been sent to the service. If the response time is
    exceeded, the Request agent times out, an error message will be logged in the System Log,
    and the WSCycleUDR will be routed out from the agent with the errorMessage field set.

Support CDATA encapsulated content
    If activated, content encapsulated with a CDATA tag will always be sent without escape
    characters in the SOAP message.

    The content must be completely encapsulated by the CDATA tag for correct output. No leading
    or trailing characters are allowed, for example:

    data = "<![CDATA[This will be output as CDATA]]>"
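
The following sketch shows how a request field could be populated with CDATA encapsulated content
before the UDR is routed to the Web Service Request agent. It reuses the Example profile from
Section 13.5.7 and only sets the field relevant to this option; the remaining request fields are
omitted, and the field name used is purely illustrative:

    import ultra.ws.example.charge.cycles;
    import ultra.ws.example.charge.x1;

    consume {
        WSCycle_charge udr = udrCreate(WSCycle_charge);
        udr.param = udrCreate(Charge);
        // The whole value is encapsulated by the CDATA tag, with no leading
        // or trailing characters, so it is sent without escape characters.
        udr.param.serviceType = "<![CDATA[SMS & MMS]]>";
        udrRoute(udr);
    }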

13.5.6.1.1. HTTP

Extract Profile Settings
    Click on this button to automatically fill in the settings from the Service Port Definition
    in the profile.

HTTP Address
    Enter the complete URL address for the web service used to connect to the information
    provider.

Enable Basic Access Authentication
    Select this check box to enable use of Basic Access Authentication.

Username
    Enter the username that should be used when making a request with Basic Access
    Authentication.

Password
    Enter the password that should be used when making a request with Basic Access
    Authentication.

13.5.6.2. Introspection
The agent emits and retrieves UDRs of the WSCycle_[operation name] UDR type. For further
information, see Section 13.5.4.3, “The WSCycle UDR Type”.

13.5.6.3. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

13.5.6.4. Agent Message Events

about to send
    Reported right before a request is sent to a Web Service provider.

done
    Reported after a response has been received from a Web Service provider. It is also
    reported when a timeout occurs.

13.5.7. Example
The following example demonstrates a configuration of a simplified use of a Premium-SMS payment
procedure, performed with the MediationZone® Web Service agents.

The Web Service configuration consists of three main steps:

• Defining a WS Profile

• Configuring a Web Service Provider workflow


• Configuring a Web Service Requester workflow

13.5.7.1. Defining the WS Profile


Use the following instructions to create a WS Profile:

1. Click the New Configuration button in the upper left part of the MediationZone® Desktop window,
and then select WS Profile from the menu.

The WS profile configuration opens.

2. In the Configuration tab, click on the Import WSDL button, and select the WSDL file you want
to import.

You can now see the file contents on the View WSDL Contents tab.

3. At the bottom of the Configuration tab, select the SOAP: Charger (Charger_SOAPBinding) in
the Service Port Definition drop-down list.

4. In the WS profile configuration, click on the File menu and select the Save As... option.

5. In the Save as dialog box select a folder and type Example in the Name text box.

6. Click OK.

7. Check the WS directory in the APL Code Editor and see the data structure that your WS profile
just generated. The APL Code Editor is opened by clicking on the New Configuration button in
MediationZone® Desktop, and then selecting APL Code from the menu.

8. Right-click on the text pad and select the UDR Assistance ... option.

The UDR Internal Format Browser opens.

9. Scroll down to the WS directory and expand it to see where data is stored once you save your WS
profile.

13.5.7.1.1. The WSDL File

<?xml version="1.0" encoding="UTF-8"?>


<wsdl:definitions
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
xmlns="http://schemas.xmlsoap.org/wsdl/"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:tns="http://example.com/webservice/charger"
xmlns:x1="http://example.com/webservice/charger/types"
xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="Charger"
targetNamespace="http://example.com/webservice/charger">

<wsdl:types>

<schema xmlns="http://www.w3.org/2001/XMLSchema"
xmlns:tns="http://example.com/webservice/charger/types"
elementFormDefault="qualified"
targetNamespace="http://example.com/webservice/charger/types">

<complexType name="ChargingEvent">
<sequence>
<element name="id" type="string"/>
<element name="amount" type="float"/>
</sequence>

</complexType>

<element name="Charge">
<complexType>
<sequence>
<element name="serviceType" type="string"/>
<element name="chargingEvent" type="tns:ChargingEvent"/>
</sequence>
</complexType>
</element>

<element name="ChargeResult">
<complexType>
<sequence>
<element name="success" type="boolean"/>
<element name="message" type="string"/>
</sequence>
</complexType>
</element>

<element name="FaultDetail">
<complexType>
<sequence>
<element name="reason" type="int"/>
<element name="message" type="string"/>
</sequence>
</complexType>
</element>
</schema>
</wsdl:types>

<wsdl:message name="ChargingRequest">
<wsdl:part name="in" element="x1:Charge" />
</wsdl:message>

<wsdl:message name="ChargingRespone">
<wsdl:part name="in" element="x1:ChargeResult" />
</wsdl:message>

<wsdl:message name="chargeFault">
<wsdl:part name="faultDetail" element="x1:FaultDetail"/>
</wsdl:message>

<wsdl:portType name="Charger">
<wsdl:operation name="charge">
<wsdl:input name="chargingRequest" message="tns:ChargingRequest"/>
<wsdl:output name="chargingResponse" message="tns:ChargingRespone"/>
<wsdl:fault name="chargingFault" message="tns:chargeFault"/>
</wsdl:operation>
</wsdl:portType>

<wsdl:binding name="Charger_SOAPBinding" type="tns:Charger">


<soap:binding style="document"
transport="http://schemas.xmlsoap.org/soap/http"/>

<wsdl:operation name="charge">
<soap:operation soapAction="" style="document"/>
<wsdl:input name="chargingRequest">

<soap:body use="literal"/>
</wsdl:input>

<wsdl:output name="chargingResponse">
<soap:body use="literal"/>
</wsdl:output>
<wsdl:fault name="chargingFault">
<soap:fault name="chargingFault" use="literal"/>
</wsdl:fault>
</wsdl:operation>
</wsdl:binding>

<wsdl:service name="Charger_Service">
<wsdl:port binding="tns:Charger_SOAPBinding" name="Charger">
<soap:address location="http://localhost:8080/charge"/>
</wsdl:port>
</wsdl:service>
</wsdl:definitions>

13.5.7.2. Creating a Web Service Provider Workflow


A Web Service Provider workflow consists of a Web Service Provider agent (Collection), and an
Analysis agent. The Web Service Provider agent routes UDRs, that are of the type WSCycle_charge,
to the Analysis agent. The Analysis agent then routes the UDRs, carrying a response, back to the Web
Service Provider agent.

Figure 435. The Web Service Provider Workflow

13.5.7.2.1. Configuring the Agents

In the Workflow Editor, open the configuration views of both agents.

1. In the Web Service Provider configuration view, click on the Browse button.

The Configuration Selection dialog opens.

2. Select example and click OK.

In the Web Service Profile text box, example.charge will appear.

3. Set the Workflow Response Timeout to 3000 ms.

4. In the HTTP tab, assign the HTTP Address with a value by clicking on the Extract Profile Settings
button.

5. Click OK.

6. In the Analysis configuration view, enter the following APL code:

import ultra.ws.example.charge.cycles;
import ultra.ws.example.charge.x1;

consume {
    // Verify that the UDR type matches the
    // UDR definition generated by the WS profile.
    if (instanceOf(input, WSCycle_charge)) {
        // Cast the input type to the WSCycle_charge UDR.
        WSCycle_charge udr = (WSCycle_charge) input;

        string errorMessage = null;
        int faultReason = -1;

        // debug
        debug("The ServiceType is " + udr.param.serviceType);
        debug("The id to charge is "
            + udr.param.chargingEvent.id);
        debug("The amount to charge is "
            + udr.param.chargingEvent.amount);

        // Perform some business logic ...

        // In case an error occurred when performing
        // the business logic, send back a fault.
        if (errorMessage != null) {
            FaultDetail fault = udrCreate(FaultDetail);
            fault.message = errorMessage;
            fault.reason = faultReason;
            udr.fault_FaultDetail = fault;
            udr.errorMessage = errorMessage;
        }
        else {
            ChargeResult result = udrCreate(ChargeResult);
            result.success = true;
            result.message = "OK";
            udr.response = result;
        }

        // The UDR is routed back to the
        // Web Service Provider agent.
        udrRoute(udr);
    }
}

The APL code first verifies that the UDRs that enter the workflow are of the WSCycle_charge type.
If so, the UDRs are cast from the AbstractWSCycle type to the WSCycle_charge type.

7. Click on the Compilation Test button.

If the compilation fails, check that the name of the folder in which you saved the WS profile is the
same as in the path that the APL code specifies.

8. Click on the Set To Input button.

9. Click OK.

13.5.7.3. Creating Web Service Request Workflow


This Web Service Requester workflow consists of the following agents:


• TCP_IP: Collects the input data.

• Analysis_1: Creates the UDR type WSCycle_charge and routes it to the Web Service Requester

• Web Service Requester (Processing): Sends a request to a Web Service server. Once the web
server replies, Web Service Requester forwards the reply to Analysis_2.

• Analysis_2: Receives the Web Service reply. In this example we use this agent for output demon-
stration.

Figure 436. The Web Service Requester Workflow

13.5.7.3.1. Configuring the Agents

In the Workflow Editor, open the configuration views of the agents.

1. In the TCP_IP agent configuration view, set Port to 3210.

This port number should not be the same one that the Web Service Provider is configured
with.

2. In the Analysis_1 agent configuration view, enter the following APL code into the text pane:

import ultra.ws.example.charge.cycles;
import ultra.ws.example.charge.x1;

consume {
    // Create a WSCycle_charge UDR
    WSCycle_charge udr = udrCreate(WSCycle_charge);
    // Create a Charge UDR as parameter
    udr.param = udrCreate(Charge);
    // Populate the parameter with data
    udr.param.chargingEvent = udrCreate(ChargingEvent);
    udr.param.chargingEvent.amount = 0.50;
    udr.param.chargingEvent.id = "0123456789";
    udr.param.serviceType = "SMS";
    // Route the UDR
    udrRoute(udr);
}

3. To enter the UDR type, click on the Set To Input button.

The type bytearray appears on the UDR types list.

4. In the Web Service agent Configuration tab, click on the Browse button to enter the WS profile.

5. To automatically set the HTTP address click on the Extract Profile Settings button.


In this example the HTTP address is http://localhost:8080/charge. This is the same address
that the Web Service Provider is assigned with. Using the same address in both the Provider
agent and the Requester agent enables the Web Service Requester workflow to act as a client
of the Web Service Provider workflow.

6. In the Analysis_2 Configuration view, enter the following APL code into the text pane:

consume {
debug(input);
}

7. Click on the Set To Input button.

bytearray will appear on the UDR types list.

13.5.7.4. Running the Workflows


To run the workflows and watch their operation, open Workflow Monitor and follow the instructions
below:

1. In the Workflow Monitor view, select the Debug option in the Edit menu.

The text "Debug Active (Event)" will appear at the bottom left corner of the Workflow Monitor.

2. Click on the Start button.

3. Once both workflows are running, to establish a connection port, run the following command from
a command line view:

telnet localhost 3210

4. From the telnet view, enter data to the Requester workflow. To trigger the Analysis agent and have
it route WSCycle_charge UDRs to the Web Service Requester agent, press ENTER repeatedly and
expect the following:

• Prior to every request, the debug event message "About to send" appears at the bottom of the
Workflow Monitor view, and the Requester then sends the request to the Web Service Provider.

• In the Web Service Provider workflow, the Analysis agent first generates a debug message with
the content of the param field, and then creates a response.

• The Web Service Provider agent sends the response back to the Web Service Requester agent.

• The Web Service Requester generates the debug event "Done" and routes the WSCycle_charge
UDR to Analysis_2.

• In the debug event pane, Analysis_2 announces the contents of the WSCycle_charge UDR.


13.6. Workflow Bridge Agents


13.6.1. Introduction
This section describes the Workflow Bridge agents. These are standard agents of the MediationZone®
Platform. The agents are used for fast collection and forwarding of data between MediationZone®
workflows. The agents can be part of both batch and real-time workflows. The Workflow Bridge for-
warding agent is listed among the processing agents in Desktop while the Workflow Bridge collection
agent is listed among the collection agents.

13.6.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

13.6.2. Overview
The Workflow Bridge agents act as a bridge for communication between real-time workflows, or
between a batch and a real-time workflow, within the same MediationZone® system.

The Workflow Bridge agents do not use any storage server to manage the data. Data is instead stored
in memory cache when executing on the same EC, or streamed directly from one agent to another,
over TCP/IP, when executing on different ECs. This provides for efficient transfer of data, especially
from batch to real-time workflows.

The Forwarding and Collection workflows communicate by using a dedicated set of UDRs:

• Data is sent from a Forwarding workflow to a Collection workflow in a ConsumeCycleUDR. Refer
  to Section 13.6.6.1, “ConsumeCycleUDR” for further information.

In the ConsumeCycleUDR there are also fields that enable broadcasting and load balancing.
Broadcasting, i.e. sending the same UDR to several different workflows in the collecting workflow
configuration, can be performed to a configurable number of workflows. Load balancing enables you
to configure which workflow each UDR should be sent to, based on criteria of your choice.

Note! If any other UDR than ConsumeCycleUDR is routed to the forwarding agent, the bridge
will only support one collector, which means that it will not be possible to broadcast or load
balance.

• Each state of a Forwarding workflow is sent in a separate WorkflowState UDR to the Real-time
Collection agent. The Batch Forwarding workflow sends all states from initialize to deinitialize,
while a Real-time Forwarding workflow only sends the initialize state. The deinitialize state is sent
by the Real-time Collection workflow if the connection goes down between the Collection workflow
and a Forwarding workflow (Batch or Real-time).

For more information regarding the workflow execution states, see Section 4.1.11.6, “Workflow
Execution State”

• User defined action UDRs can be sent from the Collection workflow to communicate actions back
to the Forwarding workflow. Refer to Section 13.6.6.4, “User Defined Action UDRs” for further
information.

In the collecting workflow, the APL has to be configured to communicate responses for the UDRs to
the Workflow Bridge Collection agent. When both the forwarding and collecting workflows are real-
time workflows, only responses for WorkflowState UDRs have to be configured. However, when the
forwarding workflow is a batch workflow, responses for ConsumeCycle UDRs have to be configured
as well.

Responses for WorkflowState UDRs are always communicated back to the forwarding workflow. If
you want to communicate responses for ConsumeCycle UDRs back to the forwarding workflow as
well, the Send Reply Over Bridge option has to be selected in the Workflow Bridge profile, see
Section 13.6.3.3, “Workflow Bridge Profile Configuration” for further information.

Workflow Bridge has two essential features: Session Context and Bulk Forwarding.

13.6.2.1. Session Context


By using a session context, arbitrary data can be stored and managed by the Workflow Bridge Real-
time Collection agent during the whole transaction. This is achieved by populating the SessionContext
field in the InitializeCycleUDR or BeginBatchCycleUDR with the data, which is then included in all
subsequent UDRs after "initialize" or "begin batch". The field is cleared at "deinitialize", when the
connection to the Forwarding workflow goes down.

Note! The SessionContext field is only writable in the InitializeCycleUDR and BeginBatchUDR
and the session context is only available in a Collection workflow.

Refer to Section 13.6.6, “Workflow Bridge UDR Types” for more information about the Workflow
Bridge UDR types.
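
A minimal sketch of how the session context could be used in the APL code of a Collection workflow
is shown below. It assumes the wfb UDR types described in Section 13.6.6; the context value is only
an example, and any data can be stored.

    consume {
        if (instanceOf(input, wfb.InitializeCycleUDR)) {
            wfb.InitializeCycleUDR initUDR = (wfb.InitializeCycleUDR) input;
            // SessionContext is only writable here (or in the BeginBatchCycleUDR).
            initUDR.SessionContext = "session-for-" + initUDR.AgentId;
            udrRoute(initUDR);
        } else if (instanceOf(input, wfb.ConsumeCycleUDR)) {
            wfb.ConsumeCycleUDR ccUDR = (wfb.ConsumeCycleUDR) input;
            // The value set at initialize is available, read-only, in all
            // subsequent UDRs of the same session.
            debug(ccUDR.SessionContext);
            udrRoute(ccUDR);
        } else if (instanceOf(input, wfb.WorkflowStateUDR)) {
            // Acknowledge all other state changes.
            udrRoute((wfb.WorkflowStateUDR) input);
        }
    }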

13.6.2.2. Bulk Forwarding of Data


To enhance performance, it is possible to collect and send data in a bulk from the Forwarding agent.
When the data bulk has been received by the Workflow Bridge Real-time Collection agent, it is unpacked
and forwarded as separate UDRs by the agent.

The bulk is created by the Workflow Bridge Forwarding agent after a configured number of UDRs
has been reached, or after a configured timeout. This is specified in the Workflow Bridge profile, see
Section 13.6.3.3, “Workflow Bridge Profile Configuration” for more information.

Bulking of data can only be performed for data being sent in the ConsumeCycleUDR and not for the
states that are sent in the state specific UDRs.

Bulk forwarding is not performed when the Workflow Bridge agents are executing on the same EC.

13.6.2.3. Transaction Safety


The Workflow Bridge Collection workflow is a real-time workflow and as such, the transaction safety
will have to be handled with APL code. The forwarding workflow execution states are transferred as
Workflow State UDRs from the Workflow Bridge Forwarding Agent, which enables the collection
workflow to take different actions depending on the execution state of the forwarding workflow.

The Workflow Bridge Collection agent requires that all Workflow State UDRs are returned before the
next Workflow State UDR is forwarded into the workflow. The only exception to this is that the
Collection agent accepts having several ConsumeCycleUDRs outstanding at the same time. However,
all ConsumeCycleUDRs must have been returned before the next type of Workflow State UDR is
forwarded. If the forwarding workflow is a batch workflow, this means that all ConsumeCycleUDRs
must be returned before the DrainCycleUDR is forwarded into the workflow.

For more information regarding the workflow execution states, see Section 4.1.11.6, “Workflow Exe-
cution State”.


13.6.2.4. Broadcasting and Load Balancing


The same UDR can be sent to multiple workflows in the collection workflow configuration by using
broadcasting. In case the forwarding workflow configuration has many workflows, all the UDRs sent
from these can also be sent to all the workflows in the collecting workflow configuration.

Load balancing can be used to direct different UDRs to different workflows. Each workflow in the
collection workflow configuration is assigned a LoadId by adding this field to the workflow table in
the Workflow Properties, and then entering a specific Id for each workflow. In the APL code in the
forwarding workflow configuration, you can then determine which UDRs should be routed to which
LoadId.

You configure the number of workflows you want to send UDRs to in the Workflow Bridge profile.
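
A minimal sketch of how these fields could be set in the APL code of the forwarding workflow is
shown below; the routing criterion and the LoadId value are examples only and depend on your own
workflow table configuration.

    consume {
        wfb.ConsumeCycleUDR ccUDR = udrCreate(wfb.ConsumeCycleUDR);
        ccUDR.Data = input;

        // Example criterion - replace with logic of your choice.
        boolean sendToAll = false;
        if (sendToAll) {
            // Broadcast the UDR to all collecting workflows.
            ccUDR.Broadcast = true;
        } else {
            // Direct the UDR to the collecting workflow with LoadId 2.
            ccUDR.LoadId = 2;
        }
        udrRoute(ccUDR);
    }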

13.6.3. Workflow Bridge Profile


The Workflow Bridge Profile enables you to configure the bridge that the Forwarding and Collection
agents use for communication. The profile ties the workflows together.

The Workflow Bridge profile is loaded when you start a workflow that depends on it. Changes to the
profile become effective when you restart the workflow.

13.6.3.1. Workflow Bridge Profile Menu


The contents of the menus in the menu bar may change depending on which Configuration type
has been opened in the currently displayed tab. The Workflow Bridge profile uses the standard menu
items that are visible for all Configurations, and these are described in Section 3.1.1, “Configuration
Menus”.

There is one menu item that is specific for Workflow Bridge profile configurations, and it is described
in the coming section:

13.6.3.1.1. The Edit Menu

External References
    To enable External References in a Workflow Bridge profile. This can be used for
    configuring Number of Collectors, Bulk Size and Bulk Timeout.

13.6.3.2. Workflow Bridge Profile Buttons


The contents of the button panel may change depending on which Configuration type has been
opened in the currently displayed tab. There is a set of standard menu items that are visible for all
Configurations and these are described in Section 3.1.2, “Configuration Buttons”.

13.6.3.3. Workflow Bridge Profile Configuration


To open a Workflow Bridge profile configuration, click on the New Configuration button in the upper
left part of the MediationZone® Desktop window, and then select Workflow Bridge Profile from the
menu.

The Workflow Bridge configuration contains two tabs; General and Advanced.

13.6.3.3.1. General configuration

The General tab is displayed by default when creating or opening a Workflow Bridge profile.


Figure 437. Workflow Bridge Profile

Send Reply over Bridge
    Check this if the Collection workflow shall send a reply back to the Forwarding workflow
    each time a ConsumeCycleUDR has been received. If this is not checked, only
    ConsumeCycleUDRs with a wfbActionUDR will be sent back.

    Note! This only applies for the ConsumeCycleUDR, since the WorkflowState UDRs are always
    acknowledged.

    Note! There is no timeout handling for outstanding ConsumeCycleUDR requests in the
    forwarding agent, which means that this must be handled in the workflow logic if so
    required.

Force Serialization
    The Force Serialization option is enabled by default and applies to situations where the
    Workflows are running on the same EC. It can be disabled for a performance increase, if it
    can be assured that no configurations will be changed during the running of these Workflows.

    Note! If this option is disabled, it is strongly recommended to NOT perform any
    configuration changes while the Workflows are running.

Response Timeout (s)
    This is the time (in seconds) that the Workflow Bridge Forwarding agent will wait for a
    response for a WorkflowState UDR from the Workflow Bridge Real-time Collection agent. After
    the specified time, the Workflow Bridge Forwarding agent will time out and abort the
    workflow. The default value is "60".

Bulk Size
    Bulk Size is configured if data should be bulked by the Workflow Bridge Forwarding agent
    before it is sent to the collection side.

    Configure the number of UDRs that should be bulked. The default value is "0", which means
    that the bulk functionality will not be used.

Bulk Timeout (ms)
    This is the time (in milliseconds) that the Workflow Bridge Forwarding agent will wait in
    case the bulk size criterion is not fulfilled. The default value is "0", which means an
    infinite timeout.

Number of Collectors
    If you want to configure broadcasting or load balancing, i.e. if you want several different
    Workflow Bridge collecting workflows to be able to receive data from the forwarding
    workflow, you enter the number of collecting workflows you want to use in this field.

    The number of collecting workflows connected to the workflow bridge must not exceed the
    limit set by this value. In case of a batch forwarding workflow, it must be started after
    the specified Number of Collectors are running or it will abort. Collecting workflows that
    are started after the limit has been reached will also abort.

    Real-time forwarding workflows do not require that any collectors are running when they are
    started, and the Number of Collectors represents an upper limit only.

    Validation of this configuration will be performed.

UDR Types
    Click the Add button to select which UDR types to include in the ConsumeCycleUDR. This
    could be of type 'bytearray' or any UDR. Click the Remove button to remove a UDR type.

Note! After a Workflow Bridge profile has been changed and saved, all running workflows that
are connected to this profile must be restarted.

13.6.3.3.2. Advanced configurations

In the Advanced tab you can configure additional properties for optimizing the performance of the
Workflow Bridge.

Figure 438. The Workflow Bridge profile - Advanced tab

One example of properties that can be configured under the advanced tab is forwardingQueueSize,
which controls the number of UDRs that can be queued by the Workflow Bridge Forwarding Agents.

See the text in the Properties field for further information about the properties.

13.6.4. Workflow Bridge Forwarding Agents


The Workflow Bridge Forwarding agent is responsible for sending data to a Workflow Bridge Collection
agent.


13.6.4.1. Batch Workflow Configuration


To open the configuration dialog for the Workflow Bridge agent, right-click the agent and select
Configuration..., or double-click the agent.

Figure 439. Forwarding Agent Configuration Dialog for a Batch Workflow

Profile
    This is the profile to use for communication between the workflows. For information about
    how to configure a Workflow Bridge profile, see Section 13.6.3.3, “Workflow Bridge Profile
    Configuration”.

    The Workflow Bridge supports one Batch Forwarding workflow connected to one or several
    Collection workflows.

    All workflows in the same workflow configuration can use separate profiles. For this to
    work, the profile must be set to Default in the Workflow Table tab found in the Workflow
    Properties dialog. For further information on the Workflow Table tab, refer to
    Section 4.1.7, “Workflow Table”.

    To select a profile, click on the Browse... button, select the profile to use, and then
    click OK.

13.6.4.1.1. Transaction Behavior

13.6.4.1.1.1. Emits

The agent emits data in the ConsumeCycleUDR and the WorkflowState UDRs.

13.6.4.1.1.2. Retrieves

The agent retrieves ConsumeCycleUDRs and WorkflowState UDRs from the Collection workflow.

13.6.4.1.2. Introspection

The agent consumes bytearray types and any UDRs, as configured in the profile. Please refer to
Section 13.6.3.3, “Workflow Bridge Profile Configuration” for more information.

13.6.4.1.3. Meta Information Model

Note! For information about the MediationZone® MIM and a list of the general MIM parameters,
see Section 2.2.10, “Meta Information Model”.

13.6.4.1.3.1. Publishes

Number of collectors
    Number of collectors is of the int type and indicates the number of collectors configured
    in the Workflow Bridge profile.

Forwarding Queue Size
    Forwarding Queue Size is of the int type and contains the value of forwardingQueueSize,
    which is defined under the Advanced tab in the Workflow Bridge profile configuration.

Forwarding Queue Utilization
    Forwarding Queue Utilization is of the any type and contains a hashmap of the number of
    UDRs that are currently queued in the forwarding workflows. The loadId values of the
    collecting workflows are used as keys to the hashmap. If loadId is not used, the key will
    have the value 1.

    The hashmap is periodically refreshed and cached in order to prevent impact on performance
    when the MIM value is repeatedly queried.

Example 108. Example use of Forwarding Queue Utilization in APL

consume {
    map<int, int> queueMap = (map<int, int>)
        mimGet("Workflow_Bridge_1", "Forwarding Queue Utilization");
    int loadId = 1;
    int queueSize = mapGet(queueMap, loadId);
    debug("Queue Utilization: " + queueSize);
    wfb.ConsumeCycleUDR ccUDR = udrCreate(wfb.ConsumeCycleUDR);
    ccUDR.Data = input;
    udrRoute(ccUDR);
}

13.6.4.1.3.2. Accesses

The agent does not access any MIM resources.

13.6.4.1.4. Agent Message Events

There are no agent message events for this agent.

For information about the agent message event type, see Section 5.5.14, “Agent Event”.

For information about the agent message event type, see the MediationZone® Desktop User's Guide.

13.6.4.1.5. Debug Events

Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Connected to local:<Forwarding agent id>.workflow_bridge

This message is displayed when a connection has been established with a Workflow Bridge Real-
time Collection agent.

• Ready with transaction <transaction number>

This message is displayed each time a transaction has been finished, that is, after endBatch.

• Disconnected

This message is displayed when a connection has been released.

13.6.4.2. Real-time Workflow Configuration


To open the configuration dialog for the Workflow Bridge agent, right-click the agent and select
Configuration..., or double-click the agent.


Figure 440. Forwarding Agent Configuration Dialog for a Real-time Workflow

Profile
    This is the profile to use for communication between the workflows. For information about
    how to configure a Workflow Bridge profile, see Section 13.6.3.3, “Workflow Bridge Profile
    Configuration”.

    All workflows in the same workflow configuration can use separate profiles. For this to
    work, the profile must be set to Default in the Workflow Table tab found in the Workflow
    Properties dialog. For further information on the Workflow Table tab, refer to
    Section 4.1.7, “Workflow Table”.

    To select a profile, click on the Browse... button, select the profile to use, and then
    click OK.

13.6.4.2.1. Transaction Behavior

For information about the general MediationZone® transaction behavior, see Section 4.1.11.8,
“Transactions”.

13.6.4.2.1.1. Emits

The agent emits data in the ConsumeCycleUDR and the WorkflowState UDRs.

13.6.4.2.1.2. Retrieves

The agent retrieves ConsumeCycleUDRs and ErrorCycleUDRs from the Collection workflow.

13.6.4.2.2. Introspection

The agent consumes bytearray types and any UDRs, as configured in the profile, refer to Sec-
tion 13.6.3.3, “Workflow Bridge Profile Configuration” for more information.

13.6.4.2.3. Meta Information Model

Note! For information about the MediationZone® MIM and a list of the general MIM parameters,
see Section 2.2.10, “Meta Information Model”.

13.6.4.2.3.1. Publishes

Number of collectors
    Number of collectors is of the int type and indicates the number of collectors configured
    in the Workflow Bridge profile.

Forwarding Queue Size
    Forwarding Queue Size is of the int type and contains the value of forwardingQueueSize,
    which is defined under the Advanced tab in the Workflow Bridge profile configuration.

Forwarding Queue Utilization
    Forwarding Queue Utilization is of the any type and contains a hashmap of the number of
    UDRs that are currently queued in the forwarding workflows. The loadId values of the
    collecting workflows are used as keys to the hashmap. If loadId is not used, the key will
    have the value 1. The hashmap is periodically refreshed and cached in order to prevent
    impact on performance when the MIM value is repeatedly queried.


Example 109. Example use of Forwarding Queue Utilization in APL

consume {
    map<int, int> queueMap = (map<int, int>)
        mimGet("Workflow_Bridge_1", "Forwarding Queue Utilization");
    int loadId = 1;
    int queueSize = mapGet(queueMap, loadId);
    debug("Queue Utilization: " + queueSize);
    wfb.ConsumeCycleUDR ccUDR = udrCreate(wfb.ConsumeCycleUDR);
    ccUDR.Data = input;
    udrRoute(ccUDR);
}

13.6.4.2.3.2. Accesses

The agent does not access any MIM resources.

13.6.4.2.4. Agent Message Events

There are no agent message events for this agent.

For information about the agent message event type, see Section 5.5.14, “Agent Event”.

13.6.4.2.5. Debug Events

Debug messages are dispatched in debug mode. During execution the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Connected to local <Collector agent address>

This message is displayed when a connection has been established with a Collection agent.

• Disconnected from <Collector agent address>

This message is displayed when a connection has been released.

• Trying to connect to <Collector agent address>

This message is displayed when a forwarding agent is trying to connect to a collector.

13.6.4.3. Defining forwarding host


In case you want to specify which host the forwarding agents should connect to, you can add the fol-
lowing property:

<property name="wfb.host" value="<host>"/>

in the executioncontext.xml file and enter the host you want the agent to connect to as the value.


13.6.5. Workflow Bridge Collection Agent


The Workflow Bridge Real-time Collection agent collects data that has been sent by a Workflow
Bridge Forwarding agent.

13.6.5.1. Real-time Workflow Configuration


To open the configuration dialog for the Workflow Bridge agent, right-click the agent and select
Configuration..., or double-click the agent.

Figure 441. Collection agent Configuration Dialog for a Real-time Workflow

Profile
    This is the profile to use for communication between the workflows. For information about
    how to configure a Workflow Bridge profile, see Section 13.6.3.3, “Workflow Bridge Profile
    Configuration”.

    All workflows in the same workflow configuration can use separate profiles. For this to
    work, the profile must be set to Default in the Workflow Table tab found in the Workflow
    Properties dialog. For further information on the Workflow Table tab, refer to
    Section 4.1.7, “Workflow Table”.

    To select a profile, click on the Browse... button, select the profile to use, and then
    click OK.

Port
    This is the default port that the collecting server will listen to for incoming requests.
    A valid port value is between 1 and 65535.

    If you have a collecting workflow configuration with several workflows, you have to open
    the Workflow Properties and set the WFB Collector - Port field to Default. Then you can
    enter the different ports you want to use for the workflows in the workflow table, as each
    one needs to listen to a separate port.

    Note! If both the collection and forwarding workflows are executing on the same execution
    context, an ephemeral port will be used regardless of the value set in this field.

13.6.5.1.1. Transaction Behavior

In the Batch Forwarding to Real-time Collection scenario, the Workflow Bridge Real-time Collection
agent routes the states retrieved from the Workflow Bridge Batch Forwarding agent to the Collection
workflow. These are the states:

• initialize

• beginBatch

• drain

• endBatch

• commit


• deinitialize

• cancelBatch

• rollback

The Collection workflow must handle all the states and send a reply to the batch Forwarding workflow
by returning the corresponding WorkflowState UDR. For more information regarding the states, see
Section 4.1.11.6, “Workflow Execution State”.
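
A minimal sketch of this state handling in the APL code of the Collection workflow is shown below;
it simply returns every state UDR, and the ConsumeCycleUDRs carrying the data, back to the Workflow
Bridge Real-time Collection agent. Any transaction logic of your own would be added in the
respective branches.

    consume {
        if (instanceOf(input, wfb.ConsumeCycleUDR)) {
            wfb.ConsumeCycleUDR ccUDR = (wfb.ConsumeCycleUDR) input;
            // Process the data ...
            debug(ccUDR.Data);
            // Return the UDR so that it is not outstanding when the
            // DrainCycleUDR arrives.
            udrRoute(ccUDR);
        } else if (instanceOf(input, wfb.WorkflowStateUDR)) {
            // Acknowledge initialize, beginBatch, drain, endBatch, commit,
            // deinitialize, cancelBatch and rollback by returning the UDR.
            udrRoute((wfb.WorkflowStateUDR) input);
        }
    }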

13.6.5.1.2. Introspection

The agent consumes and emits WorkflowStateUDR and ConsumeCycleUDR types.

13.6.5.1.3. Meta Information Model

Note! For information about the MediationZone® MIM and a list of the general MIM parameters,
see Section 2.2.10, “Meta Information Model”.

13.6.5.1.3.1. Publishes

The agent does not publish any MIM resources.

13.6.5.1.3.2. Accesses

The agent does not access any MIM resources.

13.6.5.1.4. Agent Message Events

There are no agent message events for this agent.

For information about the agent message event type, see Section 5.5.14, “Agent Event”.

13.6.5.1.5. Debug Events

Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• The collector started at <local:address>

This message is displayed when a collector is started.

• The collector started at <address>

This message is displayed when a connection has been released.

• Client <Workflow Bridge client> connected to <local:address>

This message is displayed when a forwarding agent connects to a collector.

• The collector stopped.

This message is displayed when a collector is stopped.


13.6.5.1.6. Load Balancing

In order to enable load balancing, you have to enter the Workflow Properties and configure the load
ID field to Default.

A loadID field will then be added in the workflow table. Configure separate Load IDs for each workflow
and save. In the forwarding workflow you can then use these load IDs in the APL for directing certain
UDRs to certain workflows.

13.6.5.1.7. Defining collector host

In case you want to specify which host the collecting agent should connect to, you can add the following
property:

<property name="wfb.host" value="<host>"/>

in the executioncontext.xml file and enter the host you want the agent to bind to as the value.

13.6.6. Workflow Bridge UDR Types


The Workflow Bridge UDR types are designed to exchange data between the workflows.

The Workflow Bridge UDR types can be viewed in the UDR Internal Format Browser in the 'wfb'
folder. To open the browser, first open an APL Editor, and, in the editing area, right-click and select
UDR Assistance.

13.6.6.1. ConsumeCycleUDR
The ConsumeCycleUDR is the UDR that the Workflow Bridge Forwarding agent populates with data
and routes to the Workflow Bridge Real-time Collection agent. The ConsumeCycleUDR must always
be acknowledged and sent back from an Analysis agent to the Workflow Bridge Real-time Collection
agent if the ConsumeCycleUDR was initially sent by a Workflow Bridge Batch Forwarding agent,
see Section 13.6.7.1.4.2, “Analysis” for an example. This is not needed if the Workflow Bridge For-
warding agent is of type Real-time.

If the Send reply over bridge setting has been configured in the profile, the ConsumeCycleUDR is
sent the whole way back to the Workflow Bridge Batch Forwarding agent. Refer to Section 13.6.3.3,
“Workflow Bridge Profile Configuration” for more information.

The ConsumeCycleUDRs can be sent in a bulk from the Workflow Bridge Forwarding agent, for a
more efficient transfer of the data. This is further described in Section 13.6.2.2, “Bulk Forwarding of
Data”.

The following fields are included in the ConsumeCycleUDR:

Action (wfbActionUDR)
    This field is used by the user to communicate actions back to the Forwarding workflow. It
    can be populated with a user defined action UDR, see Section 13.6.6.4, “User Defined Action
    UDRs”.

AgentId (string)
    This field includes the agent id that is created for the Workflow Bridge Forwarding agent
    each time a workflow is started. The id is unique per Workflow Bridge Forwarding agent and
    workflow execution.

Broadcast (boolean)
    This field indicates whether broadcast should be enabled (true) or not (false). If
    broadcast is enabled, the forwarded UDRs will be sent to all the configured workflows in
    the collecting workflow configuration.

Data (any)
    This field can be populated with anything and contains the UDRs or bytearrays that are sent
    from the Forwarding workflow.

LoadId (int)
    If you have configured the number of collectors to be more than 1 in the Workflow Bridge
    profile, LoadIds can be used for determining how data should be distributed. Each workflow
    is assigned a specific LoadId, and then you can use this field in the ConsumeCycleUDR to
    indicate which LoadId, i.e. which workflow, the UDR should be routed to.

SessionContext (any)
    This field might contain data that has been populated in the InitializeCycleUDR or
    BeginBatchCycleUDR by the Workflow Bridge Real-time Collection agent. For more information
    about session context, refer to Section 13.6.2.1, “Session Context”. This field is only
    readable in this UDR.

13.6.6.2. Workflow Execution State UDRs


The Workflow Bridge Forwarding agent sends information to the Workflow Bridge Real-time Collection
agent to report workflow state changes. The state information is delivered in a WorkflowState UDR.

The Workflow Bridge Real-time Collection agent always has to acknowledge a workflow state change
by sending back the WorkflowState UDR to the Workflow Bridge Forwarding agent. For
ConsumeCycleUDRs, this behavior can be controlled using the Send Reply over Bridge setting in the
Workflow Bridge Profile.

For more information regarding the states, see Section 4.1.11.6, “Workflow Execution State”.

The following fields are common for all of the workflow state UDRs:

Field                   Description

AgentId (string)        This field includes the agent id that is created for the Workflow
                        Bridge Forwarding agent each time a workflow is started. The id is
                        unique per Workflow Bridge Forwarding agent and workflow execution.

SessionContext (any)    This field might contain data that has been populated in the
                        InitializeCycleUDR or BeginBatchCycleUDR by the Workflow Bridge
                        Real-time Collection agent. For more information about session
                        context, refer to Section 13.6.2.1, “Session Context”. This field
                        is only readable in this UDR.

The following field is included for all Workflow Execution State UDRs that are specific for batch
forwarding workflows:

Field                   Description

TxnId (long)            This field includes the id for the batch transaction of the batch
                        forwarding workflow.
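
A minimal APL sketch of this acknowledgment, assuming that the route from the Analysis agent leads
back to the Workflow Bridge Collection agent, could look as follows:

consume {
    if (instanceOf(input, wfb.WorkflowStateUDR)) {
        // Every state change must be acknowledged by routing the
        // WorkflowState UDR back towards the Workflow Bridge Collection agent.
        wfb.WorkflowStateUDR stateUDR = (wfb.WorkflowStateUDR) input;
        debug("State change reported by agent " + stateUDR.AgentId);
        udrRoute(stateUDR);
    }
}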

13.6.6.2.1. WorkflowStateUDR

The WorkflowStateUDR defines the common attributes and behaviors for any WorkflowState UDRs.

13.6.6.2.2. InitializeCycleUDR

This UDR is sent when the forwarding workflow enters the initialize execution state.

13.6.6.2.3. BeginBatchCycleUDR

This UDR is sent when the batch forwarding workflow enters the beginBatch execution state.

13.6.6.2.4. ConsumeCycleUDR

This is the UDR that contains the data that is being collected from the forwarding workflow. For more
information about ConsumeCycleUDRs, refer to Section 13.6.6.1, “ConsumeCycleUDR”.


13.6.6.2.5. DrainCycleUDR

This UDR is sent when the batch forwarding workflow enters the drain execution state.

13.6.6.2.6. EndBatchCycleUDR

This UDR is sent when the batch forwarding workflow enters the endBatch execution state.

13.6.6.2.7. CommitCycleUDR

This UDR is sent when the batch forwarding workflow enters the commit execution state.

In addition to the common UDR fields the CommitCycleUDR also includes the following field:

Field                   Description

IsRecovery (boolean)    This field includes information on recovery status, to be able to
                        know if a rollback shall be committed.
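
A minimal APL sketch of how a collecting workflow could inspect this field, assuming the route leads
back to the Workflow Bridge Collection agent, might look as follows:

consume {
    if (instanceOf(input, wfb.CommitCycleUDR)) {
        wfb.CommitCycleUDR commitUDR = (wfb.CommitCycleUDR) input;
        if (commitUDR.IsRecovery) {
            // The commit is part of a crash recovery; handle it accordingly.
            debug("Commit received during recovery");
        }
        // Acknowledge the state change by routing the UDR back.
        udrRoute(commitUDR);
    }
}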

13.6.6.2.8. DeinitializeCycleUDR

This UDR is sent when the forwarding workflow enters the deinitialize execution state.

13.6.6.2.9. CancelBatchCycleUDR

This UDR is sent when the batch forwarding workflow enters the cancelBatch execution state.

13.6.6.2.10. RollbackCycleUDR

This UDR is sent when the batch forwarding workflow enters the rollback execution state.

In addition to the common UDR fields the RollbackCycleUDR also includes the following field:

Field                   Description

IsRecovery (boolean)    This field includes information on recovery status, to be able to
                        know if a rollback shall be committed.

13.6.6.3. ErrorCycleUDR
In the real-time to real-time case, the ErrorCycleUDR is used for returning the original UDR in case
the connection between the forwarding and collection workflow is lost. ErrorCycleUDRs are also
generated if the connection is not yet established due to the starting order of workflows, or if the
forwardingQueueSize is exceeded. For further information about the queue size, see Section 13.6.3.3.2,
“Advanced configurations”.

Note! In Workflow Bridge real-time forwarding and collecting, workflow functions for handling
the ErrorCycleUDRs have to be added. You can either route the ErrorCycleUDR back to the
previous Analysis agent, if this agent contains error handling, or route it to a separate
Analysis agent dedicated to error handling.

In addition to the common field AgentId, the ErrorCycleUDR also includes the following field:

Field                              Description

OriginalUDR (WorkflowBridgeUDR)    This field contains the original WorkflowBridgeUDR.
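
A minimal APL sketch of such error handling in the forwarding workflow could look as follows. The
"error" route name is a hypothetical name used for illustration; adapt it to your own workflow
configuration.

consume {
    if (instanceOf(input, wfb.ErrorCycleUDR)) {
        wfb.ErrorCycleUDR errorUDR = (wfb.ErrorCycleUDR) input;
        debug("Delivery over the bridge failed for agent " + errorUDR.AgentId);
        // The original UDR is available for logging or resending.
        udrRoute(errorUDR.OriginalUDR, "error");
    }
}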


13.6.6.4. User Defined Action UDRs


It is possible to define an Action UDR and send it from the Collection workflow, in a Consume-
CycleUDR, to communicate actions back to the forwarding workflow. The UDR is sent even if Send
Reply over Bridge is not selected in the Workflow Bridge Profile.

The Action UDR is created using the Ultra Format Definition Language (UFDL). This is the classname
you need to extend in UFDL:

com.digitalroute.workflowbridge.transport.ultra.WfbActionUDR

Example 110.
An Action UDR in Ultra Format

external my_ext sequential {


// field definitions
int ActionType : static_size(1);
ascii Anum : terminated_by(0xA);

};

internal WFBActionUDR :
extends_class(
"com.digitalroute.workflowbridge.transport.ultra.WfbActionUDR" ) {
};

in_map ACTION_inMap :
external( my_ext ),
internal( WFBActionUDR ),
target_internal( my_ACTION_TI ) {
automatic;
};

decoder myDecoder : in_map( ACTION_inMap );

For further information about the Ultra Format Editor and the UFDL syntax, refer to the Ultra Format
Management User's Guide.

13.6.7. Examples
This section gives two examples of how to set up Workflow Bridge workflows, in a batch to real-time
and a real-time to real-time scenario. The examples are simple and intended to be used as a base for
further development.

13.6.7.1. Batch to Real-Time Scenario with Action UDR


This section will show an example of a scenario where a Batch Forwarding workflow sends data to a
Real-time Collection workflow. The batch workflow will deliver four UDRs to the realtime workflow
and the second one will be returned with a user defined Action UDR connected to it.

The following configurations will be created:

• An Ultra format


• A Workflow Bridge Profile

• A Workflow Bridge Batch Forwarding Workflow

• A Workflow Bridge Real-Time Collection Workflow

Figure 442. Example of a Batch to Real-Time Scenario

13.6.7.1.1. Define an Ultra Format

A simple Ultra Format needs to be created both for the incoming UDRs as well as for the user defined
WfbActionUDR. For more information about the Ultra Format Editor and the UFDL syntax, refer to
the Ultra Format Management User's Guide.

Create an Ultra Format as defined below:

internal WFBActionUDR :
extends_class( "com.digitalroute.workflowbridge.transport.ultra.WfbActionUDR" ) {
int type;
ascii action;
};

external my_input sequential {


// field definitions
ascii myId : int(base10),terminated_by(",");
ascii text : terminated_by(0xa);
};

// Decoder mapping
in_map inputMap :
external( my_input ),
target_internal( my_internal_TI ) {
automatic;
};

decoder myDecoder : in_map( inputMap );


The input file used in this example should look like:

1,My first UDR


2,My second UDR
3,My third UDR
4,My fourth UDR

13.6.7.1.2. Define a Profile

The profile is used to connect the two workflows. See Section 13.6.3.3, “Workflow Bridge Profile
Configuration” for information how to open the Workflow Bridge Profile editor.

Figure 443. Example of a Profile Configuration

In this dialog, the following settings have been made:

• The Send reply over bridge is not selected which means that only responses for WorkflowStateUDRs
and UDRs with an Action UDR attached to the response will be returned to the forwarding workflow.

• Force serialization is not used since there will be no configuration changes during workflow exe-
cution.

• The Workflow Bridge Real-time Collection agent must always respond to the WorkflowState UDRs.
The Response timeout (s) has been set to "60" and this means that the forwarding workflow that
is waiting for a WorkflowState UDR reply will timeout and abort (stop) after 60 seconds if no reply
has been received from the Collection workflow.

• The Bulk size has been set to "0". This means that the UDRs will be sent from the Workflow Bridge
Forwarding agent one by one, and not in a bulk. Enter the appropriate bulk size if you wish to use
bulk forwarding of UDRs.

• The Bulk timeout (ms) has been set to "0" since there will be no bulk forwarding. Enter the appro-
priate bulk timeout if you wish to use bulk forwarding of UDRs. Bulk timeout can only be specified
if the bulk functionality has been enabled in the Bulk size setting.

• The Number of Collectors is set to 1 since there will be a one-to-one connection in this example.

• Set the UDR type to my_internal_TI by clicking on the Add button. To remove a UDR type
from the UDR Types list, select the UDR type and click the Remove button.


13.6.7.1.3. Create a Batch Forwarding Workflow

In this workflow, a Disk agent collects data that is forwarded to a Decoder agent. The data is routed
by the Decoder agent to the Workflow Bridge Forwarding agent, which in turn forwards the data in a
ConsumeCycleUDR to a Workflow Bridge Real-time Collection agent. Each time the Workflow Bridge
Batch Forwarding workflow changes state, a WorkflowState UDR is sent to the Workflow Bridge
Real-time Collection agent as well.

For more information regarding the states a workflow can have, see Section 4.1.11.6, “Workflow Ex-
ecution State”.

Since the Send reply over bridge option has not been configured, only ConsumeCycleUDRs with an
Action UDR attached are returned from the Workflow Bridge Real-time Collection agent and routed
to an Analysis agent in the Batch Forwarding workflow.

Figure 444. Example of a Batch Forwarding Workflow

The workflow consists of a Disk Collection agent named Disk, a Decoder agent named Decoder, a
Workflow Bridge Batch Forwarding agent named Workflow_Bridge_FW and an Analysis agent
named Actions.

13.6.7.1.3.1. Disk

Disk is a Collection agent that collects data from an input file and forwards it to the Decoder agent.

Double-click on the Disk agent to display the configuration dialog for the agent:

Figure 445. Example of a Disk Agent Configuration

In this dialog, the following settings have been made:


• The agent is configured to collect data from the /home/trunk/in directory, which is stated in
the Directory field. Enter the path to the directory where the file you want to collect is located.

• The agent will collect all files in the directory.

13.6.7.1.3.2. Decoder

The Decoder agent receives the input data from the Disk agent, translates it into UDRs and forwards
them to the Workflow_Bridge_FW agent. Double-click on the Decoder agent to display the
configuration dialog.

Figure 446. Example of a Decoder Agent Configuration

In this dialog, choose the Decoder that you defined in your Ultra Format.

13.6.7.1.3.3. Workflow_Bridge_FW

Workflow_Bridge_FW is the Workflow Bridge Batch Forwarding agent that sends data to the
Workflow Bridge Real-time Collection agent. Each incoming UDR will be included in the Data field
of a ConsumeCycleUDR which is sent to the realtime workflow. Double-click on Workflow_Bridge_FW
to display the configuration dialog for the agent.

Figure 447. Example of a Workflow Bridge Agent Configuration

In this dialog, the following setting has been made:

• The agent has been configured to use the profile that was defined in Section 13.6.7.1.2, “Define a
Profile”.

13.6.7.1.3.4. Actions

Actions is an Analysis agent that receives the responses from the Workflow_Bridge_FW agent.
Since the profile does not have Send Reply Over Bridge checked, the agent will only receive
responses with an Action UDR. Double-click on the Actions agent to display the configuration dialog.


Figure 448. Example of an Analysis Agent Configuration

In this dialog, the APL code for handling input data is written. In the example, there will be a debug
printout of the UDRs with an Action UDR connected. Adapt the code according to your requirements.

You can also see the UDR type used in the UDR Types field; in this example it is a ConsumeCycleUDR.
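
A minimal APL sketch of such a debug printout, given that only ConsumeCycleUDRs with an Action UDR
attached are returned when Send Reply Over Bridge is not selected, could look as follows:

consume {
    if (instanceOf(input, wfb.ConsumeCycleUDR)) {
        wfb.ConsumeCycleUDR ccUDR = (wfb.ConsumeCycleUDR) input;
        // Print the user defined Action UDR that the collecting workflow attached.
        debug(ccUDR.Action);
    }
}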

13.6.7.1.4. Create a Real-Time Collection Workflow

In this workflow, a Workflow Bridge Real-time Collection agent collects the data that has been sent
in a ConsumeCycleUDR from the Workflow Bridge Batch Forwarding agent. It also collects the
WorkflowState UDRs that inform about state changes in the batch forwarding workflow.

An Analysis agent returns all ConsumeCycleUDRs to the Workflow Bridge Real-time Collection
agent, to let the agent know when to send the DrainCycleUDR. The Analysis agent also replies to all
WorkflowState UDRs, so that the Workflow Bridge Batch Forwarding agent will know when to move
forward to the next Agent Execution State. For more information regarding the workflow execution
states, see Section 4.1.11.6, “Workflow Execution State”.

Figure 449. Example of a Real-time Collection Workflow

13.6.7.1.4.1. Workflow_Bridge_C

Workflow_Bridge_C is the Workflow Bridge Real-time Collection agent that receives the data that
the Workflow Bridge Batch Forwarding agent has sent over the bridge. Double-click on the Work-
flow_Bridge_C agent to display the configuration dialog for the agent.

Figure 450. Example of a Workflow Bridge Agent Configuration


In this dialog, the following settings have been made:

• The agent has been configured to use the profile that was defined in Section 13.6.7.1.2, “Define a
Profile”.

• The port that the collector server will listen on for incoming requests has been set to the default
value "3299". However, if the two workflows will execute on the same execution context, an ephemeral
port is used instead.

13.6.7.1.4.2. Analysis

The Analysis agent receives and analyses the data originally sent from the Workflow Bridge Batch
Forwarding agent in the ConsumeCycleUDR, as well as the workflow state information delivered in the
WorkflowState UDRs.

This agent will also look for the UDR that has its Id set to 2 and create an Action UDR for this.

Double-click on the agent to display the configuration dialog.

Figure 451. Example of an Analysis Agent Configuration


Example 111. Example Code

consume {
if (instanceOf(input, wfb.WorkflowStateUDR)) {
udrRoute((wfb.WorkflowStateUDR) input);
} else if (instanceOf(input, wfb.ConsumeCycleUDR)) {
wfb.ConsumeCycleUDR ccUDR = (wfb.ConsumeCycleUDR) input;
//validate content of the incoming UDR
WFBridge.UltraFormat.my_internal_TI myUDR =
(WFBridge.UltraFormat.my_internal_TI) ccUDR.Data;
if (myUDR.myId == 2) {
//Create an action UDR
WFBridge.UltraFormat.WFBActionUDR myAction =
udrCreate( WFBridge.UltraFormat.WFBActionUDR);
myAction.type = 44;
myAction.action = "The second UDR will be returned
to the WF";
ccUDR.Action = myAction;
}
udrRoute((wfb.ConsumeCycleUDR) ccUDR);
} else {
debug(input);
}
}

In this example, a reply is sent back to the Workflow_Bridge_C agent, by routing back the
WorkflowStateUDRs and ConsumeCycleUDRs. Adapt the code according to your requirements.

Note! Since WorkflowState UDRs have to be routed back to the Workflow Bridge Collection
agent in order to be returned to the forwarding workflow, a "response" route has to be added
from the Analysis agent to the Workflow Bridge Collection agent.

You can see the UDR types used in the UDR Types field, i.e. WorkflowStateUDR and ConsumeCycleUDR.

13.6.7.2. Real-Time to Real-Time Scenario with Load Balancing


This section will show an example of a scenario where a Real-time Forwarding workflow will split
the execution of UDRs between three Real-time Collection workflows depending on the incoming
data. Each Collection workflow will add information and send back the ConsumeCycleUDR to the
Real-time Forwarding workflow for further execution. The following configurations will be created:

• An Ultra Format

• A Workflow Bridge Profile

• A Workflow Bridge Real-Time Forwarding Workflow

• A Workflow Bridge Real-Time Collection Workflow


Figure 452. Example of a Real-Time to Real-Time Scenario

13.6.7.2.1. Define an Ultra Format

A simple Ultra Format needs to be created in order to forward the incoming data and enable the Col-
lection workflows to populate it with more information. For more information about the Ultra Format
Editor and the UFDL syntax, refer to the Ultra Format Management User's Guide.

Create an Ultra Format as defined below:

internal myInternal {
string inputValue;
string executingWF;
};

13.6.7.2.2. Define a Profile

The profile is used to connect the forwarding workflow towards the three collection workflows. See
Section 13.6.3.3, “Workflow Bridge Profile Configuration” for information how to open the Workflow
Bridge Profile editor.


Figure 453. Example of a Profile Configuration

In this dialog, the following settings have been made:

• The Send Reply Over Bridge is selected which means that all ConsumeCycleUDRs will be returned
to the Workflow Bridge forwarding agent.

• Force serialization is not used since there will be no configuration changes during workflow exe-
cution.

• The Workflow Bridge Real-time Collection agent must always respond to the WorkflowState UDRs.
The Response Timeout (s) has been set to "60" and this means that the Workflow Bridge Real-time
Forwarding agent that is waiting for a WorkflowState UDR reply will timeout and abort (stop) after
60 seconds if no reply has been received from the Real-time Collection workflow.

Enter the appropriate timeout value to set the timeout for the Workflow Bridge Real-time Forwarding
agent.

• The Bulk Size has been set to "0". This means that the UDRs will be sent from the Workflow Bridge
Real-time Forwarding agent one by one, and not in a bulk. Enter the appropriate bulk size if you
wish to use bulk forwarding of UDRs.

• The Bulk Timeout (ms) has been set to "0" since there will be no bulk forwarding. Enter the appro-
priate bulk timeout if you wish to use bulk forwarding of UDRs. Bulk timeout can only be specified
if the bulk functionality has been enabled in the Bulk size setting.

• Since the UDRs in this example will be split between three different workflows, the Number of
Collectors has been set to "3".

13.6.7.2.3. Create a Real-Time Forwarding Workflow

In this workflow, a TCP/IP agent collects data that is forwarded to an Analysis agent. The Analysis
agent will define the receiving Real-time Collection workflow before the ConsumeCycleUDR is sent
to the Workflow Bridge Forwarding agent. The Workflow Bridge Forwarding agent will distribute
the UDRs to the correct collection workflow and forward the returning ConsumeCycleUDR to another
Analysis agent for further execution.


Figure 454. Example of a Real-Time Forwarding Workflow

The workflow consists of a TCP/IP agent, an Analysis agent named Analysis, a Workflow Bridge
Real-time Forwarding agent named Workflow_Bridge_FWD and a second Analysis agent named
Result.

13.6.7.2.3.1. TCP/IP

TCP/IP is a Collection agent that collects data using the standard TCP/IP protocol and forwards it to
the Analysis agent.

Double-click on the TCP_IP agent to display the configuration dialog for the agent:

Figure 455. Example of a TCP/IP Agent Configuration

In this dialog, the following settings have been made:

• Host has been set to "10.46.20.136". This is the IP address or hostname to which the TCP/IP agent
will bind.

• Port has been set to "3210". This is the port number from which the data is received.

• Allow Multiple Connections has been selected and Number of Connections Allowed has been
set to "2". This is the number of TCP/IP connections that are allowed simultaneously.

13.6.7.2.3.2. Analysis

The Analysis agent receives the input data from the TCP/IP agent. It defines which Real-time
Collection workflow should be chosen and forwards the ConsumeCycleUDR to the Workflow_Bridge_FWD
agent. Double-click on the Analysis agent to display the configuration dialog.


Figure 456. Example of an Analysis Agent Configuration

Example 112. Example Code

consume {
wfb.ConsumeCycleUDR ccUDR = udrCreate(wfb.ConsumeCycleUDR);
WFBridge.myFormat.myInternal data =
udrCreate(WFBridge.myFormat.myInternal);
data.inputValue = baToStr(input);
debug("First character is: " +
strSubstring(data.inputValue,0,1));
if (strStartsWith(data.inputValue,"1") ||
strStartsWith(data.inputValue,"2")) {
int wfId;
strToInt(wfId,strSubstring(data.inputValue,0,1));
ccUDR.LoadId = wfId;
} else {
ccUDR.LoadId = 3;
}
ccUDR.Data = data;
udrRoute(ccUDR);
}

In this dialog, the APL code for handling input data is written. In the example, the incoming data is
analyzed and depending on the first character in the incoming data, the receiving Real-time Collection
workflow is chosen by setting the LoadId in the ConsumeCycleUDR, which is sent to the Work-
flow_Bridge_FWD agent. Adapt the code according to your requirements.

13.6.7.2.3.3. Workflow_Bridge_FWD

Workflow_Bridge_FWD is the Workflow Bridge Real-time Forwarding agent that sends data to the
Workflow Bridge Real-time Collection agent. Double-click on Workflow_Bridge_FWD to display
the configuration dialog for the agent.


Figure 457. Example of a Workflow Bridge Agent Configuration

In this dialog, the following settings have been made:

• The agent has been configured to use the profile that was defined in Section 13.6.7.2.2, “Define a
Profile”.

13.6.7.2.3.4. Result

The Result agent is an Analysis agent that receives the returning ConsumeCycleUDRs and potential
ErrorCycleUDRs from the Workflow_Bridge_FWD agent. Double-click on the Result agent to display
the configuration dialog.

Figure 458. Example of an Analysis Agent Configuration

Example 113. Example Code

consume {
if (instanceOf(input, wfb.ErrorCycleUDR)) {
debug("Something went wrong");
} else if (instanceOf(input, wfb.ConsumeCycleUDR)) {
wfb.ConsumeCycleUDR ccUDR = (wfb.ConsumeCycleUDR)input;
WFBridge.myFormat.myInternal data =
(WFBridge.myFormat.myInternal)ccUDR.Data;
string msg = ("Value " + data.inputValue +
" was executed by " + data.executingWF);
debug(msg);
}
}

In this dialog, the APL code for further handling of the UDRs is written. In the example, only simple
debug messages are used as output. Adapt the code according to your requirements.


13.6.7.2.4. Create the Real-Time Collection Workflows

In this workflow, a Workflow Bridge Real-time Collection agent collects the data that has been sent
in a ConsumeCycleUDR from the Workflow Bridge Real-time Forwarding agent and returns an updated
ConsumeCycleUDR.

Figure 459. Example of a Real-time Collection Workflow

13.6.7.2.4.1. Workflow_Bridge_Coll

Workflow_Bridge_Coll is the Workflow Bridge Real-time Collection agent that receives the data
that the Workflow Bridge Real-time Forwarding agent has sent over the bridge. Double-click on the
Workflow_Bridge_Coll agent to display the configuration dialog for the agent.

Figure 460. Example of a Workflow Bridge Agent Configuration

In this dialog, the following settings have been made:

• The agent has been configured to use the profile that was defined in Section 13.6.7.2.2, “Define a
Profile”.

• The port that the collector server will listen on for incoming requests has been set to the default
value "3299".

13.6.7.2.4.2. Analysis

The Analysis agent receives and analyses the data originally sent from the Workflow Bridge Real-time
Forwarding agent in the ConsumeCycleUDR, as well as the workflow state information delivered in the
WorkflowStateUDR.

Double-click on the agent to display the configuration dialog.


Figure 461. Example of an Analysis Agent Configuration

Example 114. Example Code

consume {
if (instanceOf(input, wfb.WorkflowStateUDR)) {
udrRoute((wfb.WorkflowStateUDR)input, "response");
} else if (instanceOf(input, wfb.ConsumeCycleUDR)) {
wfb.ConsumeCycleUDR ccUDR = (wfb.ConsumeCycleUDR)input;
WFBridge.myFormat.myInternal data =
(WFBridge.myFormat.myInternal)ccUDR.Data;
debug("Incoming data: " + data.inputValue);
data.executingWF =
(string)mimGet("Workflow","Workflow Name");
ccUDR.Data = data;
udrRoute(ccUDR, "response");
} else {
debug(input);
}
}

In this example, each ConsumeCycleUDR will populate the data field executingWF with the name
of the executing workflow. Also WorkflowStateUDRs are routed back. Adapt the code according to
your requirements.

13.6.7.2.4.3. Workflow Table

Since this example will load balance between three workflows, additional workflows are added to the
workflow table.

Right-click in the workflow template and choose Workflow Properties to display the Workflow
Properties dialog.


Figure 462. Example of Workflow Properties

In this dialog, the following settings have been made:

• Workflow - Execution - Execution Settings and Workflow_Bridge_Coll - WFB_Collector - Port
have default checked, which means they will use the configured value in the template unless
a new value is given in the Workflow Table.

• Workflow_Bridge_Coll - WFB_Collector - loadID has Per Workflow set, which means that the
value must be specified in the Workflow Table.

• Number of Workflows to Add has been set to "2", since one workflow already exists and the example
needs two additional workflows.

The Workflow Table will contain three workflows that all will communicate with the Real-time
forwarding workflow.

Figure 463. Example of a workflow table

Populate the Workflow Table with correct settings for each workflow:

• Name should be set to a unique name for each workflow.

• Set which EC each workflow will execute on in Execution Settings.

• Each workflow needs a unique port for communication with the Workflow_Bridge_FWD agent.

• The loadID needs to correspond with the APL code and should be "1", "2" and "3" in this example.


14. Appendix VI - Collection and Forwarding Agents

14.1. Database Agents


14.1.1. Introduction
This section describes the Database collection and Database forwarding agents. These are standard
agents on the DigitalRoute® MediationZone® Platform.

The MediationZone® Database agent is supported for use only with the following databases:

• Oracle

• Sybase

• SQL Server databases

Unless specified otherwise, Oracle is the MediationZone® standard and default database.

14.1.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• Structured Query Language (SQL)

• UDR structure and contents

14.1.2. Database Collection Agent


The Database Collection agent collects rows from a database table on a remote or local database and
inserts them as UDRs into a MediationZone® workflow.

When the workflow is executed the agent will create a query in SQL, based on the user configuration
and retrieve all rows matching the statement. For each row a UDR is created and populated according
to the assignments in the configuration window.

The agent uses and requires a transaction ID column to utilize rollback functionality. Additionally,
based on the configuration, the agent deletes the data in the table after it has been inserted into the
workflow. When all the matching data has been successfully processed, the agent stops to await the
next activation, either scheduled or manually initiated.

14.1.2.1. Configuration
The Database Collection agent configuration window is displayed when a database agent in a workflow
is double-clicked or right-clicked, selecting Configuration....


14.1.2.1.1. Source Tab

Figure 464. Database Collection agent configuration window, Source tab.

The Source tab contains configurations related to the placement and handling of the source database
table and its data, as well as the UDR type to be created and populated by the agent.

UDR Type               Type of UDR to be created and populated.

Database               Profile name of the database that the agent will connect to and
                       retrieve data from. The list is populated each time the configuration
                       window is opened. For further information about database profile
                       setup, see Section 9.3, “Database Profile”.

                       Refresh must be selected if changes have been made in the customer
                       database. This will update the presented information in the Source
                       tab.

                       The Database Collection agent does not support Fast Connection
                       Failover (FCF) used when using an Oracle RAC enabled database for
                       the database agent.

Use Default            Check this to use the default database schema for the chosen database
Database Schema        and user.

                       This is not applicable for all database types. Use Default Database
                       Schema is available for selection only when accessing Oracle
                       databases.

                       Tables within the default schema will be listed without schema
                       prefix.

Table Name             Name of the working table in the selected Database, in which the data
                       to be collected resides. The list is populated each time a new
                       Database is selected. For further information and an example of a
                       working table, see Section 14.1.4.3.1, “Working Table”.

Transaction ID         Name of the column in the selected Table, which is utilized for the
column                 transaction ID. The list is populated each time a Table Name is
                       selected. The column must be of the data type number, with at least
                       twelve digits.

Remove                 If enabled, this option will remove the collected data rows from the
                       working table.

Mark as Collected      If enabled, this option will assign the value -1 to the Transaction ID
                       column for all the collected rows.

Run SP                 If enabled, this option executes a user defined stored procedure that
                       is responsible for the handling, most often removal, of the collected
                       data.

                       It is important that this procedure actually deletes the data or sets
                       the Transaction ID to -1, to avoid the data being recollected.

                       For further information and an example of such a stored procedure,
                       see Section 14.1.4.3.2, “After Collection Stored Procedure”.

Ignore                 Select to have the collected data remain in the table even after
                       collection. Note that while the data state remains unchanged after
                       collection, the Transaction ID value is updated.

                       By keeping the data in the table you can collect it repeatedly while
                       designing and testing a workflow, for example.

14.1.2.1.2. Assignment Tab

Figure 465. Database Collection agent configuration window, Assignment tab.

The Assignment tab contains the mapping of column values to UDR fields. The content and use of
this tab is described in detail in Section 14.1.4.2.1, “Assignments”.

If the Source tab is correctly configured and the Assignment tab is selected, the table will automatically
be populated, as if Refresh was clicked. If assignments already exist in the Assignment tab, then
Refresh must be manually clicked for the assignments to be updated with the configurations in the
Source tab.

Potential changes in the database table will not be visible until the Refresh button for the data-
base, in the Source tab, has been clicked.

Only the value types UDR Field, To UDR and NULL, described in Section 14.1.4.2.2, “Value Types”,
are available for selection.


14.1.2.1.3. Condition Tab

Figure 466. Database Collection agent configuration window, Condition tab.

In the Condition tab, query constraints may be added to limit the selection of data. The statement must
follow the standard SQL WHERE-clause syntax, except for the initial where and the final semi-colon
(;) which are automatically appended to the entered condition statement. It is, for instance, possible
to include an order by statement to get the rows sorted.

The condition statement may contain dynamic parameters, represented by question marks that in
run-time will be replaced by a value. If the text area contains question marks, Assign Parameters...
must be selected, to be able to assign values to these parameters. The assignments are made in the
Parameter Editor dialog.

Figure 467. Database Collection agent configuration window, Parameter Editor window.

In this dialog each parameter, represented as a question mark in the condition statement, appears as
one row. The value types available are MIM Entry and Constant. Since constant values are also
possible to be given directly in the condition statement, MIM Entry is most likely to be used here.


14.1.2.1.4. Advanced Tab

Figure 468. Database Collection agent configuration window, Advanced tab.

The Advanced tab contains a setting for performance tuning and allows viewing of the generated SQL
statement, based on the configuration in the Source, Assignment and Condition tabs.

Commit Window Size        The number of UDRs (rows) to be removed between each database
                          commit command. This value is used to tune the performance. If
                          tables are small and contain no Binary Objects, the value may be
                          set higher than the default. Default is 1000. The window size can
                          be set to any value between 1-60000, where setting 1 means that
                          commit is performed after 1 UDR, and setting 60000 means that
                          commit is performed after 60000 UDRs.

                          Rows are only removed if Remove is enabled in the Source tab.

Generated SQL Statement   In this window the SQL statement to be used to query the database
                          is shown. It may be used for debug purposes or for pure interest.

                          In order for the statement to appear, the Source and Assignment
                          tabs have to be properly configured. If not, information about the
                          first detected missing or erroneous setting is displayed.

14.1.2.2. Transaction Behavior


The Database Collection agent performs some extra maneuvers to ensure that data is not recollected
or lost if the workflow aborts before the collection of a batch has finished correctly.

1. A unique Transaction ID is retrieved for each new batch.

2. The pending transaction table is queried for all pending Transaction IDs, to be compared with the
transaction IDs in the working table, from which the agent will collect.

3. The SQL query is built and executed, and all matching rows are collected. In addition to the user
defined condition, the agent adds some conditions to the query, to ensure that pending data, cancelled
data and data marked as collected is not collected.

4. For each row that has been successfully converted to a UDR, the agent updates its Transaction ID
column to the Transaction ID retrieved in step 1.

5. When all rows matching the query have been successfully collected, the After Collection config-
uration in the Source tab, is used.


a. If Remove, all rows with the given Transaction ID are removed in batches of the size configured
as the Commit Window Size, in the Advanced tab.

b. If Mark as Collected, all rows with the given Transaction ID are updated with the reserved
Transaction ID value -1.

c. If Run SP, the user defined stored procedure is executed. For further information, see Sec-
tion 14.1.4.3.2, “After Collection Stored Procedure”.

14.1.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command          Description

Begin Batch      Emitted after the SQL select statement execution.

End Batch        Emitted after the SQL select statement execution, when all possible matching
                 rows have been successfully inserted as UDRs in the workflow.

If the SQL select statement does not return any data, Begin and End Batch will not be emitted,
not even if Produce Empty Files is selected in a Forwarding Disk agent.

14.1.2.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command          Description

Cancel Batch     All rows with the current Transaction ID are updated with the reserved
                 Transaction ID -2. If these rows are to be recollected, the Transaction ID
                 column must first be set to 0 (zero). If set to NULL this row cannot be
                 collected.

                 The database row that issued the Cancel Batch request is written to the
                 System Log.

                 If the Cancel Batch behavior defined on workflow level is configured to
                 abort the workflow, the agent will never receive the last Cancel Batch
                 message. In this situation the rows will not be updated with the reserved
                 Transaction ID -2.

Hint End Batch   An End Batch call will be issued, causing the original batch returned by the
                 SQL query to be split at the current UDR. The database commit command is
                 executed, followed by a new select statement to fetch the remaining UDRs
                 from the table.

14.1.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces the UDR types selected from the UDR Type list.

14.1.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.


14.1.2.4.1. Publishes

MIM Parameter      Description

Database           This MIM parameter contains the name of the database the agent is
                   collecting from.

                   Database is of the string type and is defined as a global MIM context
                   type.

Table              This MIM parameter contains the name of the working table the agent is
                   collecting from.

                   Table is of the string type and is defined as a global MIM context type.

Source Filename    This MIM parameter contains the name of the currently processed file, as
                   defined at the source.

                   Source Filename is of the string type and is defined as a header MIM
                   context type.

14.1.2.4.2. Accesses

The agent accesses MIM resources if MIM parameter assignments are set in the Parameter Editor
in the Condition tab.

14.1.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with table: tablename

Reported, along with the name of the working table, when all rows are collected from it.

14.1.2.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Start collecting

Indicates that possible cleanup procedures are finalized and that the actual collection begins.

• No rows selected from table tablename

Reported, along with the name of the working table, if no rows have been selected for collection.

• Avoids reading the following pending txn ids: list of ids

Reported, along with a list of Transaction IDs, if the constructed SQL select statement finds any
pending Transaction IDs in the pending transaction table. Rows marked with these transaction IDs
will be excluded by the query.

• Marking collected data as cancelled

Reported when a Cancel Batch is received.


• Deleting collected data

Reported when collected rows are removed after collection, if Remove is selected in After Collection.

• Has deleted n rows in tablename

Subevent to the Deleting collected data event, stating the number of rows removed by
each SQL commit command. The maximum number depends on the Commit Window Size.

• Marking collected data

Reported when collected rows are marked, if Mark as Collected is selected in After Collection.

• Has updated n rows in tablename

Subevent to the Marking collected data event, stating the number of rows marked as col-
lected by each SQL commit command. The maximum number depends on the Commit Window
Size.

• Taking care of collected data via SPname

Reported, along with the name of the stored procedure, when it is called after collection, if Run SP
is selected in After Collection.

• Try to force running Stored Procedure to Stop

Reported when stopping a workflow that has a Stored Procedure running.

14.1.3. Database Forwarding Agent


The Database Forwarding agent inserts UDR data into a database table, based on user defined mappings
between UDR fields and database table columns. Also, the agent offers the possibility of populating
columns with other types of data. The data is inserted either using a plain SQL statement, or through
a call to a stored procedure that will be responsible for inserting the data.

The agent does not only map and forward the data. A special column in the target table is also assigned
a unique Transaction ID, generated for each batch. In relation to this, a pending transaction table is
utilized to indicate that a batch is open. The Database Collection agent also utilizes this table to prevent
problems if collecting data from the target table.

14.1.3.1. Configuration
The Database Forwarding agent configuration window is displayed when the database agent in a
workflow is double-clicked or right-clicked, selecting Configuration....


14.1.3.1.1. Target Tab

Figure 469. Database Forwarding agent configuration window, Target tab.

The Target tab contains configurations related to the target database table and the UDR Type that will
populate it with data.

UDR Type               Type of UDR to populate the target database table.

Database               Profile defining the database that the agent is supposed to connect
                       and forward data to. The list is populated each time the configuration
                       window is opened. For further information about database profile
                       setup, see Section 9.3, “Database Profile”.

                       Select Refresh if changes have been made in the customer database, to
                       update the presented information in the Target tab.

                       The Database Forwarding agent does not support Fast Connection
                       Failover (FCF) used when using an Oracle RAC enabled database for
                       the database agent.

Use Default            Check this to use the default database schema for the chosen database
Database Schema        and user.

                       This is not applicable for all database types. Use Default Database
                       Schema is available for selection only when accessing Oracle
                       databases.

                       Tables within the default schema will be listed without schema
                       prefix.

Access Type            Determines if the insertion of data is to be performed directly into
                       the target table, or via a stored procedure.

                       • Direct - Insertion of data is performed directly.

                       • Stored Procedure - Insertion of data is performed via a stored
                         procedure.

Table Name or          Depending on the selected Access Type, the target database table
SP Name                name, or the stored procedure name, is selected. The list is populated
                       each time a new Database or Access Type is selected.

                       For further information and an example of a working table, see
                       Section 14.1.4.3.1, “Working Table”. For further information about
                       the stored procedure, see Section 14.1.4.3.3, “Database Forwarding
                       Target Stored Procedure”.

Transaction ID         Name of the column in the selected table, or the parameter from the
column                 selected stored procedure, which is utilized for the Transaction ID.
                       The list is populated each time a Table Name or SP Name is selected.

                       The column must be of the data type number, with at least twelve
                       digits.

Cleanup SP             If the selected Access Type is Stored Procedure, the agent does not
                       automatically clean up the target table, in case of a workflow
                       abortion (Cancel Batch). If that is the case, the customer must supply
                       a stored procedure that manages the clean up. The list is populated
                       each time a new Database is selected.

                       For further information and an example of a Cleanup Stored Procedure,
                       see Section 14.1.4.3.4, “Cleanup Stored Procedure”.

SP Target Table        Name of the target table for the stored procedure. This field is only
                       enabled if the Access Type is Stored Procedure. The list is populated
                       each time a new Database is selected.

                       If this agent is chained with a Database collection agent in another
                       workflow, both agents need to be aware of the mutual table. In the
                       collection agent, a table to collect from is always selected. However,
                       in the forwarding agent, it is possible to select the update of the
                       table to be done via a stored procedure. If that is the case, the
                       target table for the stored procedure must be selected here. For
                       further information, see Section 14.1.4.1.1, “Pending Transaction
                       Table”.

                       The correct name of the SP Target Table must be selected, or else a
                       Database collection agent will be able to collect pending data that
                       is not supposed to be collected. This may cause data duplication.

Run SP                 If enabled, this option causes a user defined stored procedure to be
                       called when the forwarding process terminates. It will then receive
                       the transaction ID for the forwarded rows as input.

                       This option is used for transaction safety when the table is read
                       from another system, to ensure no temporary rows are read. Rows are
                       classified as temporary until End Batch is reached. In case of a
                       crash before End Batch is reached, the workflow needs to be restarted
                       for the temporary rows to be expunged.

MediationZone® specific database tables from the Platform database must never be utilized as
targets for output. This may cause severe damage to the system in terms of data corruption that
in turn may make the system unusable.

14.1.3.1.2. Assignment Tab

Figure 470. Database Forwarding agent configuration window, Assignment tab.


The Assignment tab contains the assignment of values to each column or stored procedure parameter.
The content and use of this tab is described further in Section 14.1.4.2.1, “Assignments”.

The Column Name column does not necessarily contain column names. If Stored Procedure is selected
as the Access Type, this column will hold the names of all incoming parameters that the stored procedure
expects.

If the Target tab is correctly configured and the Assignment tab is selected, the table will automatically
be populated, as if Refresh was clicked. If assignments already exist in the Assignment tab, then
Refresh must be manually selected, for the assignments to be updated with the configurations in the
Target tab.

Potential changes in the database table will not be visible until the Refresh for the database, in
the Target tab, has been selected.

All Value Types, described in Section 14.1.4.2.2, “Value Types”, except for To UDR, are available
for selection.

When using Function as Value Type, it is not allowed to use question marks embedded in
strings. MediationZone® will interpret a question mark as a parameter.

14.1.3.1.3. Advanced Tab

Figure 471. Database Forwarding agent configuration window, Advanced tab.

The Advanced tab contains a setting for performance tuning and viewing the generated SQL statement,
based on the configuration in the Target and Assignment tabs.

Commit Window Size      The number of UDRs (rows) to be inserted or removed between each
                        database commit command. This value may be used to tune the
                        performance. If tables are small and contain no Binary Objects, the
                        value may be set to a higher value than the default. Default is
                        1000. The window size can be set to any value between 1-60000,
                        where setting 1 means that commit is performed after 1 UDR, and
                        setting 60000 means that commit is performed after 60000 UDRs.

                        Rows are inserted for each UDR that is fed to the agent. All UDRs
                        are stored in memory between each database commit command, to
                        enable rollback. Rows are removed at the next workflow startup in
                        case of a crash recovery.

General SQL Statement   In this window, the SQL statement that will be used to populate the
                        database, is shown. This field may not be edited, however, it is
                        useful for debug purposes or for pure interest.

                        In order for the statement to appear, the Target and Assignment
                        tabs have to be properly configured, or else information about the
                        first detected missing or erroneous setting is displayed.

14.1.3.2. Transaction Behavior


The agent utilizes a Transaction ID, unique for each batch, in two ways.

A. To make sure that inserted (distributed) rows are removed in case the batch is cancelled. This is
to avoid duplicated rows. To handle this, the agent inserts its batch Transaction ID in the assigned
Transaction ID column. If the batch is cancelled, all rows matching the batch Transaction ID will
be removed again.

If a stored procedure is used to populate the table, the configured Cleanup SP must be able to do
the same, or something similar, to avoid duplicates. For further information and an example of a
cleanup stored procedure, see Section 14.1.4.3.4, “Cleanup Stored Procedure”.

B. To make sure that a potential Database Collection agent does not collect rows from the target table,
before the current batch is closed. To handle this, the agent populates a pending transaction table
with the current Transaction ID, database and table name in the beginning of the batch and removes
the entry in the end of the batch. For a detailed description of this behavior, see Section 14.1.4.1,
“Inter-Workflow Communication, Using Database Agents”.

14.1.3.2.1. Emits

None.

14.1.3.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command          Description

Begin Batch      Retrieves a Transaction ID and inserts an entry in the pending transaction
                 table.

End Batch        Deletes the pending Transaction ID row.

Cancel Batch     Removes the distributed rows with the current Transaction ID or calls the
                 configured Cleanup SP. The pending Transaction ID row is deleted.

14.1.3.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent consumes the selected UDR type.

14.1.3.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.1.3.4.1. Publishes

MIM Parameter      Description

Database           This MIM parameter contains the name of the database the agent is
                   distributing to.

                   Database is of the string type and is defined as a global MIM context
                   type.

Table              This MIM parameter contains the name of the working table or the stored
                   procedure the agent is distributing to.

                   Table is of the string type and is defined as a global MIM context type.

14.1.3.4.2. Accesses

Various resources, if MIM parameter or function assignments are made in the Assignment tab.

14.1.3.5. Agent Message Event


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with table/SP: tablename

Reported when a stored procedure starts running, if Run SP is selected in the Database Forwarding
Target tab.

14.1.3.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Rollback transaction data

Reported when the agent receives a Cancel Batch or when recovering after system abortion.

• Has deleted n rows in tablename

Subevent to the Rollback transaction data event, stating the number of rows removed
by each SQL commit command. The maximum number depends on the Commit Window Size.

• Try to force running Stored Procedure to Stop

Reported when stopping a workflow that has a Stored Procedure running.

14.1.4. General
14.1.4.1. Inter-Workflow Communication, Using Database Agents
Data may propagate between workflows or MediationZone® systems by combining a Database for-
warding agent with a Database collection agent, where the exchange point is a mutual database table.


When using the same table, the collection agent must make sure that it does not collect data that the
forwarding agent is simultaneously inserting as part of its current batch.

Transfer of UDRs between workflows is ideally handled using the Inter Workflow agents. The
Database agent approach is useful when you want to change the content of the UDRs. Another
use is when you want to pass on MIM values and merge batches at the same time. In the Inter
Workflow agent case, only the MIM values for the first (Header MIMs) and last batch (Trailer
and Batch MIMs) are considered. Using the Database agents, MIM values may be mapped into
database columns.

14.1.4.1.1. Pending Transaction Table

The MediationZone® database hosts a table where pending transactions are registered. A pending
transaction is an ongoing population of a table by a Database Forwarding agent. The pending transaction
continues from a Begin Batch to an End Batch. The purpose of this table is for Database Collection
agents to avoid collecting pending data from the table that a Database Forwarding agent is currently
distributing to.

The pending transaction table holds database names and table names. Thus, before a collection session
starts, the collector evaluates if there are any pending Transaction IDs registered for the source database
and table. If there are, rows matching the Transaction IDs will be excluded.

In the following figure, the Database Collection agent will exclude all rows with transaction ID 187.

A Database Forwarding agent may be configured to target a stored procedure, instead of a table directly.
In such cases the user must specifically select the table that the stored procedure will populate (SP
Target Table). The reason for that is that the pending transaction table must contain the table name,
not the SP name, so that the selected table name in the Database Collection agent can be matched.

14.1.4.1.2. Exchanging Storable Data

All MediationZone® UDRs have a special field named Storable. This field contains the complete
UDR description and all its data. If UDRs having many fields or a complex structure are to be
exchanged, it could be suitable to store the content of the Storable field in the database. In that
way, the table would only need one column. The database type of that column must be a RAW, LONG RAW or a
BLOB.

The data capacity of the column types RAW, LONG RAW and BLOB differs. Consult the
database documentation. For performance reasons it is advised to use the smallest type possible
that fits the UDR content.

When configuring the Database Forwarding agent, the Storable field from the UDR is assigned to
the table column in a straightforward fashion. However, when collecting that type of data the column
assignment must not be made to the Storable field. Instead To UDR is selected in the Value Type
field.


When the Database Collection agent detects a mapping of type To UDR, the selected UDR type is not
consulted for what UDR type to create. The information about the UDR type will be found in the data
of the column itself. Thus, if the UDR stored in the column is of another type than selected in the
Source tab, the type to be distributed by the Database Collection agent is the type actually found.

14.1.4.2. Configuration
14.1.4.2.1. Assignments

The Database agents are designed to either collect data from a database column and assign it to a UDR
field, or vice versa. In their configuration they share the Assignment tab, where these mappings are
configured. Due to the resemblance this configuration is described here.

Figure 472. Database Collection/Forwarding agents configuration window - Assignment tab.

Refresh         Updates the table with all the columns or parameters from the selected table
                or stored procedure (Database Forwarding agent, only).

                Potential changes in the database table will not be visible until Refresh for
                the database in the Source tab, has been selected.

                If rows already exist in the table, the refresh operation will preserve the
                configuration for all rows with a corresponding column or parameter name.
                Thus, if a table has been extended with a new column, the old column
                configurations will be left untouched and the new column will appear when
                Refresh is selected.

                The value type on each new column that appears in the table is automatically
                set to UDR Field.

                Auto assignment:

                All rows with no value assigned and with a value type of UDR Field will be
                targeted for auto assignment in the end of the refresh process. If the
                selected UDR type contains a field whose name matches the column name, the
                field will be automatically assigned in the Value column. Matching is not
                case-sensitive and is done after stripping both the column and field names
                from any characters, except a-z and 0-9.

Column Name     Displays a list of all columns or stored procedure parameters (Database
                Forwarding agent, only) for the selected table or stored procedure, except
                the Transaction ID column.

Column Type     Displays the data type for each column as declared in the database table. If
                the column does not accept NULL this is displayed as: (NOT NULL).

                Note! If using Oracle and assigning a value of type bigint, the column type
                VARCHAR should be used. Setting a full range of the bigint value type could
                otherwise lead to a wrong value being inserted, due to a limitation in the
                JDBC interface.

Value Type      Allows the user to select what type of value to be assigned to the column, or
                vice versa. For further information, see Section 14.1.4.2.2, “Value Types”.

Value           The value to be assigned to the column, or vice versa. The technique of
                selecting a value depends on the selected Value Type.

                Note! It is important that the data type of the selected value corresponds to
                the data type of the column. Most incompatibilities will automatically be
                detected, however, there are situations where validation is not possible.

14.1.4.2.2. Value Types

The Database agents offer a number of value types that may be assigned to a column, or vice versa.
Depending on the agent, not all value types are applicable; those that are not will not be available in the
list.

UDR Field: If selected, a UDR browser is launched when the corresponding Value cell is selected.
When a UDR field has been selected in the browser it appears in the Value cell.

To save the user from launching the UDR browser for every cell to be assigned,
the browser window may be kept on display. When a UDR field is selected and
Apply is clicked, or if a UDR field is double-clicked, the field is entered into the
Value cell of the selected row, provided that this row has a value type of UDR Field.

The same rule applies when OK is selected in the browser, however the browser
is then dismissed. It is possible to change target (Value cell) by selecting the desired
row in the Assignment tab of the configuration window, while still keeping the
UDR browser window open.

Whether the data types of the selected UDR field and the database column are compatible
is validated when the configuration dialog is confirmed.

MIM Entry: Only applicable for the Database Forwarding agent.

If selected, a MIM browser is launched when the corresponding Value cell is clicked.
When a MIM resource has been selected in the browser it appears in the Value cell.
The previous note for the UDR Field applies to this browser as well.

Whether the data types of the selected entry and the database column are compatible
is validated when the configuration window is confirmed.

Constant: Only applicable for the Database Forwarding agent.

If selected, a text entry field is available in the Value cell where any constant to be
assigned to the column may be entered. The agent automatically appends any quotes
needed in the SQL statement, based on the data type of the column.

Function: Only applicable for the Database Forwarding agent.


If selected, a text entry field is available in the Value cell where any database-related
function to be called may be entered. If the function takes parameters, these must be
marked with question marks. Selecting a cell containing question marks displays the
Function Editor window, where each question mark is represented by a row.

Figure 473. Function Editor window.

The selection of parameter values follows the same procedure as the assignment of
column values, however Constant, UDR Field and MIM Entry are the only available
value types.

If constants are entered in the Function Editor they must be quoted correctly, since
the agent has no way of knowing what data types they must have.

NULL: If selected, no value may be entered. In Database Collection agents, NULL must be
selected for all columns whose values are not mapped into a UDR Field. In Database
Forwarding agents, NULL must be selected for columns populated with a NULL value or
columns that, when inserted, will be populated by internal database triggers.

From UDR: Only applicable for the Database Forwarding agent.

Select this value type if a complete UDR is to be stored in a binary column, to later be collected
by a Database Collection agent. The Database Forwarding agent must populate the column
from the special field Storable, available in all UDR types. This is only applicable for
column types RAW, LONG RAW and BLOB.

To UDR: Only applicable for the Database Collection agent.

Select this value type if a complete UDR has been stored in a binary column by a
Database Forwarding agent and that UDR is to be recollected by the Database Collection
agent. The Database Forwarding agent must have populated the column from the
special field Storable, available in all UDR types. If this value type is selected no other
assignments are allowed; if other columns exist, their value types must be set to NULL.
An evaluation is carried out to ensure that the column type is actually RAW, LONG RAW or BLOB.

14.1.4.3. Tables and Stored Procedures


14.1.4.3.1. Working Table

The following list holds information to be taken into consideration when creating the database table that
a Database Collection agent collects from, or that a Database Forwarding agent distributes to.

• The table must have a Transaction ID column, dedicated to the Database agent's internal use. The
column may be named arbitrarily, however it must be numeric with at least twelve digits and must
not allow NULL.

• Reading from or writing to columns defined as BLOB will have a negative impact on performance
for both Database agents.

• It has proven inefficient to put an index on the Transaction ID column.


• Entries with Transaction ID column set to -1 (Mark as Collected) or -2 (Cancel Batch) must be
attended to manually at regular intervals.

The following example shows a working table with a Transaction ID column named txn_id.

Example 115.

CREATE TABLE my_tab (
    txn_id   NUMBER(12)   NOT NULL,
    a_num    VARCHAR2(25) NOT NULL,
    b_num    VARCHAR2(25) NOT NULL,
    duration NUMBER(10)
);
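
Rows whose Transaction ID has been set to -1 (Mark as Collected) or -2 (Cancel Batch) must, as noted above, be attended to manually at regular intervals. As a hedged sketch only (the exact retention policy is site-specific), such housekeeping on the table above could be as simple as:

-- Periodic manual housekeeping of rows left behind by the Database agents.
DELETE FROM my_tab WHERE txn_id IN (-1, -2);
COMMIT;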

14.1.4.3.2. After Collection Stored Procedure

If a Database Collection agent has been configured to call a stored procedure after collection, it will
be called when each batch has been successfully collected and inserted into the workflow.

The procedure is expected to take one (1) parameter. The parameter must be declared as a NUMBER
and the agent will assign the current Transaction ID to the parameter. The procedure must ensure that
the rows with the supplied transaction ID are removed from the table, or their Transaction ID column
is set to -1.

The following example shows such a procedure that moves the rows to another table.

Example 116.

CREATE OR REPLACE PROCEDURE my_move_sp
    (txn IN NUMBER) IS
BEGIN
    -- copy collected rows to another table
    INSERT INTO my_collected_data_tab (txn_id, a_num, b_num, duration)
        SELECT txn_id, a_num, b_num, duration FROM my_tab
        WHERE txn_id = txn;
    -- now delete the rows
    DELETE FROM my_tab WHERE txn_id = txn;
END;

It is recommended that a stored procedure like the one described above use an internal cursor with
several commits, in order not to overflow the rollback segments.
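
As a hedged sketch of that recommendation (reusing the table and procedure names from Example 116; the chunked delete with a 5000-row limit is an assumption modeled on Example 119), the delete step can be performed in limited chunks with a commit after each chunk:

CREATE OR REPLACE PROCEDURE my_move_sp
    (txn IN NUMBER) IS
BEGIN
    -- copy collected rows to another table
    INSERT INTO my_collected_data_tab (txn_id, a_num, b_num, duration)
        SELECT txn_id, a_num, b_num, duration FROM my_tab
        WHERE txn_id = txn;
    LOOP
        -- delete at most 5000 rows per round, committing in between
        -- to spare the rollback segments
        DELETE FROM my_tab WHERE txn_id = txn AND ROWNUM <= 5000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
    END LOOP;
    COMMIT;
END;
/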

14.1.4.3.3. Database Forwarding Target Stored Procedure

If a Database Forwarding agent has been configured to use a stored procedure as the Access Type the
agent will call this procedure for each UDR that is to be distributed. The stored procedure must be
defined to take the parameters needed, often including a parameter for the Transaction ID. In the dialog
these parameters are assigned their values. When the procedure is called, the agent will populate each
parameter with the assigned value.

The following example shows a stored procedure that selects the number of calls made by the
a_number subscriber from another table, calls_tab, and uses that value to populate the target
table.


Example 117.

CREATE OR REPLACE PROCEDURE my_insert_sp
    (a_num IN CHAR, b_num IN CHAR, txn IN NUMBER) IS
BEGIN
    DECLARE
        cnt_calls NUMERIC(5);
    BEGIN
        SELECT COUNT(*) INTO cnt_calls FROM calls_tab
        WHERE anumber = a_num;

        INSERT INTO my_tab (from_num, to_num, txn_id, num_calls)
        VALUES (a_num, b_num, txn, cnt_calls);
    END;
END;
/

14.1.4.3.4. Cleanup Stored Procedure

If a Database Forwarding agent uses a stored procedure to populate the target table, a cleanup stored
procedure must be defined, that will remove all inserted entries in case of a Cancel Batch in the
workflow. The procedure is expected to take one parameter. The parameter must be declared as a
NUMBER and the agent will assign the current Transaction ID to the parameter.

The following example shows such a procedure that removes all the entries with the current Transaction
ID.

Example 118.

CREATE OR REPLACE PROCEDURE my_clean_sp
    (txn IN NUMBER) IS
BEGIN
    DELETE FROM my_tab WHERE txn_id = txn;
END;
/

14.1.4.3.5. After Forwarding Stored Procedure

The following example shows a stored procedure that marks the row as safe to read by another system.


Example 119.

CREATE TABLE billing_data (
    customer_id        varchar2(100) NULL,
    number_of_calls    number(5)     NULL,
    money_to_pay       number(9)     NULL,
    txn_id             number(12)    NULL,
    txn_safe_indicator varchar2(10)  DEFAULT 'UNSAFE' NOT NULL
);

CREATE OR REPLACE PROCEDURE mark_billing_data_as_safe
    (txn IN number) IS
BEGIN
    LOOP
        -- update 5000 rows at a time to spare the rollback segments
        UPDATE billing_data SET txn_safe_indicator = 'SAFE'
        WHERE txn_id = txn AND rownum <= 5000;

        EXIT WHEN SQL%ROWCOUNT < 5000;
        COMMIT;
    END LOOP;
    COMMIT;
END;

The billing system must avoid reading rows that contain 'UNSAFE' in the txn_safe_indicator
column, to ensure that no data is read that could be rolled back later on.
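
For instance, a downstream system could restrict its reads to committed data as follows (a hedged example using only the columns from the table above):

-- Read only rows that the forwarding workflow has marked as safe.
SELECT customer_id, number_of_calls, money_to_pay
FROM   billing_data
WHERE  txn_safe_indicator = 'SAFE';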

14.1.4.4. SQL Server Considerations


When distributing data to an SQL Server table, the following must be taken into consideration:

• The table must have at least one UNIQUE column.

• The column reserved for the Transaction ID must be of type bigint.

• Columns of type UNIQUEIDENTIFIER must be set with a function. Hence, map them to NULL in the
agent.

• Columns with an IDENTITY set must be mapped to NULL in the agent.

For MS SQL, the column type timestamp is not supported in tables accessed by MZ. Use
column type datetime instead.
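
A hedged sketch of a table that follows these considerations (all names are hypothetical and not prescribed by the agents):

-- Hypothetical SQL Server working table.
-- record_id provides the required UNIQUE column; being an IDENTITY column,
-- it must be mapped to NULL in the agent. txn_id is the Transaction ID column
-- and is therefore of type bigint. datetime is used instead of timestamp.
CREATE TABLE my_tab (
    record_id  bigint IDENTITY(1,1) PRIMARY KEY,
    txn_id     bigint NOT NULL,
    a_num      varchar(25) NOT NULL,
    created_at datetime NULL
);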

See the System Administration Guide for information about time zone settings.

14.2. Disk Agents


14.2.1. Introduction
This section describes the Disk Collection and Disk Forwarding agents. These agents are standard agents
and are available on the DigitalRoute® MediationZone® Platform.


14.2.1.1. Prerequisites
The reader of this information must be familiar with:

• The MediationZone® Platform

14.2.2. Disk Collection Agent


The Disk Collection agent collects files from a local file system and inserts them into a MediationZone®
workflow. Initially, the source directory is scanned for all files matching the current filter. In addition,
the Filename Sequence and Sort Order services may be utilized to further manage the matching of
files, although they cannot be used at the same time, since that will cause the workflow to abort. All
files found will be fed one after the other into the workflow.

When a file has been successfully processed by the workflow, the agent offers the possibility of moving,
renaming, removing or ignoring the original file. The agent can also be configured to keep files for a
set number of days. In addition, the agent offers the possibility of decompressing compressed (gzip)
files after they have been collected. When all the files are successfully processed, the agent stops to
await the next activation, whether it is scheduled or manually initiated.

14.2.2.1. Configuration
The Disk Collection agent configuration window is displayed when the agent in a workflow is double-clicked,
or right-clicked and Configuration... is selected. Part of the configuration may be done in the
Filename Sequence or Sort Order service tab described in Section 4.1.6.2.2, “Filename Sequence
Tab” and Section 4.1.6.2.3, “Sort Order Tab”.

14.2.2.1.1. Disk Tab

The Disk tab contains configurations related to the placement and handling of the source files to be
collected by the agent.


Figure 474. Disk Collection Agent configuration, Disk tab.

Collection Strategy: If there is more than one collection strategy available in the system, a Collection Strategy
drop-down list will also be visible. For more information about the nature of the collection
strategy, please refer to Section 15, “Appendix VII - Collection Strategies”.

Directory: Absolute pathname of the source directory on the local file system, where the source
files reside. The pathname may also be given relative to the $MZ_HOME environment
variable.

Note! Even if a relative path is defined, for example input, the value of the MIM
parameter Source Pathname (see Section 14.2.2.4.1, “Publishes”) will include
the whole absolute path; /$MZHOME/input.

Filename: Name of the source files on the local file system. Regular expressions according to Java
syntax apply. For further information, see:

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Example 120.

To match all filenames beginning with TTFILE, type: TTFILE.*

Compression: Compression type of the source files. Determines if the agent will decompress the files
before passing them on in the workflow.

• No Compression - the agent does not decompress the files. Default setting.

• Gzip - the agent decompresses the files using gzip.


Move to Temporary Directory: If enabled, the source files will be moved to the automatically created subdirectory
DR_TMP_DIR in the source directory, prior to collection. This option supports safe
collection of source files that reuse the same name.

Append Suffix to Filename: Enter the suffix that you want added to the file name prior to collecting it.

Important! Before you execute your workflow, make sure that none of the file
names in the collection directory include this suffix.

Inactive Source Warning (hours): If the specified value is greater than zero, and no file has been collected during the
specified number of hours, the following message is logged:

The source has been idle for more than <n> hours, the last
inserted file is <file>.

Move to: If enabled, the source files will be moved from the source directory (or from the directory
DR_TMP_DIR, if using Move Before Collecting) to the directory specified in the Destination
field, after the collection.

If the Prefix or Suffix fields are set, the file will be renamed as well.

It is possible to move collected files from one file system to another, however this has a
negative impact on performance. Also, the workflow will not be transaction safe, because
of the nature of the copy plus delete functionality.

If it is desired to move files between file systems, it is strongly recommended to
route the Disk Collection agent directly to a Disk Forwarding agent, configuring
the output agent to store the files in the desired directory. Refer to Section 14.2.3,
“Disk Forwarding Agent” for information.

This is because of the following reasons:

• It is not always possible to move collected files from one file system to another.

• Moving files between different file systems usually causes worse performance
than keeping them on the same file system.

• The workflow will not be transaction safe, because of the nature of the copy
plus delete functionality.

Rename: If enabled, the source files will be renamed after the collection, remaining in the source
directory from which they were collected (or moved back from the directory
DR_TMP_DIR, if using Move Before Collecting).

Remove: If enabled, the source files will be removed from the source directory (or from the directory
DR_TMP_DIR, if using Move Before Collecting), after the collection.

Ignore: If enabled, the source files will remain in the source directory after collection.

Destination: Absolute pathname of the directory on the local file system of the EC into which the
source files will be moved after collection. The pathname may also be given relative
to the $MZ_HOME environment variable.

This field is only enabled if Move to is selected.

Prefix/Suffix: Prefix and/or suffix that will be appended to the beginning and the end, respectively, of the
name of the source files, after the collection.


These fields are only enabled if Move to or Rename is selected.

If Rename is enabled, the source files will be renamed in the current directory
(source or DR_TMP_DIR). Be sure not to assign a Prefix or Suffix giving files
new names that still match the filename regular expression, or else the files will
be collected over and over again.

Search and Replace: To apply Search and Replace, select either Move to or Rename.

• Search: Enter the part of the filename that you want to replace.

• Replace: Enter the replacement text.

Search and Replace operate on your entries in a way that is similar to the Unix sed
utility. The identified filenames are modified and forwarded to the following agent in
the workflow.

This functionality also enables you to perform advanced filename modifications:

• Use a regular expression in the Search entry to specify the part of the filename that you
want to extract.

A regular expression that fails to match the original file name will abort the
workflow.

• Enter Replace with characters and meta characters that define the pattern and content
of the replacement text.

Example 121. Search and Replace Examples

To rename the file file1.new to file1.old, use:

• Search: .new

• Replace: .old

To rename the file JAN2011_file to file_DONE, use:

• Search: ([A-Z]*[0-9]*)_([a-z]*)

• Replace: $2_DONE

Note that the search value divides the file name into two parts by using parentheses.
The replace value applies the second part by using the placeholder $2.

Keep (days): Number of days to keep source files after the collection. In order to delete the source
files, the workflow has to be executed (scheduled or manually) again, after the configured
number of days.

Note that a date tag is added to the filename, determining when the file may be removed.
This field is only enabled if Move to or Rename is selected.


Route FileReferenceUDR: Select this check box if you want to forward the data to an SQL Loader agent. See the
description of the SQL Loader agent for further information.

14.2.2.2. Transaction Behavior


The transaction behavior for the Disk collection agent is presented here. For more information about
general MediationZone® transaction behavior please refer to Section 4.1.11.8, “Transactions”.

14.2.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Emitted before the first part of each collected file is fed into a workflow.
End Batch Emitted after the last part of each collected file has been fed into the system.

14.2.2.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior defined on workflow level is configured to abort
the workflow, the agent will never receive the last Cancel Batch message. In
this situation ECS will not be involved, and the file will not be moved, but left
at its current place.

Hint End Batch If a Hint End Batch message is received, the collector splits the batch at the end of
the current block processed (32 kB), provided that no UDR is split. If the block end
occurs within a UDR, the batch will be split at the end of the preceding UDR.

After a batch split, the collector emits an End Batch message, followed by a Begin
Batch message (provided that there is data in the subsequent block).

14.2.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

14.2.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.2.2.4.1. Publishes

MIM Value Description


File Modified This MIM parameter contains a timestamp, indicating when the file is stored
Timestamp in the collection directory.


File Modified Timestamp is of the date type and is defined as a


header MIM context type.
File Retrieval This MIM parameter contains a timestamp, indicating when the file processing
Timestamp starts.

File Retrieval Timestamp is of the date type and is defined as a


header MIM context type.
Source File Size This MIM parameter contains the file size, in bytes, of the source file.

Source File Size is of the long type and is defined as a header MIM
context type.
Source Filename This MIM parameter contains the name of the currently processed file, as defined
at the source.

Source Filename is of the string type and is defined as a header MIM


context type.
Source Filenames This MIM parameter contains a list of file names of the files that are about to
be collected from the current collection directory.

When the agent collects from multiple directories, the MIM value is
cleared after collection of each directory. Then, the MIM value is updated
with the listing of the next directory.

Source Filenames is of the list<any> type and is defined as a Header


MIM context type.
Source File Count This MIM parameter contains the number of files, available to this instance for
collection at startup. The value is constant throughout the execution of the
workflow, even if more files arrive during the execution. The new files will not
be collected until the next execution.

Source File Count is of the long type and is defined as a global MIM
context type.
Source Pathname This MIM parameter contains the path to the directory where the file currently
under processing is located.

Source Pathname is of the string type and is defined as a global MIM


context type. The path is defined in the Disk tab.

Note! Even if a relative path was defined when configuring the Disk
Collection agent (see Section 14.2.2.1.1, “Disk Tab”), for example,
input, the value of this parameter will include the whole absolute path;
/$MZHOME/input.

Source Files Left This parameter contains the number of source files that are yet to be collected.
This is the number that appears in the Execution Manager backlog.

Source Files Left is of the long type and is defined as a header MIM
context type.

14.2.2.4.2. Accesses

The agent does not itself access any MIM resources.


14.2.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type please refer to Section 5.5.14, “Agent
Event”.

• Ready with file: filename

Reported along with the name of the source file that has been collected and inserted into the workflow.

• File cancelled: filename

Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; refer to Section 14.2.2.2, “Transaction Behavior” for
further information.

14.2.2.6. Debug Events


There are no debug events for this agent.

14.2.3. Disk Forwarding Agent


The Disk Forwarding agent creates files on the local file system containing the received data. Files
are created when a Begin Batch message is received, and closed when an End Batch message is received.
In addition, the Filename Template service offers the possibility to compress (gzip) the files, or to
further process them, using commands.

To ensure that downstream systems will not use the files until they are closed, they are stored in a
temporary directory until the End Batch message is received. This behavior also applies to Cancel
Batch messages. If a Cancel Batch is received, file creation is cancelled.

14.2.3.1. Configuration
The Disk Forwarding agent configuration window is displayed when the agent in a workflow is double-clicked,
or right-clicked and Configuration... is selected.

14.2.3.1.1. Disk Tab

Figure 475. Disk Forwarding agent configuration window, Disk tab.


Input Type: The agent can act on two input types. Depending on which one the agent is configured
to work with, the behavior will differ.

The default input type is bytearray, that is, the agent expects bytearrays. If nothing
else is stated, the documentation refers to input of bytearray.

If the input type is MultiForwardingUDR, the behavior is different. For further
information about the agent's behavior with MultiForwardingUDR input, refer to
Section 14.2.3.1.3, “MultiForwardingUDR Input”.

Directory: Absolute pathname of the target directory on the local file system of the EC, where
the forwarded files will be stored.

The files will be temporarily stored in the automatically created subdirectory
DR_TMP_DIR, in the target directory. When an End Batch message is received, the
files are moved from the subdirectory to the target directory.

Create Directory: Check to create the directory, or the directory structure, of the path that you specify
in Directory.

The directories are created when the workflow is executed.

Compression: Compression type of the target files. Determines if the agent will compress the files
or not.

• No Compression - the agent does not compress the files. Default setting.

• Gzip - the agent compresses the files using gzip.

No extra extension will be appended to the target filenames, even if compression
is selected. The configuration of the filenames is managed in the Filename
Template tab only.

Command: If a Command is supplied, it will be executed on each successfully closed temporary
file, using the parameter values declared in Arguments. Please refer to Section 1.3,
“Commands” for further information.

At this point the temporary file is created and closed, however the final filename
has not yet been created.

The entered Command has to exist in the MediationZone® execution environment,
either including an absolute path, or to be found in the PATH of the
execution environment.

Arguments: This field is optional. Each entered parameter value has to be separated from the
preceding value with a space.

The temporary filename is inserted as the second last parameter, and the final filename
is inserted as the last parameter, automatically. This means that if, for instance, no
parameter is given in the field, the arguments will be as follows:

$1=<temporary_filename> $2=<final_filename>

If three parameters are given in the Arguments field, the arguments are set as:


$1=<parameter_value_#1>
$2=<parameter_value_#2>
$3=<parameter_value_#3>
$4=<temporary_filename>
$5=<final_filename>

Produce Empty Files: If you require the creation of empty files, check this setting.

14.2.3.1.2. Filename Template Tab

The names of the created files are determined by the settings in the Filename Template tab.

For a detailed description of the Filename Template tab, see Section 4.1.6.2.4, “Filename Template
Tab”.

14.2.3.1.3. MultiForwardingUDR Input

When the agent is set to use MultiForwardingUDR input, it accepts input of the UDR type MultiFor-
wardingUDR declared in the package FNT. The declaration follows:

internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};

Every received MultiForwardingUDR ends up in its filename-appropriate file. The output filename
and path is specified by the fntSpecification field. When the files are received they are written
to temp files in the DR_TMP_DIR directory situated in the root output folder. The files are moved to
their final destination when an end batch message is received. A runtime error will occur if any of the
fields has a null value or the path is invalid on the target file system.

A UDR of the type MultiForwardingUDR whose target filename is not identical to that of its predecessor is
saved in a new output file.

Once a target filename that is not identical to its predecessor has been used, you cannot use the first
filename again. For example: saving filename B after saving filename A prevents you from using
A again. Instead, you should first save all the A filenames, then all the B filenames, and so forth.

Non-existing directories will be created if the Create Non-Existing Directories check box under the
Filename Template tab is checked. If it is not checked, a runtime error will occur if the FNTUDR of an
incoming MultiForwardingUDR refers to a directory that does not already exist. Every configuration option
referring to bytearray input is ignored when MultiForwardingUDRs are expected.


Example 122.

This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of type MultiForwardingUDRs.

import ultra.FNT;

MultiForwardingUDR createMultiForwardingUDR
    (string dir, string file, bytearray fileContent) {

    //Create the FNTUDR
    FNTUDR fntudr = udrCreate(FNTUDR);
    fntAddString(fntudr, dir);
    fntAddDirDelimiter(fntudr); //Add a directory
    fntAddString(fntudr, file); //Add a file

    MultiForwardingUDR multiForwardingUDR =
        udrCreate(MultiForwardingUDR);
    multiForwardingUDR.fntSpecification = fntudr;
    multiForwardingUDR.content = fileContent;

    return multiForwardingUDR;
}

consume {

    bytearray file1Content;
    strToBA (file1Content, "file nr 1 content");

    bytearray file2Content;
    strToBA (file2Content, "file nr 2 content");

    //Send MultiForwardingUDRs to the forwarding agent
    udrRoute(createMultiForwardingUDR
        ("dir1", "file1", file1Content));
    udrRoute(createMultiForwardingUDR
        ("dir2", "file2", file2Content));
}

The Analysis agent in the example above sends two MultiForwardingUDRs
to the forwarding agent. Two files with different contents will be placed in two separate sub-
folders in the root directory. The Create Non-Existing Directories check box under the Filename
Template tab in the configuration of the forwarding agent must be checked if the directories do
not already exist.

14.2.3.2. Transaction Behavior


The transaction behavior for the Disk forwarding agent is presented here. For more information about
general MediationZone® transaction behavior please refer to Section 4.1.11.8, “Transactions”.

14.2.3.2.1. Emits

None.


14.2.3.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Begin Batch When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if not already created. Then a target file is created
and opened in the temporary directory.
End Batch When an End Batch message is received, the target file in DR_TMP_DIR is first closed
and then the Command, if specified in After Treatment, is executed. Finally, the file
is moved from the temporary directory to the target directory.
Cancel Batch If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.

14.2.3.3. Introspection
The agent consumes bytearray or MultiForwardingUDR types.

14.2.3.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.2.3.4.1. Publishes

MIM Value Description


MultiForwardingUDR's This MIM parameter is only set when the agent expects input of MultiFor-
FNTUDR wardingUDR type. The MIM value is a string representing the sub path from
the output root directory on the target file system. The path is specified by
the fntSpecification field of the last received MultiForwardingUDR.
For further information on using input of MultiForwardingUDR type, refer
to Section 14.2.3.1.3, “MultiForwardingUDR Input”.

This parameter is of the string type and is defined as a batch MIM context
type.
File Transfer This MIM parameter contains a timestamp, indicating when the target file
Timestamp was created in the temporary directory.

File Transfer Timestamp is of the date type and is defined as a


trailer MIM context type.
Target Filename This MIM parameter contains the name of the target filename, as defined in
Filename Template.

Target Filename is of the string type and is defined as a trailer MIM


context type.
Target Pathname This MIM parameter contains the path to the output directory, as defined in
the Disk tab.

Target Pathname is of the string type and is defined as a global MIM


context type.

14.2.3.4.2. Accesses

Various resources from the Filename Template configuration are accessed
to construct the target filename.


14.2.3.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event
Notification Editor.

For further information about the agent message event type, please refer to Section 5.5.14, “Agent
Event”.

• Ready with file: name

Reported along with the name of the target file when it has been successfully stored in the target
directory. If an After Treatment Command is specified, the message also indicates that it has been
executed.

14.2.3.6. Debug Events


There are no debug events for this agent.

14.3. FTP Agents


14.3.1. Introduction
This section describes the FTP Collection and FTP Forwarding agents. These are standard batch agents
of the DigitalRoute® MediationZone® Platform.

14.3.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• Standard FTP (RFC 959, http://www.ietf.org/rfc/rfc0959.txt)

14.3.2. FTP Collection Agent


The FTP Collection agent collects files from a remote file system and inserts them into a Medi-
ationZone® workflow, using the standard FTP (RFC 959) protocol. The agent supports FTP towards
Unix, Windows NT and VAX/VMS machines.

When activated, the collector establishes an FTP session towards the remote host. On failure, additional
hosts are tried if so configured. On success, the source directory on the remote host is scanned for all
files matching the current Filename settings, which are located in the Source tab. In addition, the Fi-
lename Sequence service may be used to further control the matching files. All files found will be
fed one after the other into the workflow.

The agent also offers the possibility to decompress compressed (gzip) files after they have been collected,
before they are inserted into the workflow. When all the files are successfully processed, the agent
stops to await the next activation, scheduled or manually initiated.

14.3.2.1. Configuration
The FTP Collection agent configuration window is displayed when right-clicking on the agent in a
workflow and selecting Configuration..., or when double-clicking on the agent. Part of the configur-
ation may be done in the Filename Sequence or Sort Order service tabs described in Section 4.1.6.2.2,
“Filename Sequence Tab” and Section 4.1.6.2.3, “Sort Order Tab”.


14.3.2.1.1. Connection Tab

The Connection tab contains configuration data that is relevant to a remote server.

Figure 476. The FTP Collection Agent Configuration - Connection Tab

Server Information Provider: If your MediationZone® system is installed with the Multi Server functionality,
you can configure the FTP agent to collect from more than one server. For further
information, see the Multi Server File user's guide.

Host: Primary host name or IP address of the remote host to be connected. If a connection
cannot be established to this host, the Additional Hosts specified in the
Advanced tab are tried.

Note! The FTP Agent supports both IPv4 and IPv6 addresses.

Username: Username for an account on the remote host, enabling the FTP session to log in.

Password: Password related to the Username.

Transfer Type: Data transfer type to be used during file retrieval.

• Binary - the agent uses binary transfer type. Default setting.

• ASCII - the agent uses ASCII transfer type.

File System Type: Type of file system on the remote host.

• Unix - remote host using a Unix file system. Default setting.

• Windows NT - remote host using a Windows NT file system.


• VAX/VMS - remote host using a VAX/VMS file system.

Enable Collection Retries: Select this check box to enable repetitive attempts to connect and start a file
transfer.

When this option is selected, the agent will attempt to connect to the host as many
times as is stated in the Max Retries field described below. If the connection
fails, a new attempt will be made after the number of seconds entered in the Retry
Interval (s) field described below.

Retry Interval (s): Enter the time interval, in seconds, between retries.

If a connection problem occurs, the actual time interval before the first attempt
to reconnect will be the time set in the Timeout field in the Advanced tab plus
the time set in the Retry Interval (s) field. For the remaining attempts, the actual
time interval will be the number of seconds entered in this field.

Max Retries: Enter the maximum number of retries to connect.

In case more than one connection attempt has been made, the number of used
retries will be reset as soon as a file transfer is completed successfully.

Note! This number does not include the original connection attempt.

Enable RESTART Retries: Select this check box to enable the agent to send a RESTART command if the
connection has been broken during a file transfer. The RESTART command
contains information about where in the file to resume the file transfer.

Before selecting this option, ensure that the FTP server supports the RESTART
command.

When this option is selected, the agent will attempt to re-establish the connection,
and resume the file transfer from the point in the file stated in the RESTART
command, as many times as is entered in the Max Retries field described below.
When a connection has been re-established, a RESTART command will be sent
after the number of seconds entered in the Retry Interval (s) field described below.

Note! The RESTART Retries settings will not work if you have selected
to decompress the files in the Source tab, see Section 14.3.2.1.2, “Source
Tab”.

Note! RESTART is not always supported for transfer type ASCII.

For further information about the RESTART command, see http://www.w3.org/Protocols/rfc959/.

Retry Interval (s): Enter the time interval, in seconds, that you want to wait before initiating a restart.
This time interval will be applied for all restart retries.

If a connection problem occurs, the actual time interval before the first attempt
to send a RESTART command will be the time set in the Timeout field in the
Advanced tab plus the time set in the Retry Interval (s) field. For the remaining
attempts, the actual time interval will be the number of seconds entered in this field.


Max Retries: Enter the maximum number of restarts per file that you want to allow.

In case more than one attempt to send the RESTART command has been made,
the number of used retries will be reset as soon as a file transfer is completed
successfully.

14.3.2.1.2. Source Tab

The Source tab contains configurations related to the remote host, source directories and source files.
The following text describes the configuration options available when no custom strategy has been
chosen.

Figure 477. The FTP Collection Agent Configuration - Source Tab

Collection Strategy: If there is more than one collection strategy available in the system, a Collection
Strategy drop-down list will also be visible. For further information about the nature of
the collection strategy, see Section 15, “Appendix VII - Collection Strategies”.

Directory: Absolute pathname of the source directory on the remote host, where the source files
reside. If the FTP server is of UNIX type, the pathname may also be given relative to
the home directory of the User Name account.

Filename: Name of the source files on the remote host. Regular expressions according to Java
syntax apply. For further information, see:

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Example 123.

To match all file names beginning with TTFILE, type: TTFILE.*


Note! When collecting files from VAX file systems, the names of the source files
include both path and filename, which has to be considered when entering the
regular expression.

Compression: Compression type of the source files. Determines if the agent will decompress the files
before passing them on in the workflow.

• No Compression - the agent will not decompress the files.

• Gzip - the agent will decompress the files using gzip.

Move to Temporary Directory: If enabled, the source files will be moved to the automatically created subdirectory
DR_TMP_DIR in the source directory, before collection. This option supports safe
collection when source files repeatedly use the same name.

Append Suffix to Filename: Enter the suffix that you want added to the file name prior to collecting it.

Important! Before you execute your workflow, make sure that none of the file
names in the collection directory include this suffix.

Inactive Source Warning (h): If enabled, when the configured number of hours have passed without any file being
available for collection, a warning message (event) will appear in the System Log and
Event Area:

The source has been idle for more than <n> hours, the last
inserted file is <file>.

Move to: If enabled, the source files will be moved from the source directory (or from the directory
DR_TMP_DIR if using Move Before Collecting), to the directory specified in the Destination
field, after collection.

Note!

The Destination directory has to be located in the same file system as the collected files at
the remote host. Also, absolute pathnames must be defined (relative pathnames
cannot be used).

If a file with the same filename, but with a different content, already exists in the
target directory, the workflow will abort.

If a file with the same file name AND the same content already exists in the target
directory, this file will be overwritten and the workflow will not abort.

Rename: If enabled, the source files will be renamed after the collection, and will remain in the source
directory from which they were collected (or be moved back from the directory
DR_TMP_DIR if using Move Before Collecting).


Note!

When the File System Type VAX/VMS is selected, some issues must be
considered. If a file is renamed after collection on a VAX/VMS system, the filename
might become too long. In that case the following rules apply:

A VAX/VMS filename consists of <file name>.<extension>;<version>, where
the maximum number of characters for each part is:

• <file name>: 39 characters

• <extension>: 39 characters

• <version>: 5 characters

If the new filename turns out to be longer than 39 characters, the agent will move
part of the filename to the extension part. If the total sum of the filename and extension
parts exceeds 78 characters, the last characters are truncated from the extension.

An example:

A_VERY_LONG_FILENAME_WITH_MORE_THAN_39_
CHARACTERS.DAT;5

will be converted to:

A_VERY_LONG_FILENAME_WITH_MORE_THAN_39_.
CHARACTERSDAT;5

Note! Creating a new file on the FTP server with the same file name as the original
file, but with another content, will cause the workflow to abort.

Creating a new file with the same file name AND the same content as the original
file will cause the file to be overwritten.

Remove: If enabled, the source files will be removed from the source directory (or from the directory
DR_TMP_DIR, if using Move Before Collecting), after the collection.

Ignore: If enabled, the source files will remain in the source directory after the collection. This
field is not available if Move Before Collecting is enabled.

Destination: Full pathname of the directory on the remote host into which the source files will be
moved after the collection. This field is only available if Move to is enabled.

Prefix and Suffix: Prefix and/or suffix that will be appended to the beginning and the end of the name of
the source files, respectively, after the collection. These fields are only available if Move
to or Rename is enabled.

Warning! If Rename is enabled, the source files will be renamed in the current
(source or DR_TMP_DIR) directory. Be sure not to assign a Prefix or Suffix
giving files new names that still match the Filename regular expression. That would
cause the files to be collected over and over again.


Search and Replace: Select either the Move to or Rename option to enable Search and Replace.

• Search: Enter the part of the filename that you want to replace.

• Replace: Enter the replacement text.

Search and Replace operate on your entries in a way that is similar to the Unix sed
utility. The identified filenames are modified and forwarded to the following agent in
the workflow.

This functionality also enables you to perform advanced filename modifications:

• Use a regular expression in the Search entry to specify the part of the filename that you
want to extract.

Note! A regular expression that fails to match the original file name will abort
the workflow.

• Enter Replace with characters and meta characters that define the pattern and content
of the replacement text.

Example 124. Search and Replace Examples

To rename the file file1.new to file1.old, use:

• Search: .new

• Replace: .old

To rename the file JAN2011_file to file_DONE, use:

• Search: ([A-Z]*[0-9]*)_([a-z]*)

• Replace: $2_DONE

Note that the search value divides the file name into two parts by using parentheses.
The replace value applies to the second part by using the placeholder $2.

Keep (days): Number of days to keep moved or renamed source files on the remote host after the
collection. In order to delete the source files, the workflow has to be executed (scheduled
or manually) again, after the configured number of days.

Note! A date tag is added to the filename, determining when the file may be
removed. This field is only available if Move to or Rename is enabled.

Route FileReferenceUDR: Select this check box if you want to forward the data to an SQL Loader agent. See the
description of the SQL Loader agent for further information.

14.3.2.1.3. Advanced Tab

The Advanced tab contains configurations related to the use of the FTP service.


For example, in case the FTP server used does not return the file listing in a well-defined format, the
Disable File Detail Parsing option can be useful. For information, refer to the description of that option below.

Figure 478. The FTP Collection Agent Configuration - Advanced Tab

Command Port: The value in this field defines which port number the FTP service will use on the
remote host.

Timeout (s): The maximum time, in seconds, to wait for a response from the server. 0 (zero)
means wait forever.

Passive Mode (PASV): Must be enabled if FTP passive mode is used for the data connection.

In passive mode, the channel for data transfer between client and server is initiated
by the client instead of by the server. This is useful when a firewall is situated
between the client and the server.

Disable File Detail Parsing: Disables parsing of file detail information received from the FTP server. This
enhances the compatibility with unusual FTP servers but disables some functionality.

If file detail parsing is disabled, file modification timestamps will not be available
to the collector. The collector then has no way to distinguish between
directories and regular files, so subdirectories in the input directory must for that
reason not match the filename regular expression. The agent assumes that a file
named DR_TMP_DIR is a directory, because a directory named DR_TMP_DIR
is used when Move to Temporary Directory under the Source tab is activated.
Therefore, it is not allowed to name a regular file in the collection directory
DR_TMP_DIR.

Note! When collecting files from a VAX file system, this option has to be
enabled.

Additional Hosts: List of additional host names or IP addresses that may be used to access the source
directory, from which the source files are collected. These hosts are tried, in sequence
from top to bottom, if the agent fails to connect to the remote host set in
the Connection tab.


Note! The FTP Agent supports both IPv4 and IPv6 addresses.

Use the Add, Edit, Remove, Move up and Move down buttons to configure the
order of the hosts in the list.

14.3.2.2. Transaction Behavior


This section includes information about the FTP collection agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

14.3.2.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Emitted before the first byte of each collected file is fed into a workflow.
End Batch Emitted after the last byte of each collected file has been fed into the system.

14.3.2.2.2. Retrieves

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

Note! If the Cancel Batch behavior defined on workflow level (set in the
workflow properties) is configured to abort the workflow, the agent will never
receive the last Cancel Batch message. In this situation ECS will not be in-
volved, and the file will not be moved.

APL code where Hint End Batch is followed by a Cancel Batch will always
result in workflow abort. Make sure to design the APL code to first evaluate
the Cancel Batch criteria to avoid this sort of behavior.

Hint End Batch If a Hint End Batch message is received, the collector splits the batch at the end of
the current block processed (32 kB), provided that no UDR is split. If the block end
occurs within a UDR, the batch will be split at the end of the preceding UDR.

After a batch split, the collector emits an End Batch Message, followed by a Begin
Batch message (provided that there is data in the subsequent block).

14.3.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

14.3.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.3.2.4.1. Publishes

MIM Parameter Description


File Retrieval This MIM parameter contains a timestamp, indicating when the file transfer
Timestamp started.

File Retrieval Timestamp is of the date type and is defined as a


header MIM context type.
Source Filename This MIM parameter contains the name of the currently processed file, as defined
at the source.

Source Filename is of the string type and is defined as a header MIM


context type.

Note! When collecting files from a VAX file system, the name of the
source file will contain both path and filename.

Source Filenames This MIM parameter contains a list of file names of the files that are about to
be collected from the current collection directory.

Note! When the agent collects from multiple directories, the MIM value
is cleared after collection of each directory. Then, the MIM value is up-
dated with the listing of the next directory.

Source Filenames is of the list<any> type and is defined as a header


MIM context type.

Note! When collecting files from a VAX file system, the name of the
source file will contain both path and filename.

Source File Count This MIM parameter contains the number of files that were available to this
instance for collection at startup. The value is static throughout the execution
of the workflow, even if more files arrive during the execution. The new files
will not be collected until the next execution.

Source File Count is of the long type and is defined as a global MIM
context type.
Source Files Left This parameter contains the number of source files that are yet to be collected.
This is the number that appears in the Execution Manager in Running
Workflows tab in the Backlog column.

Source Files Left is of the long type and is defined as a header MIM
context type.
Source File Size This parameter provides the size of the file that is about to be read. The file is
located on the server.

Source File Size is of the long type and is defined as a header MIM
context type.
Source Host This MIM parameter contains the name of the host from which files are collected,
as defined in the Source or Advanced tabs.

Source Host is of the string type and is defined as a global MIM context
type.
Source Pathname This MIM parameter contains the path name, as defined in the Source tab.


Source Pathname is of the string type and is defined as a global MIM


context type.
Source Username This MIM parameter contains the login user name, as defined in the Source
tab.

Source Username is of the string type and is defined as a global MIM


context type.

14.3.2.4.2. Accesses

The agent does not itself access any MIM resources.

14.3.2.5. Agent Message Events


An agent message is an information message sent from the agent, stated according to the configurations
made in the Event Notification Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: filename

Reported, along with the name of the source file, when the file has been collected and inserted into
the workflow.

• File cancelled: filename

Reported, along with the name of the current file, when a Cancel Batch message is received. This
assumes the workflow is not aborted when a Cancel Batch message is received, see Section 14.3.2.2,
“Transaction Behavior” for further information.

14.3.2.6. Debug Events


Debug messages are dispatched in debug mode. During execution the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Command trace

A printout of the control channel trace either in the Workflow Monitor or in a file.

14.3.3. FTP Forwarding Agent


The FTP Forwarding agent sends files to a remote host by using the standard FTP (RFC 959) protocol.
Files are created when a Begin Batch message is received and closed when an End Batch message is
received. In addition, the agent offers the possibility to compress (gzip) the files.

To ensure that downstream systems will not use the files until they are closed, they are maintained in
a temporary directory on the remote host until the End Batch message is received. This behavior is
also used for Cancel Batch messages. If a Cancel Batch is received, file creation is cancelled.

14.3.3.1. Configuration
The FTP Forwarding agent configuration window is displayed when the agent is double-clicked, or
right-clicked and Configuration... is selected.


14.3.3.1.1. Connection Tab

Figure 479. The FTP Forwarding Agent Configuration - Connection Tab

See description in Figure 476, “The FTP Collection Agent Configuration - Connection Tab”

14.3.3.1.2. Target Tab

Figure 480. The FTP Forwarding Agent Configuration - Target Tab


Input Type: The agent can act on two input types. Depending on which one the agent is
configured to work with, the behavior will differ.

The default input type is bytearray, that is, the agent expects bytearrays. If
nothing else is stated, the documentation refers to input of bytearray.

If the input type is MultiForwardingUDR, the behavior is different. For further
information about the agent's behavior with MultiForwardingUDR input, refer to
Section 14.3.3.3, “MultiForwardingUDR Input”.

Directory: Absolute pathname of the target directory on the remote host, where the forwarded
files will be placed. The pathname may also be given relative to the home directory
of the user's account.

The files will be temporarily stored in the automatically created subdirectory
DR_TMP_DIR in the target directory. When an End Batch message is received,
the files are moved from the subdirectory to the target directory.

Create Directory: Check to create the directory, or the directory structure, of the path that you
specify in Directory.

Note! The directories are created when the workflow is executed.

Compression: Compression type of the target files. Determines if the agent will compress the
files before storage or not.

• No Compression - the agent does not compress the files.

• Gzip - the agent compresses the files using gzip.

Note! No extra extension will be appended to the target filenames, even
if compression is selected.

Produce Empty Files: If you require the creation of empty files, check this setting.

Handling of Already Existing Files: Select the behavior of the agent when the file already exists. The alternatives are:

• Overwrite - The old file will be overwritten and a warning will be logged in
the System Log.

• Add Suffix - If the file already exists the suffix ".1" will be added. If this file
also exists, the suffix ".2" will be tried instead, and so on.

• Abort - This is the default selection and is the option used for upgraded configurations,
that is, workflows from an upgraded system.

Use Temporary Directory: If this option is selected, the agent will move the file to a temporary directory
before moving it to the target directory. After the whole file has been transferred
to the target directory, and the End Batch message has been received, the temporary
file is removed from the temporary directory.

Use Temporary File: If there is no write access to the target directory and, hence, a temporary directory
cannot be created, the agent can store the file in a temporary file directly in the
target directory. After the whole file has been transferred, and the End Batch
message has been received, the temporary file will be renamed.

702
Desktop 7.1

The temporary filename is unique for every execution of the workflow. It consists
of a workflow and agent ID, and a file number.
Abort Handling Select how to handle the file in case of cancelBatch or rollback, either Delete
Temporary File or Leave Temporary File.

Note! When a workflow aborts, the file will not be removed until the next
time the workflow is run.

14.3.3.1.3. Advanced Tab

Figure 481. The FTP Forwarding Agent Configuration - Advanced Tab

Command Port: The value in this field defines which port number the FTP service will use on the
remote host.

Timeout(s): The maximum time, in seconds, to wait for a response from the server. 0 (zero) means
to wait forever.

Passive Mode (PASV): Must be enabled if FTP passive mode is used for the data connection.

In passive mode, the channel for data transfer between client and server is initiated by the client
instead of by the server. This is useful when a firewall is situated between the client and the server.

Additional Hosts: List of additional host names or IP addresses that may be used to access the
target directory for file storage. These hosts are tried, in sequence from top to bottom, if the agent
fails to connect to the remote host set in the Connection tab.

Note! The FTP Agent supports both IPv4 and IPv6 addresses.

Use the Add, Edit, Remove, Move up and Move down buttons to configure the host list.


Note! The names of the created files are determined by the settings in the Filename Template
tab. The use and setting of private threads for an agent, enabling multi-threading within a
workflow, is configured in the Thread Buffer tab. For further information, see Section 4.1.6.2.1,
“Thread Buffer Tab”.

14.3.3.1.4. Backlog Tab

The Backlog tab contains configurations related to backlog functionality. If the backlog is not enabled,
the files will be moved directly to their final destination when an End Batch message is received. If,
however, the backlog is enabled, the files will first be moved to a directory called DR_READY and then
to their final destination. Refer to Section 14.3.3.4.2, “Retrieves” for further information about
transaction behavior.

When the backlog is initialized and when backlogged files are transferred, a note is registered in the
System Log.

Figure 482. The FTP Forwarding Agent Configuration - Backlog Tab

Enable Backlog: Check to enable backlog functionality.

Directory: Base directory in which the agent will create subdirectories to handle backlogged files.
Absolute or relative path names can be used.

Type: Files is the maximum number of files allowed in the backlog folder. Bytes is the total size of
the files that reside in the backlog folder. If a limit is exceeded, the workflow will abort.

Size: Enter the maximum number of files or bytes that the backlog folder can contain.

Processing Order: Determines the order in which the backlogged data will be processed once the
connection is reestablished. Select between First In First Out (FIFO) and Last In First Out (LIFO).

Duplicate File Handling: Specifies the behavior if a file with the same file name as the one being
transferred is detected. The options are Abort or Overwrite, and the action is taken both when a
file is transferred to the target directory and to the backlog.


14.3.3.2. Memory Management


A global memory buffer will be allocated per Execution Context. The size of the buffer is specified
by using a property in the Execution Context's configuration file located in mzhome/etc.

Note that this global backlog memory buffer is used and shared by this and any other forwarding agent
that transfers files to a remote server. The same memory buffer is used for all ongoing transactions on
the same execution context.

When several workflows are scheduled to run simultaneously, and the forwarding agents are assigned
the backlog function, there is a risk that the buffer may be too small. In such a case, it is recommended
that you increase the value of this property.

Example 125.

A possible configuration for a maximum memory of 20 MB is shown here:

<property name="mz.forwarding.backlog.max_memory" value="20"/>

Note that the EC must be restarted for the property to apply.

If no property is set the default value of 10 MB will be used. The amount allocated will be printed out
in the Execution Context's log file. This memory will not affect the Java heap size and is used by the
agent when holding a copy of the file being transferred.

14.3.3.3. MultiForwardingUDR Input


When the agent is set to use MultiForwardingUDR input, it accepts input of the UDR type MultiFor-
wardingUDR declared in the package FNT. The declaration follows:

internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};

Every received MultiForwardingUDR ends up in its filename-appropriate file. The output filename
and path is specified by the fntSpecification field. When the files are received they are written
to temp files in the DR_TMP_DIR directory situated in the root output folder. The files are moved to
their final destination when an end batch message is received. A runtime error will occur if any of the
fields has a null value or the path is invalid on the target file system.

A UDR of the type MultiForwardingUDR whose target filename is not identical to that of its predecessor
is saved in a new output file.

Note! After a target filename that is not identical to its predecessor is saved, you cannot use the
first filename again. For example, saving filename B after saving filename A prevents you from
using A again. Instead, you should first save all the A filenames, then all the B filenames, and
so forth.
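A minimal APL sketch of this ordering constraint is shown below. It reuses only constructs that appear
in Example 126 further down (udrCreate, fntAddString, fntAddDirDelimiter, strToBA and udrRoute);
the directory and file names are arbitrary illustrations, and it assumes, as the note above implies, that
consecutive UDRs addressed to the same target filename end up in the same output file.

import ultra.FNT;

// Build a MultiForwardingUDR addressed to <dir>/<file> with the given content.
MultiForwardingUDR buildUDR(string dir, string file, bytearray content) {
    FNTUDR fntudr = udrCreate(FNTUDR);
    fntAddString(fntudr, dir);
    fntAddDirDelimiter(fntudr);
    fntAddString(fntudr, file);

    MultiForwardingUDR out = udrCreate(MultiForwardingUDR);
    out.fntSpecification = fntudr;
    out.content = content;
    return out;
}

consume {
    bytearray partA1;
    strToBA(partA1, "first part of file A");
    bytearray partA2;
    strToBA(partA2, "second part of file A");
    bytearray partB;
    strToBA(partB, "content of file B");

    // Route everything destined for file A before the first UDR for file B.
    // Once a UDR for file B has been routed, file A must not be used again.
    udrRoute(buildUDR("dir1", "fileA", partA1));
    udrRoute(buildUDR("dir1", "fileA", partA2));
    udrRoute(buildUDR("dir1", "fileB", partB));
}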

Non-existing directories will be created if the Create Non-Existing Directories check box on the
Filename Template tab is checked. If it is not checked, a runtime error will occur if a previously
unknown directory exists in the FNTUDR of an incoming MultiForwardingUDR. Every configuration
option referring to bytearray input is ignored when MultiForwardingUDRs are expected.

For further information about Filename Template, see Section 4.1.6.2.4, “Filename Template Tab”.

Example 126.

This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of type MultiForwardingUDRs.

import ultra.FNT;

MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent){

//Create the FNTUDR


FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir);
fntAddDirDelimiter(fntudr);//Add a directory
fntAddString(fntudr, file);//Add a file

MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;

return multiForwardingUDR;
}

consume {

bytearray file1Content;
strToBA (file1Content, "file nr 1 content");

bytearray file2Content;
strToBA (file2Content, "file nr 2 content");

//Send MultiForwardingUDRs to the forwarding agent


udrRoute(createMultiForwardingUDR
("dir1", "file1", file1Content));
udrRoute(createMultiForwardingUDR
("dir2", "file2", file2Content));
}

The Analysis agent shown above in the example will send two MultiForwardingUDRs to the
forwarding agent. Two files with different contents will be placed in two separate subfolders
in the root directory. The Create Non-Existing Directories check box under the Filename
Template tab in the configuration of the forwarding agent must be checked if the directories do
not already exist.

14.3.3.4. Transaction Behavior


The transaction behavior for the FTP forwarding agent is presented here. For further information about
the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.


14.3.3.4.1. Emits

The agent emits nothing.

14.3.3.4.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Begin Batch: When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if not already created. Then, a target file is created and opened
in the temporary directory.

End Batch: When an End Batch message is received, the target file in DR_TMP_DIR is closed and,
finally, the file is moved from the temporary directory to the target directory.

Cancel Batch: If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.

14.3.3.5. Introspection
The agent consumes bytearray or MultiForwardingUDR types.

14.3.3.6. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.3.3.6.1. Publishes

MultiForwardingUDR's FNTUDR: This MIM parameter is only set when the agent expects input of
MultiForwardingUDR type. The MIM value is a string representing the sub path from the output root
directory on the target file system. The path is specified by the fntSpecification field of the
last received MultiForwardingUDR. For further information on using input of MultiForwardingUDR
type, refer to Section 14.3.3.3, “MultiForwardingUDR Input”.

This parameter is of the string type and is defined as a batch MIM context type.

File Transfer Timestamp: This MIM parameter contains a timestamp, indicating when the target file
is created in the temporary directory.

File Transfer Timestamp is of the date type and is defined as a trailer MIM context type.

Target Filename: This MIM parameter contains the target filename, as defined in Filename Template.

Target Filename is of the string type and is defined as a trailer MIM context type.

Target File Size: This MIM parameter provides the size of the file that has been written. The file is
located on the server.

Target File Size is of the long type and is defined as a trailer MIM context type.

Target Hostname: This MIM parameter contains the name of the target host, as defined in the Target
or Advanced tab of the agent.

Target Hostname is of the string type and is defined as a global MIM context type.

Target Pathname: This MIM parameter contains the path to the target file, as defined in the FTP tab
of the agent.

Target Pathname is of the string type and is defined as a global MIM context type.

Target Username: This MIM parameter contains the login name of the user connecting to the remote
host, as defined in the FTP tab of the agent.

Target Username is of the string type and is defined as a global MIM context type.

14.3.3.6.2. Accesses

The agent accesses various resources from the Filename Template configuration to construct the target
filename.

14.3.3.7. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: filename

Reported, along with the name of the target file, when the file is successfully written to the target
directory.

14.3.3.8. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Command trace

A printout of the control channel trace either in the Workflow Monitor or in a file.

14.4. Hadoop File System Agents


14.4.1. Introduction
This section describes the Hadoop File System collection and forwarding agents. These agents are
extension agents for the DigitalRoute® MediationZone® Platform.

14.4.1.1. Prerequisites
The reader of this information must be familiar with:

• The MediationZone® Platform

• The Apache Hadoop Project and Hadoop Distributed File System


• Amazon S3

14.4.2. Preparations
As there are several different distributions of Hadoop available, you may have to create your own mzp
package containing the specific Hadoop jar files to be used, and commit this package into your Medi-
ationZone® system in order to start using the Hadoop File System agents. This is required when you
are using a different distribution than the one that is available at hadoop.apache.org. The included mzp
package has been tested with Apache Hadoop version 2.2.0 and Amazon S3.

To create and commit the Hadoop mzp package:

1. Copy the set of jar files for the Hadoop version you want to use to the machine that MediationZone®
is running on.

The set of jar files comprises commons-configuration, commons-io, hadoop-auth,


hadoop-common, hadoop-hdfs and protobuf-java. If any of these files does not exist,
or does not work, contact DigitalRoute® .

Depending on the file structure, the files may be located in different folders, but typically they will
be located in a folder called hadoop, or hadoop-common, where the hadoop-common.jar
file is placed in the root directory, and the rest of the jar files are placed in a subdirectory called
/lib.

2. Set a variable called $FILES for all the different jars.

Example 127.

This example shows how this is done for the Cloudera Distribution of Hadoop 4.

FILES="\
file=commons-configuration-1.6.jar \
file=commons-io-2.1.jar \
file=hadoop-auth-2.0.0-cdh4.4.0.jar \
file=hadoop-common-2.0.0-cdh4.4.0.jar \
file=hadoop-hdfs-2.0.0-cdh4.4.0.jar \
file=protobuf-java-2.4.0a.jar"

Note! These files are version specific, which means that the list in the example will not work
for other versions of Hadoop.

3. Create the mzp package:

mzsh pcreate "Apache Hadoop" "<distribution>


apache_hadoop_<application>.mzp -level platform $FILES

Note! It is important that the package is called exactly "Apache Hadoop".


Example 128.

This example shows how this could look for the Cloudera Distribution of Hadoop 4.

mzsh pcreate "Apache Hadoop" "CDH4.4" apache_hadoop_cdh4.mzp -level platform $FILES

4. Commit the new package:

mzsh mzadmin/<password> pcommit apache_hadoop_<application>.mzp

5. Restart the Platform and ECs:

mzsh shutdown platform <ec> <ec>


mzsh startup platform <ec> <ec>

14.4.3. Hadoop FS Collection Agent


The Hadoop FS collection agent collects files from a HDFS, which is the primary distributed storage
used by Hadoop applications, and inserts them into a MediationZone® workflow. A HDFS cluster
primarily consists of a NameNode that manages the file system meta data, and DataNodes that store
the actual data. Initially, the source directory is scanned for all files matching the current filter. In ad-
dition, the Filename Sequence and Sort Order services may be utilized to further manage the
matching of files, although they may not be used at the same time since it will cause the workflow to
abort. All files found will be fed one after the other into the workflow.

When a file has been successfully processed by the workflow, the agent offers the possibility of moving,
renaming, removing or ignoring the original file. The agent can also be configured to keep files for a
set number of days. In addition, the agent offers the possibility of decompressing compressed (gzip)
files after they have been collected. When all the files are successfully processed, the agent stops to
await the next activation, whether it is scheduled or manually initiated.

14.4.3.1. Configuration
The Hadoop FS collection agent configuration window is displayed when you double-click on the
agent, or if you right-click on the agent and select the Configuration... option. Part of the configuration
may be done in the Filename Sequence or Sort Order service tab described in the Filename Sequence
Tab and the Sort Order Tab sections in the Desktop user's guide.

14.4.3.1.1. Hadoop FS Tab

The Hadoop FS tab contains configurations related to the placement and handling of the source files
to be collected by the agent.


Figure 483. Hadoop FS Collection Agent configuration, Hadoop FS tab.

File System Type: Select the file system type used in this drop-down list: Distributed File System or
Amazon S3. See Section 14.4.3.1.2, “File System Type Settings” for further information.

Replication: The replication factor per file. See the Apache Hadoop Project documentation for
information about the replication factor. This setting has no effect in the Hadoop FS collection agent.

Collection Strategy: If more than one collection strategy is available in the system, a Collection
Strategy drop-down list will also be visible. For more information about the nature of the collection
strategy, refer to Section 15, “Appendix VII - Collection Strategies”.

Directory: Absolute pathname of the directory on the remote file system, where the source files reside.

Filename: Name of the source files on the local file system. Regular expressions according to Java
syntax apply. For further information, see:

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Example 129.

To match all filenames beginning with TTFILE, type: TTFILE.*


Compression: Compression type of the source files. Determines if the agent will decompress the files
before passing them on in the workflow.

• No Compression - The agent does not decompress the files. Default setting.

• Gzip - The agent decompresses the files using gzip.

Move to Temporary Directory: If enabled, the source files will be moved to the automatically created
subdirectory DR_TMP_DIR in the source directory, prior to collection. This option supports safe
collection of a source file reusing the same name.

Append Suffix to Filename: Enter the suffix that you want added to the file name prior to collecting it.

Important! Before you execute your workflow, make sure that none of the file names in the
collection directory include this suffix.

Inactive Source Warning (hours): If the specified value is greater than zero, and if no file has been
collected during the specified number of hours, the following message is logged:

The source has been idle for more than <n> hours, the last inserted file is <file>.

Move to: If enabled, the source files will be moved from the source directory (or from the directory
DR_TMP_DIR, if using Move Before Collecting) to the directory specified in the Destination field,
after the collection.

If the Prefix or Suffix fields are set, the file will be renamed as well.

Note! It is possible to move collected files from one file system to another; however, it has a
negative impact on performance. Also, the workflow will not be transaction safe, because of
the nature of the copy plus delete functionality.

If you want to move files between file systems, it is strongly recommended to route the Hadoop
FS collection agent directly to a Hadoop FS forwarding agent, configuring the output agent to
store the files in the desired directory. Refer to Section 14.4.4, “Hadoop FS Forwarding Agent”
for further information.

This is because of the following reasons:

• It is not always possible to move collected files from one file system to another.

• Moving files between different file systems usually causes worse performance than having
them on the same file system.

• The workflow will not be transaction safe, because of the nature of the copy plus delete
functionality.

Rename: If enabled, the source files will be renamed after the collection, remaining in the source
directory from which they were collected (or moved back from the directory DR_TMP_DIR, if using
Move Before Collecting).

Remove: If enabled, the source files will be removed from the source directory (or from the directory
DR_TMP_DIR, if using Move Before Collecting), after the collection.

Ignore: If enabled, the source files will remain in the source directory after collection.


Destination: Absolute pathname of the directory on the local file system of the EC into which the
source files will be moved after collection. The pathname might also be given relative to the
$MZ_HOME environment variable.

This field is only enabled if Move to is selected.

Prefix/Suffix: Prefix and/or suffix that will be added to the beginning and the end, respectively, of
the name of the source files, after the collection.

These fields are only enabled if Move to or Rename is selected.

Note! If Rename is enabled, the source files will be renamed in the current directory (source
or DR_TMP_DIR). Be sure not to assign a Prefix or Suffix giving files new names that still
match the filename regular expression, or else the files will be collected over and over again.

Search and Replace:

Note! To apply Search and Replace, select either Move to or Rename.

• Search: Enter the part of the filename that you want to replace.

• Replace: Enter the replacement text.

Search and Replace operate on your entries in a way that is similar to the Unix sed utility. The
identified filenames are modified and forwarded to the following agent in the workflow.

This functionality also enables you to perform advanced filename modifications:

• Use a regular expression in the Search entry to specify the part of the filename that you want to
extract.

Note! A regular expression that fails to match the original file name will abort the workflow.

• Enter Replace with characters and meta characters that define the pattern and content of the
replacement text.


Example 130. Search and Replace Examples

To rename the file file1.new to file1.old, use:

• Search: .new

• Replace: .old

To rename the file JAN2011_file to file_DONE, use:

• Search: ([A-Z]*[0-9]*)_([a-z]*)

• Replace: $2_DONE

Note that the search value divides the file name into two parts by using brackets.
The replace value applies the second part by using the place holder $2.

Keep (days): Number of days to keep source files after the collection. In order to delete the source
files, the workflow has to be executed (scheduled or manually) again, after the configured number
of days.

Note that a date tag is added to the filename, determining when the file may be removed. This field
is only enabled if Move to or Rename is selected.

14.4.3.1.2. File System Type Settings

Depending on which type of file system you have selected in the File System Type list, the settings
for the File System will vary.

14.4.3.1.2.1. Distributed File System

When you have selected Distributed File System, you have the following settings:

Figure 484.

Host: Enter the IP address or hostname of the NameNode in this field. See the Apache Hadoop
Project documentation for further information about the NameNode.

Port: Enter the port number of the NameNode in this field.

14.4.3.1.2.2. Amazon S3

When you have selected Amazon S3, you have the following settings:


Figure 485.

Access Key: Enter the access key for the user who owns the Amazon S3 account in this field.

Secret Key: Enter the secret key for the stated access key in this field.

Bucket: Enter the name of the Amazon S3 bucket in this field.

14.4.3.2. Transaction Behavior


The transaction behavior for the Hadoop FS collection agent is presented here. For more information
about general MediationZone® transaction behavior please refer to Section 4.1.11.8, “Transactions”.

14.4.3.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Begin Batch: Emitted before the first part of each collected file is fed into a workflow.

End Batch: Emitted after the last part of each collected file has been fed into the system.

14.4.3.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Cancel Batch: If a Cancel Batch message is received, the agent sends the batch to ECS.

Note! If the Cancel Batch behavior defined on workflow level is configured to abort the
workflow, the agent will never receive the last Cancel Batch message. In this situation ECS
will not be involved, and the file will not be moved, but left at its current place.

Hint End Batch: If a Hint End Batch message is received, the collector splits the batch at the end of
the current block processed (32 kB). If the block end occurs within a UDR, the batch will be split
at the end of the preceding UDR.

After a batch split, the collector emits an End Batch message, followed by a Begin Batch message
(provided that there is data in the subsequent block).

14.4.3.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.


14.4.3.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.4.3.4.1. Publishes

File Modified Timestamp: This MIM parameter contains a timestamp, indicating when the file is
stored in the collection directory.

File Modified Timestamp is of the date type and is defined as a header MIM context type.

File Retrieval Timestamp: This MIM parameter contains a timestamp, indicating when the file
processing starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.

Source File Count: This MIM parameter contains the number of files available to this instance for
collection at startup. The value is constant throughout the execution of the workflow, even if more
files arrive during the execution. The new files will not be collected until the next execution.

Source File Count is of the long type and is defined as a global MIM context type.

Source File Size: This MIM parameter contains the file size, in bytes, of the source file.

Source File Size is of the long type and is defined as a header MIM context type.

Source Filename: This MIM parameter contains the name of the currently processed file, as defined
at the source.

Source Filename is of the string type and is defined as a header MIM context type.

Source Filenames: This MIM parameter contains a list of file names of the files that are about to be
collected from the current collection directory.

Note! When the agent collects from multiple directories, the MIM value is cleared after
collection of each directory. Then, the MIM value is updated with the listing of the next
directory.

Source Filenames is of the list<any> type and is defined as a header MIM context type.

Source Files Left: This parameter contains the number of source files that are yet to be collected.
This is the number that appears in the Execution Manager backlog.

Source Files Left is of the long type and is defined as a header MIM context type.

Source Pathname: This MIM parameter contains the path to the directory where the file currently
under processing is located.

Source Pathname is of the string type and is defined as a global MIM context type. The path is
defined in the Hadoop FS tab.


14.4.3.4.2. Accesses

The agent does not itself access any MIM resources.

14.4.3.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type please refer to Section 5.5.14, “Agent
Event”.

• Ready with file: filename

Reported along with the name of the source file that has been collected and inserted into the workflow.

• File cancelled: filename

Reported along with the name of the current file, each time a Cancel Batch message is received.
This assumes the workflow is not aborted; refer to Section 14.4.3.2, “Transaction Behavior” for
further information.

14.4.3.6. Debug Events


There are no debug events for this agent.

14.4.4. Hadoop FS Forwarding Agent


The Hadoop FS forwarding agent creates files on the remote file system containing the received data.
Files are created when a Begin Batch message is received, and closed when an End Batch message is
received. In addition, the Filename Template service offers the possibility to compress (gzip) the
files, or to further process them, using commands.

To ensure that downstream systems will not use the files until they are closed, they are stored in a
temporary directory until the End Batch message is received. This behavior also applies to Cancel
Batch messages. If a Cancel Batch is received, file creation is cancelled.

14.4.4.1. Configuration
The Hadoop FS forwarding agent configuration window is displayed when you double-click the agent
in a workflow, or right-click it and select Configuration....


14.4.4.1.1. Hadoop FS Tab

Figure 486. Hadoop FS Forwarding agent configuration window, Hadoop FS tab.

Input Type: The agent can act on two input types. Depending on which one the agent is configured
to work with, the behavior will differ.

The default input type is bytearray, that is, the agent expects bytearrays. If nothing else is stated,
the documentation refers to input of bytearray.

If the input type is MultiForwardingUDR, the behavior is different. For further information about
the agent's behavior with MultiForwardingUDR input, refer to Section 14.4.4.1.4,
“MultiForwardingUDR Input”.

File System Type: Select the file system type used in this drop-down list: Distributed File System
or Amazon S3. See Section 14.4.4.1.2, “File System Type Settings” for further information.

Replication: Enter the replication factor per file. See the Apache Hadoop Project documentation
for information about the replication factor.

Directory: Absolute pathname of the target directory on the remote file system, where the forwarded
files will be stored.

The files will be temporarily stored in the automatically created subdirectory DR_TMP_DIR, in
the target directory. When an End Batch message is received, the files are moved from the
subdirectory to the target directory.

Create Directory: Check to create the directory, or the directory structure, of the path that you
specify in Directory.

Note! The directories are created when the workflow is executed.

Compression: Compression type of the target files. Determines if the agent will compress the files
or not.

• No Compression - The agent does not compress the files. Default setting.

• Gzip - The agent compresses the files using gzip.

Note! No extra extension will be appended to the target filenames, even if compression is
selected. The configuration of the filenames is managed in the Filename Template tab only.

Command: If a Command is supplied, it will be executed on each successfully closed temporary
file, using the parameter values declared in Arguments. Please refer to Section 1.3, “Commands”
for further information.

Note! At this point the temporary file is created and closed, however the final filename has
not yet been created.

The entered Command has to exist in the MediationZone® execution environment, either
including an absolute path, or to be found in the PATH for the execution environment.

Arguments: This field is optional. Each entered parameter value has to be separated from the
preceding value with a space.

The temporary filename is inserted as the second last parameter, and the final filename is inserted
as the last parameter, automatically. This means that if, for instance, no parameter is given in the
field, the arguments will be as follows:

$1=<temporary_filename> $2=<final_filename>

If three parameters are given in the field Arguments, the arguments are set as:

$1=<parameter_value_#1>
$2=<parameter_value_#2>
$3=<parameter_value_#3>
$4=<temporary_filename>
$5=<final_filename>

Produce Empty Files: If enabled, files will be produced even if they contain no data.

14.4.4.1.2. File System Type Settings

Depending on which type of file system you have selected in the File System Type list, the settings
for the File System will vary.

14.4.4.1.2.1. Distributed File System

When you have selected Distributed File System, you have the following settings:


Figure 487.

Host: Enter the IP address or hostname of the NameNode in this field. See the Apache Hadoop
Project documentation for information about the NameNode.

Port: Enter the port number of the NameNode in this field.

14.4.4.1.2.2. Amazon S3

When you have selected Amazon S3, you have the following settings:

Figure 488.

Access Key: Enter the access key for the user who owns the Amazon S3 account in this field.

Secret Key: Enter the secret key for the stated access key in this field.

Bucket: Enter the name of the Amazon S3 bucket in this field.

14.4.4.1.3. Filename Template Tab

The names of the created files are determined by the settings in the Filename Template tab.

For a detailed description of the Filename Template tab, see Section 4.1.6.2.4, “Filename Template
Tab”.

14.4.4.1.4. MultiForwardingUDR Input

When the agent is set to use MultiForwardingUDR input, it accepts input of the UDR type MultiFor-
wardingUDR declared in the package FNT. The declaration follows:

internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};

Every received MultiForwardingUDR ends up in its filename-appropriate file. The output filename
and path is specified by the fntSpecification field. When the files are received they are written
to temp files in the DR_TMP_DIR directory situated in the root output folder. The files are moved to
their final destination when an end batch message is received. A runtime error will occur if any of the
fields has a null value or the path is invalid on the target file system.

A UDR of the type MultiForwardingUDR whose target filename is not identical to that of its predecessor
is saved in a new output file.


Note! After a target filename that is not identical to its predecessor is saved, you cannot use the
first filename again. For example, saving filename B after saving filename A prevents you from
using A again. Instead, you should first save all the A filenames, then all the B filenames, and
so forth.

Non-existing directories will be created if the Create Non-Existing Directories checkbox under the
Filename Template tab is checked. If it is not checked, a runtime error will occur if a previously unknown
directory exists in the FNTUDR of an incoming MultiForwardingUDR. Every configuration option
referring to bytearray input is ignored when MultiForwardingUDRs are expected.

Example 131.

This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of type MultiForwardingUDRs.

import ultra.FNT;

MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent){

//Create the FNTUDR


FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir);
fntAddDirDelimiter(fntudr);//Add a directory
fntAddString(fntudr, file);//Add a file

MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;

return multiForwardingUDR;
}

consume {

bytearray file1Content;
strToBA (file1Content, "file nr 1 content");

bytearray file2Content;
strToBA (file2Content, "file nr 2 content");

//Send MultiForwardingUDRs to the forwarding agent


udrRoute(createMultiForwardingUDR
("dir1", "file1", file1Content));
udrRoute(createMultiForwardingUDR
("dir2", "file2", file2Content));
}

The Analysis agent shown above in the example will send two MultiForwardingUDRs to the
forwarding agent. Two files with different contents will be placed in two separate subfolders
in the root directory. The Create Non-Existing Directories check box in the Filename Template
tab in the configuration of the forwarding agent must be selected if the directories do not
already exist.


14.4.4.2. Transaction Behavior


The transaction behavior for the Hadoop FS forwarding agent is presented here. For more information
about general MediationZone® transaction behavior please refer to Section 4.1.11.8, “Transactions”.

14.4.4.2.1. Emits

The agent does not emit anything.

14.4.4.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Begin Batch: When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if not already created. Then a target file is created and opened
in the temporary directory.

End Batch: When an End Batch message is received, the target file in DR_TMP_DIR is first closed
and then the Command, if specified in After Treatment, is executed. Finally, the file is moved from
the temporary directory to the target directory.

Cancel Batch: If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.

14.4.4.3. Introspection
The agent consumes bytearray or MultiForwardingUDR types.

14.4.4.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see the
section Meta Information Model in the Desktop user's guide.

14.4.4.4.1. Publishes

File Transfer Timestamp: This MIM parameter contains a timestamp, indicating when the target file
was created in the temporary directory.

File Transfer Timestamp is of the date type and is defined as a trailer MIM context type.

MultiForwardingUDR's FNTUDR: This MIM parameter is only set when the agent expects input of
MultiForwardingUDR type. The MIM value is a string representing the sub path from the output root
directory on the target file system. The path is specified by the fntSpecification field of the
last received MultiForwardingUDR. For further information on using input of MultiForwardingUDR
type, refer to Section 14.4.4.1.4, “MultiForwardingUDR Input”.

This parameter is of the string type and is defined as a batch MIM context type.

Target Filename: This MIM parameter contains the name of the target file, as defined in Filename
Template.

Target Filename is of the string type and is defined as a trailer MIM context type.

Target Pathname: This MIM parameter contains the path to the output directory, as defined in the
Hadoop FS tab.

Target Pathname is of the string type and is defined as a global MIM context type.

14.4.4.4.2. Accesses

Various resources from the Filename Template configuration are accessed to construct the target
filename.

14.4.4.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: name

Reported along with the name of the target file when it has been successfully stored in the target
directory. If an After Treatment Command is specified, the message also indicates that it has been
executed.

14.4.4.6. Debug Events


There are no debug events for this agent.

14.5. Inter Workflow Agents


14.5.1. Introduction
This section describes the Inter Workflow agents. These are standard agents of the MediationZone®
platform. The agents are used for temporary storage of stream data, for merging stream data, and as
gateways between real-time and batch workflows. Both agents can be part of both batch and real-time
workflows.

14.5.1.1. Prerequisites
The reader of this document should be familiar with:

• The MediationZone® Platform

14.5.2. Overview
The Inter Workflow agents allow files to be distributed between workflows within the same Medi-
ationZone® system. It is especially useful when transferring data from real-time workflows to batch
workflows.

The Inter Workflow agents use an Inter Workflow storage server to manage the actual data storage.
The storage server can either run on an Execution Context or on the Platform. The storage server and
base directory to use is configured in an Inter Workflow profile.


Figure 489. The Inter Workflow agents distribute files from one workflow to another.

Several forwarding workflows may be configured to distribute batches to the same profile; however,
only one collection workflow at a time can be activated to collect from it.

14.5.2.1. Inter Workflow Related UDR Type


The UDR type created by default in the Inter Workflow agent can be viewed in the UDR Internal
Format Browser in the InterWorkflow folder. To open the browser, open an APL Editor, right-click
on the text pad in the editing area and select UDR Assistance.... The browser opens.

14.5.3. Inter Workflow Profile


The Inter Workflow profile enables you to configure the storage server that the Inter Workflow for-
warding and collection agents use for communication.

It is safe to accumulate a lot of data in the storage server directory. When the initial set of directories
has been populated with a predefined number of files, new directories are automatically created to
avoid problems with file system performance.

The Inter Workflow profile is loaded when you start a workflow that depends on it. Changes to the
profile become effective when you restart the workflow.

Note! Files collected by the Inter Workflow agent depend on, and are connected with, the Inter
Workflow profile in use. If an Inter Workflow profile is imported to the system, files left in the
storage connected to the old profile will be unreachable.

14.5.3.1. Inter Workflow Profile Menu


The main menu changes depending on which Configuration type has been opened in the currently
active tab. There is a set of standard menu items that are visible for all Configurations, and these are
described in Section 3.1.1, “Configuration Menus”.

There is one menu item that is specific for Inter Workflow profile configurations, and it is described
in the coming section.

14.5.3.1.1. The Edit Menu

External References To: Enable External References in an agent profile field. Please refer to
Section 9.5.3, “Enabling External References in an Agent Profile Field”.

14.5.3.2. Inter Workflow Profile Buttons


The toolbar changes depending on which Configuration type is currently open in the active tab. There
is a set of standard buttons that are visible for all Configurations, and these buttons are described in
Section 3.1.2, “Configuration Buttons”.

There are no additional buttons for Inter Workflow profile.

14.5.3.3. Profile Configuration


To open the configuration, click the New Configuration button in the upper left part of the Medi-
ationZone® Desktop window, and then select Inter Workflow Profile from the menu.

Figure 490. Inter Workflow profile configuration

Storage Host: From the drop-down list, select either Automatic, Platform, or an activated Execution
Context.

Using Automatic means that the storage will use the Execution Context where the first workflow
accessing this profile is started. Following workflows using the same profile will use the same
Execution Context for storage until the first workflow accessing the profile is stopped. The Execution
Context where the next workflow accessing this profile is started will then be used for storage. The
location of the storage will therefore vary depending on the start order of the workflows.


Example 132.

Below is an example of a scenario where Automatic is used as storage host with the following setup:

• Workflow 1 is running on EC1 with Interworkflow Forwarding agent

• Workflow 2 is running on EC2 with Interworkflow Collection agent

1. Workflow 2 is started.

EC2 will be used for storage

2. Workflow 1 is started.

EC2 is still used for storage.

3. Workflow 1 is stopped.

EC2 is still used for storage.

4. Workflow 2 is stopped.

No EC is used for storage.

5. Workflow 1 is started.

EC1 is used for storage.

Note! The workflow must be running on the same Execution Context where its storage resides.
If the storage is configured to be Automatic, its corresponding directory must be on a file
system shared between all the Execution Contexts.

Root Directory: Absolute pathname of the directory on the storage handler where the temporary
files will be placed.

Max Bytes: An optional parameter stating the limit of the space consumed by the files stored in the
Root Directory. If the limit is reached, any Inter Workflow forwarding agent using this profile will
abort.

Max Batches: An optional parameter stating the maximum number of batches stored in the Root
Directory. If the limit is reached, any Inter Workflow forwarding agent using this profile will abort.

Compress intermediate data: Select this check box if you want to compress the data sent between
the Inter Workflow agents.

The data will be compressed into *.gzip format with compression level 5.

Named MIMs: A list of user-defined MIM names. These variables do not have any values assigned.
They are populated with existing MIM values from the Inter Workflow forwarding agent. This way,
MIMs from the forwarding workflow can be passed on to the collecting workflow.

14.5.3.4. Enabling External Referencing


You enable External Referencing of profile fields from the profile view's main menu. For detailed in-
structions, see Section 9.5.3, “Enabling External References in an Agent Profile Field”.


14.5.4. Inter Workflow Collection Agent


The collecting Inter Workflow agent collects batch files from a storage server. The data that it collects
has previously been submitted to the storage server by a forwarding Inter Workflow agent.

Note! An Inter Workflow profile cannot be used by more than one Inter Workflow collection
agent at a time. A workflow trying to use an already locked profile will abort.

Note! In a batch workflow, the collecting Inter Workflow agent will hand over the data, in UDR
form, to the next agent in turn, one at a time.

In a realtime workflow on the other hand, the collecting Inter Workflow agent routes the UDRs
into the workflow, one batch at a time.

It is possible to restrict memory consumption by setting mz.iwf.max_size_block in
execution.xml or platform.xml, on the EC or Platform that runs the Inter Workflow storage. If the
agent wants to allocate more memory than the given property value during collection, the collection
will abort instead of suffering a possible "out of memory". The value should be specified in bytes.
See the following example:

<property name="mz.iwf.max_size_block" value="65535"/>

Note! The minimum value is 32000 bytes, and even if a lower value is configured, 32000 will
apply.

Every batch file that the agent routes to the workflow is preceded with a special UDR that is called
NewFileUDR, and contains the name of the batch file.
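A hedged APL sketch of how a downstream agent (for example an Analysis agent placed after the
collection agent) might separate these marker UDRs from ordinary data is shown below. It is an
assumption-laden illustration rather than documented behavior: the import path for the InterWorkflow
package and the use of the APL functions instanceOf and debug should be verified against the UDR
Internal Format Browser and the APL Reference Guide. No NewFileUDR field is accessed here, since
the exact field layout should be checked in the browser first.

// Assumption: NewFileUDR is available via the InterWorkflow package shown
// in the UDR Internal Format Browser; verify the import path before use.
import ultra.InterWorkflow;

consume {
    if (instanceOf(input, NewFileUDR)) {
        // A NewFileUDR announces the start of the next batch file; it carries
        // the batch file name (see the browser for the actual field name).
        debug("New batch file announced by the Inter Workflow collection agent");
    } else {
        // Ordinary data UDRs are passed on unchanged.
        udrRoute(input);
    }
}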

Figure 491. The Inter Workflow collection agent in Batch and Real-Time Workflows

14.5.4.1. Batch Workflow - Configuration


The Inter Workflow collection agent configuration window is displayed when you right-click on the
agent and select Configuration... or when you double-click on the agent.


Figure 492. Inter Workflow collection agent configuration.

Profile: The name and most recent version of the Inter Workflow profile (select Inter Workflow
Profile after clicking the New Configuration button in the Desktop).

All workflows in the same workflow configuration can use separate Inter Workflow profiles, if that
is preferred. In order to do that, the profile must be set to Default in the Workflow Table tab found
in the Workflow Properties dialog. After that, each workflow in the table can be appointed different
profiles.

Deactivate on Idle: If enabled, the agent will deactivate the workflow if it has no more batches to
collect.

Note! Make sure a proper scheduling criterion is defined for a workflow with this feature
turned on, since the agent will cause the workflow to deactivate immediately if no data is
available.

No Merge: If enabled, each incoming batch will generate one outgoing batch.

Merge Batches Based on Criteria: If enabled, the incoming batches will be merged into larger entities
as soon as any of the merge criteria defined in the Merge Definition are met.

Note! It is not possible to import MIM values for all batches in a merge; header MIMs for
the first batch will be selectable, as well as trailer and batch MIMs for the last batch.

Merge All Available Batches: If enabled, all incoming batches will be inserted into one outgoing
batch.

Number of Bytes: The size of the batches produced by the collection agent. The incoming files are
never split. For instance, if 300 is entered and the source files are 200 bytes each, the produced
batches will be 400 bytes.

Number of Batches: The number of incoming batches to merge.

Age of Oldest Batch (sec): Indicates how long (in seconds) the agent will wait after the first incoming
batch. When this time has expired, an outgoing batch will be produced, regardless of whether the
Number of Bytes/Batches criteria have been fulfilled or not.


14.5.4.1.1. Transaction Behavior

This section includes information about the Inter Workflow collection agent transaction behavior. For
information about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transac-
tions”.

14.5.4.1.1.1. Emits

The agent emits commands that change the state of the file currently processed.

Begin Batch: Emitted before the first byte of the collected file(s) is fed into a workflow.

End Batch: Emitted after the last byte of the last collected file has been fed into the system.

14.5.4.1.1.2. Retrieves

The agent retrieves commands from other agents, and based on them generates a state change of the
file currently processed.

Cancel Batch: If Never Abort has been configured on workflow level, in Workflow Properties, and
a Cancel Batch message is received, the agent sends the batch (the UDRs that have been successfully
read) to ECS. The batch will be closed and moved immediately, regardless of the criteria defined in
the Merge Definition, and the workflow will continue executing.

If, on the other hand, the Cancel Batch behavior has been configured to abort the workflow (default),
the batch will not be sent to ECS.

For further information, refer to Section 4.1.8, “Workflow Properties”.

Note! If the Cancel Batch behavior defined on workflow level is configured to abort the
workflow, the agent will never receive the last Cancel Batch message. In this situation ECS
will not be involved, and the file will not be moved.

14.5.4.1.2. Introspection

The agent produces bytearray types.

14.5.4.1.3. Meta Information Model

For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.5.4.1.3.1. Publishes

Number of Source Batches: This MIM parameter contains the number of incoming batches to merge
in the current outgoing batch. If no merge is selected, this MIM is static and set to 1 (one).

Number of Source Batches is of the int type and is defined as a header MIM context type.

Outgoing Batch Size: This MIM parameter contains the size of the batch produced by the agent.

Outgoing Batch Size is of the long type and is defined as a header MIM context type.

Source Files Left: This MIM parameter contains the number of source files still to be collected. This
is the number presented in the Execution Manager backlog.

Source Files Left is of the long type and is defined as a header MIM context type.

<any>: Any named MIM in the Inter Workflow Profile Editor.

All imported MIMs are automatically converted to the type string, regardless of the original type.
APL provides functions to convert strings to other data types. For further information about conversion
functions, see the APL Reference Guide.

Note! MIMs of list and map type cannot be imported.

For information about how to add and map named MIMs, see Section 14.5.3.3, “Profile
Configuration”.

14.5.4.1.3.2. Accesses

The agent does not itself access any MIM resources.

14.5.4.1.4. Agent Message Events

• Ready with batch: name

Reported when a batch is finished and then sent on to the subsequent agent.

• Merging batch: name

Reported during batch merging.

• Last merged batch: name

Reported upon activation, and shortly before the merging of a new batch is started.

• Batch cancelled: name

Reported when a batch has been cancelled and routed to ECS.

14.5.4.1.5. Debug Events

There are no debug events for this agent.

14.5.4.2. Real-Time Workflow - Configuration


To configure the collecting Inter Workflow agent in the workflow editor, either double-click the agent
icon, or right-click it and select Configuration.... The Configuration dialog box opens.

Figure 493. Inter Workflow Collection Agent Real-Time Configuration


Profile: Click the Browse button and select a profile that you want assigned to the agent.

All workflows in the same workflow configuration can use separate Inter Workflow profiles, if that
is preferred. In order to do that, the profile must be set to Default in the Workflow Table tab found
in the Workflow Properties dialog. After that, each workflow in the table can be appointed different
profiles.

14.5.4.2.1. Transaction Behavior

Although the real-time Inter Workflow collecting agent is not transaction safe, it checks if the workflow
queue is empty prior to removing the current batch from the storage. If the agent is stopped while there
is still data in the workflow queue, the last batch will be collected again once the agent becomes active.

14.5.4.2.2. Introspection

The agent generates data types according to the decoder configuration.

14.5.4.2.3. Meta Information Model

For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.5.4.2.3.1. Publishes

<any>: Any named MIM in the Inter Workflow Profile Editor.

All imported MIMs are automatically converted to the type string, regardless of the original type.
APL provides functions to convert strings to other data types. For further information about conversion
functions, see the APL Reference Guide.

Note! MIMs of list and map type cannot be imported.

For information about how to add and map named MIMs, see Section 14.5.3.3, “Profile
Configuration”.

14.5.4.2.3.2. Accesses

The agent does not access any MIM resources.

14.5.4.2.4. Agent Message Events

An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with batch: name

The real-time Inter Workflow collecting agent will not start to collect a batch that has been forwarded
by a batch workflow to the storage until the batch is completely forwarded. This message will be
sent when the batch has been completely forwarded.

14.5.4.2.5. Debug Events

There are no debug events for this agent.


14.5.5. Inter Workflow Forwarding Agents


The Inter Workflow forwarding agent is responsible for sending data to the storage server. The agent
can be part of both batch and real-time workflows. In the latter case, the user has to define batch
closing criteria for the data to be saved in the storage directory. This is based on UDR count, byte
count, or elapsed time.

14.5.5.1. Batch Workflow - Configuration


The Inter Workflow Forwarding agent configuration window is displayed when you right-click on the
agent and select Configuration..., or when you double-click on the agent.

Figure 494. Inter Workflow forwarding agent configuration - Batch workflow.

Profile: The name and most recent version of the Inter Workflow profile configuration (select Inter
Workflow Profile after clicking the New Configuration button in the Desktop).

All workflows in the same workflow configuration can use separate Inter Workflow profiles, if that
is preferred. In order to do that, the profile must be set to Default in the Workflow Table tab found
in the Workflow Properties dialog. After that, each workflow in the table can be appointed different
profiles.

Named MIM: The user-defined MIM names according to the definitions in the selected profile.

MIM Resource: Selected, existing MIM values of the workflow that the Named MIMs are mapped
to. This way, MIM values from this workflow are passed on to the collection workflow.

Produce Empty Batches: If enabled, empty files will be created even if no UDRs are forwarded from
a batch.

14.5.5.1.1. Transaction Behavior

14.5.5.1.1.1. Emits

The agent does not emit anything.

14.5.5.1.1.2. Retrieves

Begin Batch Creates and opens a target file in a temporary directory.


End Batch Moves the file from the temporary directory to the target directory.
Cancel Batch Deletes the current file from the temporary directory.

14.5.5.1.2. Introspection

The agent consumes bytearray types.


14.5.5.1.3. Meta Information Model

For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.5.5.1.3.1. Publishes

The agent does not publish any MIM parameters.

14.5.5.1.3.2. Accesses

The agent accesses various resources from the workflow and all its agents to configure the mapping
to the Named MIMs (that is, what MIMs to refer to the collection workflow).

14.5.5.1.4. Agent Message Events

• Inserted batch: filename

Reported when a file has been closed in the target directory (hence ready for collection). The message
is only used in batch workflows.

14.5.5.1.5. Debug Events

There are no debug events for this agent.

14.5.5.2. Real-Time Workflow - Configuration


The Inter Workflow forwarding agent configuration window is displayed when you right-click on the
agent and select Configuration..., or when you double-click on the agent.

Figure 495. Inter Workflow forwarding agent configuration - Realtime workflow.

Profile The name and most recent version of the Inter Workflow profile configuration (select Inter Workflow Profile after clicking the New Configuration button in the Desktop).

All workflows in the same workflow configuration can use separate Inter Workflow profiles, if that is preferred. In order to do that, the profile must be set to Default in the Workflow Table tab found in the Workflow Properties dialog. After that, each workflow in the table can be assigned a different profile.
Named MIM The user defined MIM names according to the definitions in the selected profile.


MIM Resource Selected, existing MIM values of the workflow that the Named MIMs are
mapped to. This way, MIM values from this workflow are passed on to the
collection workflow.
Volume (bytes) When the file size has reached the number of bytes entered in this field, the file will be closed as soon as the current bytearray has been included, and stored in the storage directory. This means that the file size may actually be larger than the set value since MediationZone® will not cut off any bytearrays. If nothing is entered, this file closing criterion will not be used.
Volume (UDRs) When the file contains the number of UDRs entered in this field, the file will be closed and stored in the storage directory. If nothing is entered, this file closing criterion will not be used.
Timer (sec) When the file has been open for the number of seconds entered in this field, the file will be closed and stored in the storage directory. If nothing is entered, this file closing criterion will not be used.
Enable Worker Thread Select this check box to enable worker thread functionality, allowing you to configure a queue size in order to improve performance and reduce the risk of blocking during heavy I/O.
Queue Size Enter the queue size you wish to have for the Worker Thread in this field.

Note! Since there are no natural batch boundaries within a real-time workflow, Volume and/or Timer criteria must be set so that the file currently receiving output data can be closed and a new one opened. If several file closing criteria have been selected, all of them apply, combined with a logical OR.

If the workflow is deactivated before any of the file closing criteria has been fulfilled, the UDRs currently stored in memory will be flushed to the current batch without further processing. Hence, the size of the last file cannot be predicted. In case of a crash, the content of the last batch cannot be predicted either. The error handling is taken care of by the Inter Workflow collection agent: if the file is corrupt, it is discarded and a message is logged in the System Log. The collector will automatically continue with the next batch in order.

14.5.5.2.1. Transaction Behavior

For information about the general MediationZone® transaction behavior, see Section 4.1.11.8,
“Transactions”.

14.5.5.2.1.1. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Begin Batch Creates and opens a target file in a temporary directory.
End Batch Moves the file from the temporary directory to the target directory.
Cancel Batch Deletes the current file from the temporary directory.

14.5.5.2.2. Introspection

The agent consumes bytearray types.

14.5.5.2.3. Meta Information Model

For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.


14.5.5.2.3.1. Publishes

MIM Parameter Description


File start time This MIM parameter contains the start time of the interworkflow file, published
in a Unix timestamp format (the total number of milliseconds since 1 Jan, 1970).
File end time This MIM parameter contains the end time of the interworkflow file, published
in a Unix timestamp format (the total number of milliseconds since 1 Jan, 1970).

14.5.5.2.3.2. Accesses

The agent accesses various resources from the workflow and all its agents to configure the mapping
to the Named MIMs (that is, what MIMs to refer to the collection workflow).

14.5.5.2.4. Agent Message Events

An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor. For further information about the agent message event type, see Section 5.5.14, “Agent
Event”.

• Inserted batch: filename

Reported when a file has been closed in the target directory (hence ready for collection). The message
is only used in batch workflows.

14.5.5.2.5. Debug Events

There are no debug events for this agent.

14.6. SCP Agents


14.6.1. Introduction
This section describes the SCP Collection- and Forwarding agents. These are extension agents of the
DigitalRoute® MediationZone® Platform.

14.6.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• The SSH2 and SCP protocols

14.6.2. Overview
The SCP protocol is intended for use with SSH servers that do not support the SFTP protocol. SCP works by issuing remote shell commands over the SSH connection, and therefore requires a server system that understands standard shell commands, such as the Unix command syntax.

14.6.3. Preparations
Prior to configuring an SCP agent, consider the following preparation notes:

• Server Identification

• Attributes


• Authentication

• Server Keys

14.6.3.1. Server Identification


The SCP agent uses a file with known host keys to validate the server identity during connection setup.
The location and naming of this file is managed through the property:

mz.ssh.known_hosts_file

It is set in executioncontext.xml to manage where the file is saved. The default value is
${mz.home}/etc/ssh/known_hosts.
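
As a minimal sketch, a non-default location could be set with a property element in executioncontext.xml, in the same form as the other Execution Context properties shown in this guide (the path below is only an example value):

<property name="mz.ssh.known_hosts_file" value="/opt/mz/etc/ssh/known_hosts"/>

As with other Execution Context properties, the Execution Context typically needs to be restarted for a changed value to take effect.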

The SSH implementation uses JCE (Java Cryptography Extension), which means that there may be limitations on key sizes for your Java distribution. This is usually not a problem. However, there are some cases where the unlimited strength cryptography policy is needed, for instance if the host RSA keys are larger than 2048 bits (depending on the SSH server configuration). This may require that you update the Java Platform that runs the Execution Context.

For unlimited strength cryptography on the Oracle JRE, download the JCE Unlimited Strength Jurisdiction Policy Files from:

http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

Replace the jar files in $JAVA_HOME/jre/lib/security with the files in this package.
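
For example, if the downloaded archive has been unpacked to /tmp (the unpack directory name is an assumption and may differ between package versions), the replacement could look as follows:

# Copy the unlimited strength policy jars over the default ones:
cp /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/
cp /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/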

The OpenJDK JRE does not require special handling of the JCE policy files for unlimited strength
cryptography.

14.6.3.2. Attributes
The SCP collection agent and the SCP forwarding agent share a number of common attributes. Both agents support the following cipher algorithms:

blowfish-cbc, cast128-cbc, twofish192-cbc, twofish256-cbc, twofish128-cbc, aes128-cbc, aes256-cbc, aes192-cbc, 3des-cbc.

14.6.3.3. Authentication
The SCP agents support authentication through either username/password or private key. Private keys can optionally be protected by a Key password. Most commonly used private key files can be imported into MediationZone®.

Typical command line syntax (most systems):

ssh-keygen -t <keyType> -f <directoryPath>

keyType The type of key to be generated. Both RSA and DSA key types are supported.
directoryPath Where to save the generated keys.


Example 133.

The private key may be created using the following command line:

> ssh-keygen -t rsa -f /tmp/keystore
Enter passphrase: xxxxxx
Enter same passphrase again: xxxxxx

Then the following is stated:

Your identification key has been saved in /tmp/keystore
Your public key has been saved in /tmp/keystore.pub

When the keys have been created, the private key may be imported into the SCP agent configuration.

Finally, on the SCP server host, append /tmp/keystore.pub to $HOME/.ssh/authorized_keys. If $HOME/.ssh/authorized_keys does not exist, it must be created.
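
On most server hosts this last step can be done from the command line; the following is a minimal sketch, assuming a standard OpenSSH layout and the /tmp/keystore.pub file from the example above:

# Create the .ssh directory if needed, append the public key, and set the
# permissions that sshd normally requires:
mkdir -p $HOME/.ssh
cat /tmp/keystore.pub >> $HOME/.ssh/authorized_keys
chmod 700 $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys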

14.6.3.4. Server Keys


The SSH protocol uses host verification to guard against attacks where an attacker manages to reroute
the TCP connection from the correct server to another machine. Since the password is sent directly
over the encrypted connection, it is critical for security that an incorrect public key is not accepted by
the client.

The agent uses a file with the known hosts and keys. It will accept the key supplied by the server if
either of the following is fulfilled:

1. The host is previously unknown. In this case the public key will be registered in the file.

2. The host is known and the public key matches the old data.


3. The host is known but has a new key, and the user has configured the agent to accept new keys. For further information, see the description of the Advanced tab.

If the host key changes for some reason, the file will have to be removed (or edited) in order for the
new key to be accepted.
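
As a sketch, assuming the default value of mz.ssh.known_hosts_file (where ${mz.home} stands for the MediationZone® installation directory), the stored keys can be cleared so that the new server key is registered on the next connection attempt:

# Remove the stored host keys (or edit the file to delete only the affected host entry):
rm ${mz.home}/etc/ssh/known_hosts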

14.6.4. SCP Collection Agent


The SCP collection agent collects files from a remote host and inserts them into a MediationZone®
workflow, using the SCP protocol over SSH2.

Upon activation, the agent establishes an SSH2 connection and an SCP session towards the remote
host. If this fails, additional hosts are tried, if configured. On success, the source directory on the remote
host is scanned for all files matching the current filter. In addition, the Filename Sequence service
may be utilized for further control of the matching files. All files found, will be fed one after the other
into the workflow.

When a file has been successfully processed by the workflow, the agent offers the possibility of moving,
renaming, removing or ignoring the original file. The agent can also automatically delete moved or
renamed files after a configurable number of days. In addition, the agent offers the possibility of de-
compressing (gzip) files after they have been collected, before they are inserted into the workflow.

When all the files have been successfully processed the agent stops, awaiting the next activation,
scheduled or manually initiated.

14.6.4.1. Configuration
To open the SCP collection agent configuration view from the workflow editor, either double-click
the agent, or right-click the agent and then select Configuration.

You can configure part of the parameters in the Filename Sequence or Sort Order service tabs. For further information, see Section 4.1.6.2.2, “Filename Sequence Tab” and Section 4.1.6.2.3, “Sort Order Tab”.

The Configuration view consists of the following tabs:

• Connection

• Source

• Advanced

14.6.4.1.1. Connection Tab

The Connection tab contains configuration settings related to the remote host and authentication.


Figure 496. The SCP Collection Agent Configuration - Connection Tab

Server Information Provider If your MediationZone® system is installed with the Multi Server functionality, you can configure the SCP agent to collect from more than one server. For further information, see the Multi Server File user's guide.
Host Primary host name or IP-address of the remote host to be connected. If a con-
nection cannot be established to this host, the Additional Hosts, specified in
the Advanced tab, are tried.
File System Type Type of file system on the remote host. This information is used to construct
the remote filenames.

• Unix - remote host using Unix file system. Default setting.

• Windows NT - remote host using Windows NT file system.

Authenticate With Choice of authentication mechanism. Both password and private key authentic-
ation are supported.
Username Username for an account on the remote host, enabling the SCP session to login.
Password Password related to the specified Username. This option only applies when
password authentication is enabled.
Private Key The Select... button will display a window where the private key may be inser-
ted. If the private key is protected by a passphrase, the passphrase must be
provided as well. This option only applies when private key authentication is
enabled. For further information, see Section 14.6.3.3, “Authentication”.
Enable Collection Retries Select this check box to enable repetitive attempts to connect and start a file transfer.

When this option is selected, the agent will attempt to connect to the host as many times as is stated in the Max Retries field described below. If the connection fails, a new attempt will be made after the number of seconds entered in the Retry Interval (s) field described below.
Retry Interval (s) Enter the time interval in seconds, between retries.

If a connection problem occurs, the actual time interval before the first attempt to reconnect will be the time set in the Timeout field in the Advanced tab plus the time set in the Retry Interval (s) field. For the remaining attempts, the actual time interval will be the number of seconds entered in this field.
Max Retries Enter the maximum number of retries to connect.

In case more than one connection attempt has been made, the number of used
retries will be reset as soon as a file transfer is completed successfully.

Note! This number does not include the original connection attempt.

14.6.4.1.2. Source Tab

The Source tab contains configurations related to the remote host, source directories and source files.
The configuration available can be modified through the choice of a Collection Strategy. The following
text describes the configuration options available when no custom strategy has been chosen.

Figure 497. The SCP Collection Agent Configuration - Source tab

Collection Strategy If there is more than one collection strategy available in the system, a Collection Strategy drop-down list will also be visible. For further information about the collection strategy, see Section 15, “Appendix VII - Collection Strategies”.
Directory Absolute pathname of the source directory on the remote host, where the source
files reside. The pathname might also be given relative to the home directory of
the Username account.
Filename Name of the source files on the remote host. Regular expressions according to Java syntax apply. For further information, see http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html.


Example 134.

To match all filenames beginning with TTFILES, type: TTFILES.*.

Compression Compression type of the source files. Determines whether the agent will decompress
the files before passing them on in the workflow or not.

• No Compression - the agent will not decompress the files.

• Gzip - the agent decompresses the files using gzip.

Move to Temporary Directory If enabled, the source files will be moved to the automatically created subdirectory DR_TMP_DIR in the source directory, prior to collection. This option supports safe collection of a source file reusing the same name.
Append Suffix to Filename Enter the suffix that you want added to the file name prior to collecting it.

Important! Before you execute your workflow, make sure that none of the
file names in the collection directory include this suffix.

Inactive Source Warning (h) If enabled, when the configured number of hours have passed without any file being available for collection, a warning message (event) will appear in the System Log and Event Area:

The source has been idle for more than <n> hours, the last inserted file is <file>.

Move to If enabled, the source files will be moved from the source directory (or from the
directory DR_TMP_DIR, if using Move Before Collecting) after collection, to
the directory specified in the Destination field. If Prefix or Suffix are set, the file
will be renamed as well.

If a file with the same filename already exists in the target directory, this file will be overwritten and the workflow will not abort.

Destination Absolute pathname of the directory on the remote host into which the source files
will be moved after the collection. This field is only available if Move to is enabled.

The Directory has to be located in the same file system as the collected
files at the remote host. Also, absolute pathnames must be defined. Relative
pathnames cannot be used.

Prefix and Suffix Prefix and/or suffix that will be appended to the beginning and/or the end, respect-
ively, of the source files after the collection. This field is only available if Move
to or Rename is enabled.


If Rename is enabled, the source files will be renamed in the current direct-
ory (source or DR_TMP_DIR). Be sure not to assign a Prefix or Suffix,
giving files new names still matching the Filename Regular Expression.
That would cause the files to be collected over and over again.

Search and Replace To apply Search and Replace, select either Move to or Rename.

• Search: Enter the part of the filename that you want to replace.

• Replace: Enter the replacement text.

Search and Replace operate on your entries in a way that is similar to the Unix sed utility. The identified filenames are modified and forwarded to the following agent in the workflow.

This functionality enables you to perform advanced filename modifications as well:

• Use a regular expression in the Search entry to specify the part of the filename that you want to extract.

A regular expression that fails to match the original file name will abort the workflow.

• Enter Replace with characters and metacharacters that define the pattern and content of the replacement text.

Example 135. Search and Replace Examples

To rename the file file1.new to file1.old, use:

• Search: .new

• Replace: .old

To rename the file JAN2011_file to file_DONE, use:

• Search: ([A-Z]*[0-9]*)_([a-z]*)

• Replace: $2_DONE

Note that the search value divides the file name into two parts by using
brackets. The replace value applies the second part by using the place
holder $2.
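
Since the behavior mirrors the Unix sed utility, a Search and Replace pair can be sanity-checked on a command line before it is entered in the agent configuration. The following sketch reproduces the second example above with a sed that supports -E; it is only an illustration of the pattern, not part of the agent configuration:

> echo "JAN2011_file" | sed -E 's/([A-Z]*[0-9]*)_([a-z]*)/\2_DONE/'
file_DONE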

Keep (days) Number of days to keep moved or renamed source files on the remote host after
the collection. In order to delete the source files, the workflow has to be executed
(scheduled or manually) again, after the configured number of days.

Note, a date tag is added to the filename, determining when the file may be re-
moved. This field is only available if Move to or Rename is selected.


Rename If enabled, the source files will be renamed after the collection, remaining (or
moved back from the directory DR_TMP_DIR, if using Move Before Collecting)
in the source directory from which they were collected.
Remove If enabled, the source files will be removed from the source directory (or from the
directory DR_TMP_DIR, if using Move Before Collecting), after the collection.
Ignore If enabled, the source files will remain in the source directory after the collection.
This option is not available if Move Before Collecting is enabled.

14.6.4.1.3. Advanced Tab

The Advanced tab contains configurations related to more specific use of the SCP service.

Figure 498. The SCP Collection Agent Configuration - Advanced Tab

Port The port number the SCP service will use on the remote host.
Timeout (s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Accept New Host Keys If selected, the agent overwrites the existing host key when the host is represented with a new key. The default behavior is to abort when the key mismatches.

Selecting this option causes a security risk since the agent will accept new keys regardless of whether they belong to another machine.

Enable Key Re-Exchange Used to enable and disable automatic re-exchange of session keys during ongoing connections. This can be useful if you have long lived sessions, since you may experience connection problems for some SFTP servers if one of the sides initiates a key re-exchange during the session.
Additional Hosts List of additional host names or IP-addresses that may be used to establish a
connection. These hosts are tried, in sequence from top to bottom, if the agents
fail to connect to the remote host set in their Connection tabs.

Use the Add, Edit, Remove, Move up and Move down buttons to configure
the host list.


14.6.4.2. Transaction Behavior


This section includes information about the SCP collection agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

14.6.4.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Will be emitted just before the first byte of each collected file is fed into a workflow.
End Batch Will be emitted just after the last byte of each collected file has been fed into the system.

14.6.4.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

If the Cancel Batch behavior defined on workflow level is configured to abort the workflow, the agent will never receive the last Cancel Batch message. In this situation, ECS will not be involved, and the file will not be moved but will be left in its current place.

APL code where Hint End Batch is followed by a Cancel Batch will always result in a workflow abort. Make sure to design the APL code to evaluate the Cancel Batch criteria first, to avoid this sort of behavior.

Hint End Batch If a Hint End Batch message is received, the collector splits the batch at the end of
the current processed block (as received from the server), provided that no UDR is
split. If the block end occurs within a UDR, the batch will be split at the end of the
preceding UDR.

After a batch split, the collector emits an End Batch Message, followed by a Begin
Batch message (provided that there is more data in the subsequent block).

14.6.4.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

14.6.4.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.6.4.4.1. Publishes

MIM Parameter Description


Source Filenames This MIM parameter contains a list of names of the files that are about to be
collected from the current collection directory.


When the agent collects from multiple directories, the value of this
parameter is cleared after collection of each directory. Then, the MIM
value is updated with the listing of the next directory.

Source Filenames is of the list<any> type and is defined as a header MIM context type.
File Retrieval Timestamp This MIM parameter contains a timestamp, indicating when the file transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.
Source File Count This MIM parameter contains the number of files that were available to this
instance for collection at startup. The value is static throughout the execution
of the workflow, even if more files arrive during the execution. The new files
will not be collected until the next execution.

Source File Count is of the long type and is defined as a global MIM
context type.
Source Filename This MIM parameter contains the name of the currently processed file, as
defined at the source.

Source Filename is of the string type and is defined as a header MIM context type.
Source Files Left This parameter contains the number of source files that are yet to be collected.
This is the number that appears in the Execution Manager backlog.

Source Files Left is of the long type and is defined as a header MIM
context type.
Source Host This MIM parameter contains the name of the host from which files are collec-
ted, as defined in the Host field in the Connection tab.

Source Host is of the string type and is defined as a global MIM context
type.
Source Pathname This MIM parameter contains the path from where the currently processed file
was collected, as defined in the Source tab.

Source Pathname is of the string type and is defined as a global MIM context type.
Source Username This MIM parameter contains the login username to the host from which the
file was collected, as defined in the Connection tab.

Source Username is of the string type and is defined as a global MIM context type.

14.6.4.4.2. Accesses

The agent does not itself access any MIM resources.

14.6.4.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.


• Ready with file: filename

Reported along with the name of the source file that has been collected and inserted into the workflow.

• File cancelled: filename

Reported along with the name of the current file, each time a cancelBatch message is received. This
assumes the workflow has not aborted. For further information, see Section 14.6.5.4.2, “Retrieves”.

14.6.4.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

14.6.5. SCP Forwarding Agent


The SCP forwarding agent forwards files to a remote host using the SCP protocol over SSH2. Upon
activation, the agent establishes an SSH2 connection and an SCP session towards the remote host. On
failure, additional hosts are tried, if configured.

To ensure that downstream systems will not use the files until they are closed, they are maintained in
a temporary directory on the remote host until the endBatch message is received. This behavior is
also used for cancelBatch messages. If a Cancel Batch is received, file creation is cancelled.

14.6.5.1. Configuration
The SCP forwarding agent configuration window is displayed when you right-click the agent in a workflow and select Configuration..., or when you double-click the agent. Part of the configuration may be done in the Filename Template service tab described in Section 4.1.6.2.4, “Filename Template Tab”.

14.6.5.1.1. Connection Tab

Figure 499. The SCP Forwarding Agent Configuration - Connection Tab

For information about the Connection tab see Figure 496, “The SCP Collection Agent Configuration
- Connection Tab”.


14.6.5.1.2. Target Tab

Figure 500. The SCP Forwarding Agent Configuration - Target Tab

The Target tab contains configuration settings related to the remote host, target directories and target
files.

Input Type The agent can act on two input types. Depending on which one the agent is
configured to work with, the behavior will differ.

The default input type is bytearray, that is, the agent expects bytearrays. If nothing else is stated, the documentation refers to bytearray input.

If the input type is MultiForwardingUDR, the behavior is different. For further information about the agent's behavior with MultiForwardingUDR input, see Section 14.6.5.3, “MultiForwardingUDR Input”.
Directory Absolute pathname of the target directory on the remote host, where the forwarded
files will be placed. The pathname may also be given relative to the home direct-
ory of the user's account.

The files will be temporarily stored in the automatically created subdirectory DR_TMP_DIR in the target directory. When an End Batch message is received, the files are moved from the subdirectory to the target directory.
Create Directory Check to create the directory, or the directory structure, of the path that you
specify in Directory.

The directories are created when the workflow is executed.

Compression Compression type of the destination files. Determines whether the agent will
compress the output files as it writes them.

• No Compression - the agent will not compress the files.


• Gzip - the agent will compress the files using gzip.

Note that no extra extension will be appended to the target filenames, even
if compression is selected.

Produce Empty Files If enabled, the agent will create empty output files for empty batches rather than omitting those batches.
Handling of Already Existing Files Select the behavior of the agent when the file already exists. The alternatives are:

• Overwrite - The old file will be overwritten and a warning will be logged in the System Log.

• Add Suffix - If the file already exists, the suffix ".1" will be added. If this file also exists, the suffix ".2" will be tried instead, and so on.

• Abort - This is the default selection and is the option used for upgraded configurations, that is, workflows from an upgraded system.

Use Temporary Directory If this option is selected, the agent will move the file to a temporary directory before moving it to the target directory. After the whole file has been transferred to the target directory, and the endBatch message has been received, the temporary file is removed from the temporary directory.
Use Temporary File If there is no write access to the target directory and, hence, a temporary directory cannot be created, the agent can move the file to a temporary file that is stored directly in the target directory. After the whole file has been transferred, and the endBatch message has been received, the temporary file will be renamed.

The temporary filename is unique for every execution of the workflow. It consists of a workflow and agent ID, and a file number.
Abort Handling Select how to handle the file in case of cancelBatch or rollback, either Delete
Temporary File or Leave Temporary File.

When a workflow aborts, the file will not be removed until the next time
the workflow is run.


14.6.5.1.3. Advanced Tab

Figure 501. The SCP Forwarding Agent Configuration - Advanced Tab

The Advanced tab contains configurations related to more specific use of the SCP service, which might not be frequently utilized.

Port The port number the SCP service will use on the remote host.
Timeout (s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Accept New Host Keys If selected, the agent overwrites the existing host key when the host is represented with a new key. The default behavior is to abort when the key mismatches.

Selecting this option causes a security risk since the agent will accept new keys regardless of whether they belong to another machine.

Enable Key Re-Exchange Used to enable and disable automatic re-exchange of session keys during ongoing connections. This can be useful if you have long lived sessions, since you may experience connection problems for some SFTP servers if one of the sides initiates a key re-exchange during the session.
Additional Hosts List of additional host names or IP-addresses that may be used to establish a
connection. These hosts are tried, in sequence from top to bottom, if the agents
fail to connect to the remote host set in the Connection tab.

Use the Add, Edit, Remove, Move up and Move down buttons to configure
the host list.
Execute Select between the two options:

• Before Move: Execute the following command and its arguments prior to
transfer.


• After Move: Execute the following command and its arguments on the local
copy of the transferred UDR, after transfer.

Command Enter a command or a script


Arguments This field is optional. Each entered parameter value has to be separated from the
preceding value with a space.

The temporary filename is inserted as the second last parameter, and the final
filename is inserted as the last parameter, automatically. This means that if, for
instance, no parameter is given in the field, the arguments will be as follows:

$1=<temporary_filename> $2=<final_filename>

If three parameters are given in the Arguments field, the arguments are set as:

$1=<parameter_value_#1>
$2=<parameter_value_#2>
$3=<parameter_value_#3>
$4=<temporary_filename>
$5=<final_filename>
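
As an illustration of how the automatically appended arguments can be consumed, the following is a minimal sketch of a script that could be referenced in the Command field. The script name and the logger call are assumptions, not part of the product, and the sketch assumes that the Arguments field is left empty:

#!/bin/sh
# post_transfer.sh - hypothetical post-transfer script.
# With an empty Arguments field the agent appends two values:
#   $1 = <temporary_filename>
#   $2 = <final_filename>
TMP_FILE="$1"
FINAL_FILE="$2"
logger "SCP forwarding agent wrote ${FINAL_FILE} (temporary name: ${TMP_FILE})"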

14.6.5.1.4. Backlog Tab

The Backlog tab contains configurations related to backlog functionality. If the backlog is not enabled, the files will be moved directly to their final destination when an end batch message is received. If the backlog is enabled, however, the files will first be moved to a directory called DR_READY and then to their final destination. For further information about transaction behavior, see Section 14.6.5.4.2, “Retrieves”.

When the backlog is initialized and when backlogged files are transferred, a note is registered in the System Log.

Figure 502. The SCP Forwarding Agent Configuration - Backlog Tab

Enable Backlog Enables backlog functionality. When not selected the agent's behavior is
similar to the standard SFTP forwarding agent.
Directory Base directory in which the agent will create sub directories to handle back-
logged files. Absolute or relative path names can be used.


Type Files is the maximum number of files allowed in the backlog folder. Bytes is the total size of the files that reside in the backlog folder. If a limit is exceeded, the workflow will abort.
Size Enter the maximum number of files or bytes that the backlog folder can contain.
Processing Order Determines the order in which the backlogged data will be processed once the connection is re-established. Select between First In First Out (FIFO) and Last In First Out (LIFO).
Duplicate File Handling Specifies the behavior if a file with the same file name as the one being transferred is detected. The options are Abort or Overwrite, and the action is taken both when a file is transferred to the target directory and when it is transferred to the backlog.

14.6.5.2. Memory Management


A global memory buffer will be allocated per Execution Context. The size of the buffer is specified
by using a property in the Execution Context's configuration file located in mzhome/etc.

Note that this global backlog memory buffer is used and shared by this and any other forwarding agent
that transfers files to a remote server. The same memory buffer is used for all ongoing transactions on
the same execution context.

When several workflows are scheduled to run simultaneously, and the forwarding agents are assigned the backlog function, there is a risk that the buffer may be too small. In such a case, it is recommended that you increase the value of this property.

Example 136.

A possible configuration for a maximum memory of 20 MB is shown here:

<property name="mz.forwarding.backlog.max_memory" value="20"/>

Note that the EC must be restarted for the property to apply.

If no property is set the default value of 10 MB will be used. The amount allocated will be printed out
in the Execution Context's log file. This memory will not affect the Java heap size and is used by the
agent when holding a copy of the file being transferred.

14.6.5.3. MultiForwardingUDR Input


When the agent is set to use MultiForwardingUDR input, it accepts input of the UDR type MultiFor-
wardingUDR declared in the FNT folder. The declaration follows:

internal MultiForwardingUDR {
// Entire file content
bytearray content;
// Target filename and directory
FNTUDR fntSpecification;
};

Each received MultiForwardingUDR is written to the file determined by its target filename. The output filename and path are specified by the fntSpecification field. When the UDRs are received, they are written to temporary files in the DR_TMP_DIR directory situated in the root output folder. The files are moved to their final destination when an end batch message is received. A runtime error will occur if any of the fields has a null value or if the path is invalid on the target file system.

A UDR of the type MultiForwardingUDR whose target filename is not identical to that of its predecessor is saved in a new output file.


After a target filename that is not identical to its predecessor is saved, you cannot use the first filename again. For example, saving filename B after saving filename A prevents you from using A again. Instead, you should first save all the A filenames, then all the B filenames, and so forth.

Non-existing directories will be created if the Create Non-Existing Directories checkbox under the Filename Template tab is checked; if it is not, a runtime error will occur.

When MultiForwardingUDRs are expected, configuration options in the Filename Template that refer to bytearray input will be ignored. For information about the Filename Template, see Section 4.1.6.2.4, “Filename Template Tab”.

Example 137.

This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of type MultiForwardingUDRs.

import ultra.FNT;

MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent){
//Create the FNTUDR
FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir);
fntAddDirDelimiter(fntudr);//Add a directory
fntAddString(fntudr, file);//Add a file

MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;

return multiForwardingUDR;
}

consume {

bytearray file1Content;
strToBA (file1Content, "file nr 1 content");

bytearray file2Content;
strToBA (file2Content, "file nr 2 content");

//Send MultiForwardingUDRs to the forwarding agent


udrRoute(createMultiForwardingUDR
("dir1", "file1", file1Content));
udrRoute(createMultiForwardingUDR
("dir2", "file2", file2Content));
}

The Analysis agent in the example above will send two MultiForwardingUDRs to the forwarding agent. Two files with different contents will be placed in two separate subfolders in the user-defined directory. The Create Non-Existing Directories check box under the Filename Template tab in the configuration of the forwarding agent must be checked if the directories do not exist.


14.6.5.4. Transaction Behavior


This section includes information about the SCP forwarding agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

14.6.5.4.1. Emits

None.

14.6.5.4.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Begin Batch When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if not already created. Then, a target file is created
and opened in the temporary directory.
End Batch When an End Batch message is received, the target file in DR_TMP_DIR is closed
and, finally, the file is moved from the temporary directory to the target directory.

If backlog functionality is enabled an additional step is taken where the file is moved
from DR_TMP_DIR to DR_READY and then to the target directory. If the last step
failed the file will be left in DR_READY.
Cancel Batch If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.

14.6.5.5. Introspection
The agent consumes bytearray or MultiForwardingUDR types.

14.6.5.6. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Sec-
tion 2.2.10, “Meta Information Model”.

14.6.5.6.1. Publishes

MIM Parameter Description


MultiForwardingUDR's FNTUDR This MIM parameter is only set when the agent expects input of MultiForwardingUDR type. The MIM value is a string representing the sub path from the output root directory on the target file system. The path is specified by the fntSpecification field of the last received MultiForwardingUDR. For further information about how to use input of MultiForwardingUDR type, see Section 14.6.5.3, “MultiForwardingUDR Input”.

This parameter is of the string type and is defined as a batch MIM context type.
File Transfer Timestamp This MIM parameter contains a timestamp, indicating when the target file is created in the temporary directory.

File Transfer Timestamp is of the date type and is defined as a trailer MIM context type.
Target Filename This MIM parameter contains the target filename, as defined in Filename Template.

Target Filename is of the string type and is defined as a trailer MIM context type.
Target File Size This MIM parameter provides the size of the file that has been written. The file is located on the server.

Target File Size is of the long type and is defined as a trailer MIM context type.
Target Hostname This MIM parameter contains the name of the target host, as defined in the Connection tab of the agent.

Target Hostname is of the string type and is defined as a global MIM context type.
Target Pathname This MIM parameter contains the path to the target file, as defined in the Target tab of the agent.

Target Pathname is of the string type and is defined as a global MIM context type.
Target Username This MIM parameter contains the login name of the user connecting to the remote host, as defined in the Connection tab of the agent.

Target Username is of the string type and is defined as a global MIM context type.
Connection Retries This MIM parameter contains the number of connection attempts made.

Connection Retries is of the integer type and is defined as a batch MIM context type.

14.6.5.6.2. Accesses

The agent accesses various resources from the Filename Template configuration to construct the target filename.

14.6.5.7. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Noti-
fication Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: filename

Reported, along with the name of the target file, when the file is successfully written to the target
directory.

14.6.5.8. Debug Events


Debug messages are dispatched when debug mode is used. During execution, the messages are shown in the Workflow Monitor, and they can also be handled according to the configuration made in the Event Notification Editor.

For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.


14.7. SFTP Agents


14.7.1. Introduction
This section describes the SFTP Collection and Forwarding agents. These agents are extension agents
of the DigitalRoute® MediationZone® Platform.

14.7.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• SSH2 and SFTP (http://tools.ietf.org/html/draft-ietf-secsh-filexfer-03)

• APL code

14.7.2. Preparations
Prior to configuring an SFTP agent, consider the following preparation notes:

• Server Identification

• Attributes

• Authentication

• Server Keys

14.7.2.1. Server Identification


The SFTP agent uses a file with known host keys to validate the server identity during connection
setup. The location and naming of this file is managed through the property:

mz.ssh.known_hosts_file

It is set in executioncontext.xml to manage where the file is saved. The default value is
${mz.home}/etc/ssh/known_hosts.

The SSH implementation uses JCE (Java Cryptography Extension), which means that there may be limitations on key sizes for your Java distribution. This is usually not a problem. However, there are some cases where the unlimited strength cryptography policy is needed, for instance if the host RSA keys are larger than 2048 bits (depending on the SSH server configuration). This may require that you update the Java Platform that runs the Execution Context.

For unlimited strength cryptography on the Oracle JRE, download the JCE Unlimited Strength Jurisdiction Policy Files from:

http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

Replace the jar files in $JAVA_HOME/jre/lib/security with the files in this package.

The OpenJDK JRE does not require special handling of the JCE policy files for unlimited strength
cryptography.

14.7.2.2. Attributes
The SFTP collection agent and the SFTP forwarding agent share a number of common attributes. Both agents support the following cipher algorithms:

blowfish-cbc, cast128-cbc, twofish192-cbc, twofish256-cbc, twofish128-cbc, aes128-cbc, aes256-cbc, aes192-cbc, 3des-cbc.

14.7.2.3. Authentication
The SFTP agents support authentication through either username/password or private key. Private keys can optionally be protected by a Key password. Most commonly used private key files can be imported into MediationZone®.

Typical command line syntax (most systems):

ssh-keygen -t <keyType> -f <directoryPath>

keyType The type of key to be generated. Both RSA and DSA key types are supported.
directoryPath The directory in which you want to save the generated keys.


Example 138.

The private key may be created using the following command line:

> ssh-keygen -t rsa -f /tmp/keystore
Enter passphrase: xxxxxx
Enter same passphrase again: xxxxxx

Then the following is stated:

Your identification key has been saved in /tmp/keystore
Your public key has been saved in /tmp/keystore.pub

When the keys have been created, the private key may be imported into the SFTP agent configuration.

Finally, on the SFTP server host, append /tmp/keystore.pub to $HOME/.ssh/authorized_keys. If $HOME/.ssh/authorized_keys does not exist, it must be created.

14.7.2.4. Server Keys


The SSH protocol uses host verification as protection against attacks where an attacker manages to
reroute the TCP connection from the correct server to another machine. Since the password is sent
directly over the encrypted connection, it is critical for security that an incorrect public key is not ac-
cepted by the client.

The agent uses a file with the known hosts and keys. It will accept the key supplied by the server if
either of the following is fulfilled:

1. The host is previously unknown. In this case the public key will be registered in the file.

2. The host is known and the public key matches the old data.


3. The host is known but has a new key, and the user has configured the agent to accept new keys. For further information, see the Advanced tab.

If the host key changes for some reason, the file will have to be removed (or edited) in order for the
new key to be accepted.

14.7.3. SFTP Collection Agent


The SFTP collection agent collects files from a remote host and inserts them into a MediationZone®
workflow, using the SFTP protocol over SSH2.

Upon activation, the agent establishes an SSH2 connection and an SFTP session towards the remote
host. If this fails, additional hosts are tried, if configured. On success, the source directory on the remote
host is scanned for all files matching the current filter. In addition, the Filename Sequence service
may be utilized for further control of the matching files. All files found, will be fed one after the other
into the workflow.

When a file has been successfully processed by the workflow, the agent offers the possibility of moving,
renaming, removing or ignoring the original file. The agent can also automatically delete moved or
renamed files after a configurable number of days. In addition, the agent offers the possibility of de-
compressing (gzip) files after they have been collected, before they are inserted into the workflow.

When all the files have been successfully processed, the agent stops, awaiting the next activation,
scheduled or manually initiated.

14.7.3.1. Configuration
To open the SFTP collection agent configuration view from the workflow editor, either double-click
on the agent, or right-click on the agent and select the Configuration option.

Note! You can configure part of the parameters in the Filename Sequence or Sort Order service
tabs. For further information, see Section 4.1.6.2.2, “Filename Sequence Tab” and Sec-
tion 4.1.6.2.3, “Sort Order Tab”.

The Configuration view consists of the following tabs:

• Connection

• Source

• Advanced

14.7.3.1.1. Connection Tab

The Connection tab contains configuration settings related to the remote host and authentication.


Figure 503. The SFTP Collection Agent Configuration - Connection Tab

Server Information Provider If your MediationZone® system is installed with the Multi Server functionality, you can configure the SFTP agent to collect from more than one server. For further information, see the Multi Server File user's guide.
Host Primary host name or IP-address of the remote host to be connected. If a connec-
tion cannot be established to this host, the Additional Hosts, specified in the
Advanced tab, are tried.
File System Type Type of file system on the remote host. This information is used to construct the
remote filenames.

• Unix - remote host using Unix file system. Default setting.

• Windows NT - remote host using Windows NT file system.

Authenticate With Choice of authentication mechanism. Both password and private key authentica-
tion are supported.
Username Username for an account on the remote host, enabling the SFTP session to login.
Password Password related to the specified Username. This option only applies when
password authentication is enabled.
Private Key When you select this option, a Select... button will appear, which opens a window
where the private key may be inserted. If the private key is protected by a pass-
phrase, the passphrase must be provided as well. This option only applies when
private key authentication is enabled. For further information, see Section 14.7.2.3,
“Authentication”.
Enable Collection Retries Select this check box to enable repetitive attempts to connect and start a file transfer.

When this option is selected, the agent will attempt to connect to the host as
many times as is stated in the Max Retries field described below. If the connec-
tion fails, a new attempt will be made after the number of seconds entered in the
Retry Interval (s) field described below.


Retry Interval (s) Enter the time interval, in seconds, between retries.

If a connection problem occurs, the actual time interval before the first attempt to reconnect will be the time set in the Timeout field in the Advanced tab plus the time set in the Retry Interval (s) field. For the remaining attempts, the actual time interval will be the number of seconds entered in this field.
Max Retries Enter the maximum number of retries to connect.

In case more than one connection attempt has been made, the number of used
retries will be reset as soon as a file transfer is completed successfully.

Note! This number does not include the original connection attempt.

Enable RESTART Retries Select this check box to enable the agent to send a RESTART command if the connection has been broken during a file transfer. The RESTART command contains information about where in the file you want to resume the file transfer.

When this option is selected, the agent will attempt to re-establish the connection,
and resume the file transfer from the point in the file stated in the RESTART
command, as many times as is entered in the Max Retries field described below.
When a connection has been re-established, a RESTART command will be sent
after the number of seconds entered in the Retry Interval (s) field described
below.

Note! The RESTART Retries settings will not work if you have selected
to decompress the files in the Source tab, see Section 14.7.3.1.2, “Source
Tab”.

Retry Interval (s) Enter the time interval, in seconds, you want to wait before initiating a restart in
this field. This time interval will be applied for all restart retries.

If a connection problem occurs, the actual time interval before the first attempt
to send a RESTART command will be the time set in the Timeout field in the
Advanced tab plus the time set in the Retry Interval (s) field. For the remaining
attempts, the actual time interval will be the number of seconds entered in this field.
Max Retries Enter the maximum number of restarts per file you want to allow.

In case more than one attempt to send the RESTART command has been made,
the number of used retries will be reset as soon as a file transfer is completed
successfully.

14.7.3.1.2. Source Tab

The Source tab contains configurations related to the remote host, source directories and source files.
The configuration available can be modified by creating and selecting a customized Collection Strategy.
The following text describes the configuration options available when no customized Collection
Strategy has been selected.


Figure 504. The SFTP Collection Agent Configuration - Source Tab

Collection Strategy If there is more than one collection strategy available in the system, a Collection Strategy drop-down list containing the available strategies will also be visible. For further information about the nature of the collection strategy, see Section 15, “Appendix VII - Collection Strategies”.
Directory Absolute pathname of the source directory on the remote host, where the source
files reside. The pathname might also be given relative to the home directory of
the Username account.
Filename Name of the source files on the remote host. Regular expressions according to Java syntax apply. For further information, see

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Example 139.

To match all filenames beginning with TTFILES, type: TTFILES.*.

Compression Compression type of the source files. Determines whether the agent will decom-
press the files before passing them on in the workflow or not.

• No Compression - the agent will not decompress the files.

• Gzip - the agent decompresses the files using gzip.

Move to Temporary Directory If enabled, the source files will be moved to the automatically created subdirectory DR_TMP_DIR in the source directory, prior to collection. This option supports safe collection of a source file reusing the same name.
Append Suffix to Filename Enter the suffix that you want added to the file name prior to collecting it.


Important! Before you execute your workflow, make sure that none of the
file names in the collection directory include this suffix.

Inactive Source Warning (h) If enabled, when the configured number of hours have passed without any file being available for collection, a warning message (event) will appear in the System Log and Event Area:

The source has been idle for more than <n> hours, the last inserted file is <file>.

Move to If enabled, the source files will be moved from the source directory (or from the
directory DR_TMP_DIR, if using Move Before Collecting) after collection, to
the directory specified in the Destination field. If Prefix or Suffix are set, the file
will be renamed as well.

Note! If a file with the same filename already exists in the target directory, this file will be overwritten and the workflow will not abort.

Destination Absolute pathname of the directory on the remote host into which the source files will be moved after the collection. This field is only available if Move to is enabled.

Note! The Directory has to be located in the same file system as the collected files at the remote host. Also, absolute pathnames must be defined. Relative pathnames cannot be used.

Prefix and Suffix Prefix and/or suffix that will be added to the beginning and/or the end, respectively, of the source files after the collection. These fields are only available if Move to or Rename is enabled.
Search and Replace
Note! To apply Search and Replace, select either Move to or Rename.

• Search: Enter the part of the filename that you want to replace.

• Replace: Enter the replacement text.

Search and Replace operate on your entries in a way that is similar to the Unix
sed utility. The identified filenames are modified and forwarded to the following
agent in the workflow.

This functionality enables you to perform advanced filename modifications as well:

• Use regular expression in the Search entry to specify the part of the filename
that you want to extract.

Note! A regular expression that fails to match the original file name will abort the workflow.

• Enter Replace with characters and metacharacters that define the pattern and
content of the replacement text.

Example 140. Search and Replace Examples

To rename the file file1.new to file1.old, use:

• Search: .new

• Replace: .old

To rename the file JAN2011_file to file_DONE, use:

• Search: ([A-Z]*[0-9]*)_([a-z]*)

• Replace: $2_DONE

Note that the search value divides the file name into two parts by using parentheses. The replace value refers to the second part by using the placeholder $2.

Keep (days) Number of days to keep moved or renamed source files on the remote host after
the collection. In order to delete the source files, the workflow has to be executed
(scheduled or manually) again, after the configured number of days.

Note! A date tag is added to the filename, determining when the file may
be removed. This field is only available if Move to or Rename is selected.

Rename If enabled, the source files will be renamed after the collection, remaining (or
moved back from the directory DR_TMP_DIR, if using Move Before Collecting)
in the source directory from which they were collected.

Note! Avoid creating new file names that still match the criteria for which files are to be collected by the agent, or else the files will be collected over and over again.

Remove If enabled, the source files will be removed from the source directory (or from
the directory DR_TMP_DIR, if using Move Before Collecting), after the collec-
tion.
Ignore If enabled, the source files will remain in the source directory after the collection.
This option is not available if Move Before Collecting is enabled.
Route FileReferenceUDR Select this check box if you want to forward the data to an SQL Loader agent. See the description of the SQL Loader agent for further information.

14.7.3.1.3. Advanced Tab

The Advanced tab contains configurations related to more specific use of the SFTP service.

Figure 505. The SFTP Collection Agent Configuration - Advanced Tab

Port The port number the SFTP service will use on the remote host.
Timeout (s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Accept New Host Keys If selected, the agent overwrites the existing host key when the host is represented with a new key. The default behavior is to abort when the key mismatches.

Warning! Selecting this option causes a security risk since the agent will accept new keys regardless of whether they belong to another machine.

Enable Key Re-Exchange Used to enable and disable automatic re-exchange of session keys during ongoing connections. This can be useful if you have long-lived sessions, since you may experience connection problems with some SFTP servers if one of the sides initiates a key re-exchange during the session.
Additional Hosts List of additional host names or IP-addresses that may be used to establish a
connection. These hosts are tried, in sequence from top to bottom, if the agents
fail to connect to the remote host set in their Connection tabs.

Use the Add, Edit, Remove, Move up and Move down buttons to configure
the host list.

14.7.3.2. Transaction Behavior


This section includes information about the SFTP collection agent transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

14.7.3.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch Will be emitted just before the first byte of each collected file is fed into a workflow.
End Batch Will be emitted just after the last byte of each collected file has been fed into the system.

14.7.3.2.2. Retrieves

The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.

Command Description
Cancel Batch If a Cancel Batch message is received, the agent sends the batch to ECS.

Note! If the Cancel Batch behavior defined on workflow level is configured to abort the workflow, the agent will never receive the last Cancel Batch message. In this situation, ECS will not be involved, and the file will not be moved but left in its current place.

APL code where Hint End Batch is followed by a Cancel Batch will always
result in workflow abort. Make sure to design the APL code to first evaluate
the Cancel Batch criteria to avoid this sort of behavior.

Hint End Batch If a Hint End Batch message is received, the collector splits the batch at the end of
the current processed block (as received from the server), provided that no UDR is
split. If the block end occurs within a UDR, the batch will be split at the end of the
preceding UDR.

After a batch split, the collector emits an End Batch Message, followed by a Begin
Batch message (provided that there is more data in the subsequent block).

14.7.3.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces bytearray types.

14.7.3.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

14.7.3.4.1. Publishes

MIM Parameter Description


Source Filenames This MIM parameter contains a list of file names of the files that are about to
be collected from the current collection directory.

Note! When the agent collects from multiple directories, the MIM value
is cleared after collection of each directory. Then, the MIM value is up-
dated with the listing of the next directory.

Source Filenames is of the list<any> type and is defined as a header MIM context type.
File Retrieval Timestamp This MIM parameter contains a timestamp, indicating when the file transfer starts.

File Retrieval Timestamp is of the date type and is defined as a header MIM context type.
Source File Count This MIM parameter contains the number of files that were available for collec-
tion at startup in this instance. The value is static throughout the execution of
the workflow, even if more files arrive during the execution. The new files will
not be collected until the next execution.

Source File Count is of the long type and is defined as a global MIM
context type.
Source Filename This MIM parameter contains the name of the currently processed file, as
defined at the source.

Source Filename is of the string type and is defined as a header MIM context type.
Source Files Left This MIM parameter contains the number of source files that are yet to be col-
lected. This is the number that appears in the Execution Manager backlog.

Source Files Left is of the long type and is defined as a header MIM
context type.
Source File Size This MIM parameter provides the size of the file that is about to be read. The
file is located on the server.

Source File Size is of the long type and is defined as a header MIM
context type.
Source Host This MIM parameter contains the name of the host from which files are collec-
ted, as defined in the Host field in the Connection tab.

Source Host is of the string type and is defined as a global MIM context
type.
Source Pathname This MIM parameter contains the path from where the currently processed file
was collected, as defined in the Directory field in the Source tab.

Source Pathname is of the string type and is defined as a global MIM context type.
Source Username This MIM parameter contains the login username to the host from which the
file was collected, as defined in the Username field in the Connection tab.

Source Username is of the string type and is defined as a global MIM context type.

14.7.3.4.2. Accesses

The agent does not itself access any MIM resources.

14.7.3.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Notification Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: filename

Reported along with the name of the source file that has been collected and inserted into the workflow.

• File cancelled: filename

Reported along with the name of the current file, each time a Cancel Batch message is received. This assumes the workflow is not aborted. For further information, see Section 14.7.3.2.2, “Retrieves”.

14.7.3.6. Debug Events


Debug messages are dispatched when debug is used. During execution, the messages are shown in the Workflow Monitor and can also be stated according to the configuration done in the Event Notification Editor.

For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.

14.7.4. SFTP Forwarding Agent


The SFTP forwarding agent forwards files to a remote host using the SFTP protocol over SSH2. Upon
activation, the agent establishes an SSH2 connection and an SFTP session towards the remote host.
On failure, additional hosts are tried, if configured.

To ensure that downstream systems will not use the files until they are closed, they are maintained in
a temporary directory on the remote host until the endBatch message is received. This behavior is
also used for cancelBatch messages. If a Cancel Batch is received, file creation is cancelled.

14.7.4.1. Configuration
The SFTP forwarding agent configuration window is displayed when right-clicking on the agent in a workflow and selecting the Configuration... option, or when double-clicking on the agent. Part of the configuration can be made in the Filename Template service tab described in Section 4.1.6.2.4, “Filename Template Tab”.

14.7.4.1.1. Connection Tab

Figure 506. The SFTP Forwarding Agent Configuration - Connection Tab

See description of the Connection tab in Figure 503, “The SFTP Collection Agent Configuration -
Connection Tab”.

14.7.4.1.2. Target Tab

Figure 507. The SFTP Forwarding Agent Configuration - Target Tab

The Target tab contains configuration settings related to the remote host, target directories and target
files.

Input Type The agent can act on two input types. Depending on which one the agent is
configured to work with, the behavior will differ.

The default input type is bytearray, that is, the agent expects bytearrays. If nothing else is stated, the documentation refers to input of bytearray.

If the input type is MultiForwardingUDR, the behavior is different. For further information about the agent's behavior with MultiForwardingUDR input, see Section 14.7.4.3, “MultiForwardingUDR Input”.
Directory Absolute pathname of the target directory on the remote host, where the forwarded files will be placed. The pathname may also be given relative to the home directory of the user's account.

The files will be temporarily stored in the automatically created subdirectory DR_TMP_DIR in the target directory. When an End Batch message is received, the files are moved from the subdirectory to the target directory.
Create Directory Select this check box to create the directory, or the directory structure, of the
path that you specify in Directory.

Note! The directories are created when the workflow is executed.

Compression Compression type of the destination files. Determines whether the agent will
compress the output files as it writes them.

• No Compression - the agent will not compress the files.

• Gzip - the agent will compress the files using gzip.

Note! No extra extension will be appended to the target filenames, even if compression is selected.

Produce Empty Files If you need to create empty files, check this setting.
Handling of Already Existing Files Select the behavior of the agent when the file already exists. The alternatives are:

• Overwrite - The old file will be overwritten and a warning will be logged in the System Log.

• Add Suffix - If the file already exists, suffix ".1" will be added. If this file also exists, suffix ".2" will be tried instead, and so on.

• Abort - This is the default selection and is the option used for upgraded configurations, that is, workflows from an upgraded system.

Use Temporary Directory If this option is selected, the agent will move the file to a temporary directory before moving it to the target directory. After the whole file has been transferred to the target directory, and the endBatch message has been received, the temporary file is removed from the temporary directory.
Use Temporary File If there is no write access to the target directory and, hence, a temporary directory cannot be created, the agent can move the file to a temporary file that is stored directly in the target directory. After the whole file has been transferred, and the endBatch message has been received, the temporary file will be renamed.

The temporary filename is unique for every execution of the workflow. It consists
of a workflow and agent ID, and a file number.
Abort Handling Select how to handle the file in case of cancelBatch or rollback, either Delete
Temporary File or Leave Temporary File.

Note! When a workflow aborts, the file will not be removed until the next
time the workflow is started.

14.7.4.1.3. Advanced Tab

Figure 508. The SFTP Forwarding Agent Configuration - Advanced Tab

The Advanced tab contains configurations related to more specific use of the SFTP service, which might not be frequently utilized.

Port The port number the SFTP service will use on the remote host.
Timeout (s) The maximum time, in seconds, to wait for response from the server. 0 (zero)
means to wait forever.
Accept New Host Keys If selected, the agent overwrites the existing host key when the host is represented with a new key. The default behavior is to abort when the key mismatches.

Selecting this option causes a security risk since the agent will accept new keys regardless of whether they belong to another machine.

Enable Key Re-Exchange Used to enable and disable automatic re-exchange of session keys during ongoing connections. This can be useful if you have long-lived sessions, since you may experience connection problems with some SFTP servers if one of the sides initiates a key re-exchange during the session.
Additional Hosts List of additional host names or IP-addresses that may be used to establish a
connection. These hosts are tried, in sequence from top to bottom, if the agents
fail to connect to the remote host set in their Connection tabs.

Use the Add, Edit, Remove, Move up and Move down buttons to configure the
host list.
Execute During transfer, a temporary file is written, which is then moved to the final file. Select whether the script should be executed on the transferred working copy or on the final file, using the following two options:

• Before Move: Execute the following command and its arguments on the tem-
porary file.

• After Move: Execute the following command and its arguments on the final
file.

Command Enter a command or a script. The script will be executed on the remote system from its working directory.
Argument This field is optional. Each entered parameter value has to be separated from the
preceding value with a space.

The temporary filename is inserted as the second to last parameter, and the final filename is inserted as the last parameter, automatically. This means that if, for instance, no parameter is given in the field, the arguments will be as follows:

$1=<temporary_filename> $2=<final_filename>

If three parameters are given in the Arguments field, the arguments are set as:

$1=<parameter_value_#1>
$2=<parameter_value_#2>
$3=<parameter_value_#3>
$4=<temporary_filename>
$5=<final_filename>

If After Move has been selected, the <temporary_filename> argument is excluded.
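
For instance, assuming a hypothetical script named post_process.sh entered in Command, the value 644 entered in Argument, and Before Move selected, the remote invocation corresponds to:

post_process.sh 644 <temporary_filename> <final_filename>

With After Move selected instead, the same configuration corresponds to:

post_process.sh 644 <final_filename>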

14.7.4.1.4. Backlog Tab

The Backlog tab contains configurations related to the backlog functionality. If the backlog is not enabled, the files will be moved directly to their final destination when an endBatch message is received. If the backlog is enabled, however, the files will first be moved to a directory called DR_READY and then to their final destination. For further information about the transaction behavior, see Section 14.7.4.4.2, “Retrieves”.

When backlog is initialized, and when backlogged files are transferred, a note is registered in the
System Log.

Figure 509. The SFTP Forwarding Agent Configuration - Backlog Tab

Enable Backlog Enables backlog functionality.


Directory Base directory in which the agent will create subdirectories to handle backlogged files. Absolute or relative path names can be used.
Type If you select the Files option, the Size field below will determine the maximum number of files allowed in the backlog folder. If you select the Bytes option, the Size field below will determine the total sum (size) of the files that reside in the backlog folder. If the limit is exceeded, the workflow will abort.
Size Enter the maximum number of files or bytes that the backlog folder can contain.
Processing Order Determines the order by which the backlogged data will be processed once
connection is reestablished. Select between First In First Out (FIFO) or Last
In First Out (LIFO).
Duplicate File Handling Specifies the behavior if a file with the same file name as the one being transferred is detected. The options are Abort or Overwrite, and the action is taken both when a file is transferred to the target directory and when it is transferred to the backlog.

14.7.4.2. Memory Management


A global memory buffer will be allocated per Execution Context. The size of the buffer is specified
by using a property in the Execution Context's configuration file located in mzhome/etc.

Note! This global backlog memory buffer is used and shared by this and any other forwarding
agent that transfers files to a remote server. The same memory buffer is used for all ongoing
transactions on the same execution context.

When several workflows are scheduled to run simultaneously, and the forwarding agents are assigned the backlog function, there is a risk that the buffer may be too small. In that case, it is recommended that you increase the value of this property.

Example 141.

A possible configuration for a maximum memory of 20 MB is shown here:

<property name="mz.forwarding.backlog.max_memory" value="20"/>

Note that the EC must be restarted for the property to apply.

If no property is set the default value of 10 MB will be used. The amount allocated will be printed out
in the Execution Context's log file. This memory will not affect the Java heap size and is used by the
agent when holding a copy of the file being transferred.

14.7.4.3. MultiForwardingUDR Input


When the agent is set to use MultiForwardingUDR input, it accepts input of the UDR type MultiForwardingUDR declared in the FNT folder. The declaration follows:

internal MultiForwardingUDR {
// Entire file content
byte[] content;
// Target filename and directory
FNTUDR fntSpecification;
};

Every received MultiForwardingUDR ends up in the file appropriate for its filename. The output filename and path are specified by the fntSpecification field. When the files are received, they are written to temporary files in the DR_TMP_DIR directory situated in the root output folder. The files are moved to their final destination when an end batch message is received. A runtime error will occur if any of the fields has a null value or if the path is invalid on the target file system.

A UDR of the type MultiForwardingUDR with a target filename that is not identical to its precedent
will be saved in a new output file.

Note! After a target filename that is not identical to its precedent has been saved, you cannot
use the first filename again. For example: Saving filename B after saving filename A, prevents
you from using A again. Instead, you should first save all the A filenames, then all the B file-
names, and so forth.

Non-existing directories will be created if the Create Non-Existing Directories check box under the Filename Template tab is selected; if not, a runtime error will occur. When MultiForwardingUDRs are expected, configuration options referring to bytearray input are ignored. For further information about Filename Template, see Section 4.1.6.2.4, “Filename Template Tab”.

Example 142.

This example shows the APL code used in an Analysis agent connected to a forwarding agent
expecting input of type MultiForwardingUDRs.

import ultra.FNT;

MultiForwardingUDR createMultiForwardingUDR
(string dir, string file, bytearray fileContent){
//Create the FNTUDR
FNTUDR fntudr = udrCreate(FNTUDR);
fntAddString(fntudr, dir);
fntAddDirDelimiter(fntudr);//Add a directory
fntAddString(fntudr, file);//Add a file

MultiForwardingUDR multiForwardingUDR =
udrCreate(MultiForwardingUDR);
multiForwardingUDR.fntSpecification = fntudr;
multiForwardingUDR.content = fileContent;

return multiForwardingUDR;
}

consume {

bytearray file1Content;
strToBA (file1Content, "file nr 1 content");

bytearray file2Content;
strToBA (file2Content, "file nr 2 content");

//Send MultiForwardingUDRs to the forwarding agent


udrRoute(createMultiForwardingUDR
("dir1", "file1", file1Content));
udrRoute(createMultiForwardingUDR
("dir2", "file2", file2Content));
}

The Analysis agent mentioned previously in the example will send two MultiForwardingUDRs
to the forwarding agent. Two files with different contents will be placed in two separate sub
folders in the root directory. If the directories do not exist, the Create Non-Existing Directories
check box in the forwarding agent Configuration dialog under the Filename Template tab must
be selected.

14.7.4.4. Transaction Behavior


This section includes information about the SFTP forwarding agent transaction behavior. For information about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

14.7.4.4.1. Emits

This agent does not emit anything.

14.7.4.4.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Begin Batch When a Begin Batch message is received, the temporary directory DR_TMP_DIR is
first created in the target directory, if not already created. Then, a target file is created
and opened in the temporary directory.
End Batch When an End Batch message is received, the target file in DR_TMP_DIR is closed
and, finally, the file is moved from the temporary directory to the target directory.

If backlog functionality is enabled, an additional step is taken where the file is moved
from DR_TMP_DIR to DR_READY and then to the target directory. If the last step
failed, the file will be left in DR_READY and marked as backlogged.
Cancel Batch If a Cancel Batch message is received, the target file is removed from the DR_TMP_DIR
directory.

14.7.4.5. Introspection
The agent consumes bytearray or MultiForwardingUDR types.

14.7.4.6. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

14.7.4.6.1. Publishes

MIM Value Description


MultiForwardingUDR's FNTUDR The MIM resource is only set when the agent expects input of MultiForwardingUDR type. The MIM value is a string representing the sub path from the output root directory on the target file system. The path is specified by the fntSpecification field of the last received MultiForwardingUDR. For further information about using input of MultiForwardingUDR type, see Section 14.7.4.3, “MultiForwardingUDR Input”.

This parameter is of the string type and is defined as a batch MIM context
type.
File Transfer Timestamp This MIM parameter contains a timestamp, indicating when the target file is created in the temporary directory.

File Transfer Timestamp is of the date type and is defined as a trailer MIM context type.
Target Filename This MIM parameter contains the target filename, as defined in Filename
Template.

Target Filename is of the string type and is defined as a trailer MIM context type.
Target File Size This MIM parameter contains the size of the file that has been written. The
file is located on the server.

Target File Size is of the long type and is defined as a trailer MIM
context type.
Target Hostname This MIM parameter contains the name of the target host, as defined in the
Target or Advanced tab of the agent.

Target Hostname is of the string type and is defined as a global MIM context type.
Target Pathname This MIM parameter contains the path to the target file, as defined in the
SFTP tab of the agent.

Target Pathname is of the string type and is defined as a global MIM context type.
Target Username This MIM parameter contains the login name of the user connecting to the
remote host, as defined in the SFTP tab of the agent.

Target Username is of the string type and is defined as a global MIM context type.

14.7.4.6.2. Accesses

The agent accesses various resources from the Filename Template configuration to construct the target filename.

14.7.4.7. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Notification Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: filename

Reported, along with the name of the target file, when the file is successfully written to the target
directory.

14.7.4.8. Debug Events


Debug messages are dispatched when debug is used. During execution, the messages are shown in the
Workflow Monitor and can also be stated according to the configuration done in the Event Notification
Editor.

For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.

14.8. SQL Agents


14.8.1. Introduction
This section describes the SQL Collection and Forwarding agents available as part of the DigitalRoute® MediationZone® Platform.

14.8.1.1. Prerequisites
MediationZone® supports a number of different database types, for example Oracle, SQL Server, and Derby. In this user guide, the user is assumed to know the specifics of the SQL syntax needed to retrieve the information from the database.

The reader of this information should also be familiar with:

• The MediationZone® Platform

• Structured Query Language (SQL)

• UDR structure and contents

14.8.2. SQL Collection Agent


The SQL collection agent collects rows from any database table and inserts them as UDRs into a MediationZone® workflow.

When the workflow is executed, the agent will execute an SQL query, based on the user configuration, and retrieve all rows matching the statement. For each row, a UDR is created and populated according to the assignments in the configuration window.

Supported database commands depend on the JDBC driver of the database.

14.8.2.1. Configuration
The SQL collection agent configuration window is displayed when you right-click the agent in a workflow and select Configuration..., or when you double-click the agent.

14.8.2.1.1. SQL Tab

Figure 510. SQL Collection agent configuration window, SQL tab.

The SQL tab contains configurations related to the SQL query used to retrieve information from the source database, as well as the UDR type to be created and how the UDRs are populated by the agent.

Database Profile name of the database that the agent will connect to and retrieve data from. For
further information about database profile setup, see Section 9.3, “Database Profile”.
SQL Expression The user enters an SQL statement specifying the query MediationZone® should send to the database.

By right-clicking in the pane and selecting MIM Assistance..., the MIM Browser appears.

The user can select a MIM value to be used in the SQL query. The value of the MIM will be used in the SQL query during execution. The name of the MIM value, for example "Workflow.Batch Count", will be displayed in blue as "$(Workflow.Batch Count)" in the text field.

There is support for stored procedures. When using the collection agent to produce output from the procedure, the JDBC support for output parameters is used.

The character "?" is used to mark an output parameter in the SQL statement in the
agent. An example of a procedure with one input argument and one output argument
could have a SQL statement looking like this:

CALL test_proc( $(Analysis_1.TestMIM), ? )

The procedure will be called and the value of the output parameter ("?") will be assigned
to the configured UDR field. When using output parameters only one UDR will be
produced in the batch.

The exact supported syntax for stored procedures varies between databases. For example
calling an Oracle function can be done via:

begin ? := test_func( ); end;

The syntax of the statement will not be validated in the GUI, but references to MIM values are validated. If an incorrect SQL statement is entered, this will generate an exception during runtime.

UDR Type Type of UDR mapped from the result set and routed into the workflow.

When selecting the Browse button next to the field the UDR Internal Format Browser
will open and one and only one UDR type can be selected.
UDR Fields The table represents the mapping from the result set, returned when executing the statement entered in the SQL field, to a specified Value in the UDR.
14.8.2.2. Transaction Behavior


14.8.2.2.1. Emits

The agent emits commands changing the state of the file currently processed.

Command Description
Begin Batch The agent will emit beginBatch before the first UDR from the result set is routed into
the workflow.
End Batch The agent will emit endBatch after the last row in the result set has been mapped to a
UDR and routed into the workflow.

14.8.2.2.2. Retrieves

The agent retrieves commands from other agents and based on them generates a state change of the
file currently processed.

Command Description
Hint End Batch When hintEndBatch is called, the agent will call endBatch followed by beginBatch (if more records exist in the result set). It will then continue to process the result set.
Cancel Batch Cancel Batch is not supported by the agent.

14.8.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces the UDR type selected from the UDR Type.

14.8.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

14.8.2.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event Notification Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with batch after X UDRs.

Reported when a complete batch is collected.

14.8.2.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• SQL: XXX

The debug message is sent when the SQL agent creates its SQL string to send to the database.

14.8.3. SQL Forwarding Agent


The SQL forwarding agent inserts UDR data into a database table according to your definitions of
mapping between UDR fields and database table columns.

The agent also enables you to populate database columns with MIM values either by using a plain
SQL statement, or by invoking a stored procedure that inserts the data.

Supported database commands depend on the JDBC driver of the database.

14.8.3.1. Configuration
You open the SQL forwarding agent configuration view from the workflow editor. In the workflow
template, either double-click the agent icon, or right-click it and select Configuration.

14.8.3.1.1. SQL Tab

The SQL tab contains configuration data that is related to the target database and the UDR Type.

Figure 511. The SQL Forwarding Agent Configuration View - SQL Tab

Database Profile defining the database that the agent is supposed to connect to and forward data to.

When selecting the Browse button next to the field it will open a browser where one
and only one database profile can be selected. For further information about database
profile setup, see Section 9.3, “Database Profile”.
UDR Type The UDR type the agent accesses as input type.

When selecting the Browse button next to the field it will open the UDR Internal
Format Browser and one and only one UDR type can be selected.
SQL Statement The user enters an SQL statement that MediationZone® should send to the database.

By right-clicking in the pane and selecting MIM Assistance..., the MIM Browser appears.

The user can select a MIM value to be used in the SQL query. The value of the MIM will be used in the SQL query during execution. The name of the MIM value, for example "Workflow.Batch Count", will be displayed in blue as "$(Workflow.Batch Count)" in the text field.

By right-clicking in the pane and selecting UDR Assistance..., the UDR Internal Format Browser appears.

The user can select a field from the UDR specified in the UDR Type selector. The name of the UDR field, for example "UDR.Fieldname", will be displayed in green as "$(UDR.Fieldname)" in the text field. If the input UDR type is changed after writing the SQL syntax, the GUI validation will fail (unless the different UDRs have identical field names). The field value will be used as an input variable in the SQL Statement in the same way as MIM values are.

There is support for stored procedures. When using the forwarding agent, a stored procedure is called via JDBC in the same way as a normal call.

The exact supported syntax for stored procedures varies between databases. An example of a procedure with two input arguments could have an SQL statement looking like this:

Example 143.

CALL test_proc($(UDR.field1), $(UDR.field2))

The syntax of the statement will not be validated in the GUI, but references to MIM values are validated. If an incorrect SQL statement is entered, this will generate an exception during runtime.

Commit Window Size The number of UDRs to be processed between each database commit command. This value may be used to tune the performance. If tables are small and contain no Binary Objects, the value may be set higher than the default. Default is 1000.

A number field where it is possible to enter an integer number. If the check box is enabled, the agent will call commit on the database after reaching the specified number of successful executions. It will also call commit when receiving endBatch. If the check box is disabled, the agent will only do a commit when receiving endBatch.
Route on SQL Exception Check to prevent the workflow from aborting when selected exceptions occur. Such exceptions are filtered by the rule that you specify in the Regular Expression Criteria editing pane. Instead of aborting the workflow due to these exceptions, the workflow proceeds to the agent to which you can route the selected exceptions.

Note! Since the error message contains linefeeds, the regular expression has to be adjusted accordingly.
Start the regular expression with "(?s)" to ignore linefeed, for example:

(?s).*ORA-001.*

Clear to abort the workflow on the occurrence of any exception.


Regular Expression Criteria Use the Java regular expression syntax convention when you enter the expression that selects the SQL error messages. The SQL errors that match these criteria enable the agent to identify the data that should be routed further along the workflow.

When the agent identifies erroneous data it generates an Agent Message Event. For further information, see Section 5.5.14, “Agent Event”.

MediationZone® specific database tables from the Platform database should never be utilized
as targets for output as this might cause severe damage to the system in terms of data corruption
that in turn might make the system unusable.
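
As a complement to the stored procedure example above, a plain INSERT statement is commonly used. Assuming a hypothetical target table cdr_out and an input UDR with the fields anum and duration, the SQL Statement could look like this:

INSERT INTO cdr_out (ANUM, DURATION) VALUES ($(UDR.anum), $(UDR.duration))

At runtime, the UDR field references are replaced with the values of the currently processed UDR, and one row is inserted per UDR.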

14.8.3.2. Handling Erroneous UDRs


The SQL forwarding agent encapsulates an erroneous UDR, along with the error message that describes the error, in a new UDR. This, in turn, enables you to process the faulty UDR, adjust the processing according to the error type, and prevent the workflow from aborting due to selected error types.

Example 144.

Consider the following workflow:

Figure 512. An SQL Forwarding Workflow

The SQL forwarding agent identifies the asciiSEQ_TI UDR as erroneous, and creates an errorUDR that wraps together the original UDR with the error message that was generated:

Figure 513. The Erroneous UDR Before and After SQL Forwarding

14.8.3.3. Transaction Behavior


14.8.3.3.1. Emits

None.

14.8.3.3.2. Retrieves

The database transaction in the SQL forwarding agent is not consistent with the MediationZone® batch transaction behavior, that is, the normal batch transaction safety is not guaranteed for this agent.

If a workflow aborts, the database transaction may have been partly or completely done; however, the input file will be reprocessed, which can consequently cause duplication of data if an INSERT statement is used in the forwarding agent.

14.8.3.4. Introspection
The introspection is the type of data an agent expects and delivers.

The agent consumes the selected UDR type.

14.8.3.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

14.8.3.6. Agent Message Event


There are no message events for this agent.

14.8.3.7. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• SQL: sql-statement

Example 145.

SQL: INSERT INTO test_table (NUM, DATA) VALUES (?, ?)

The debug message is sent when the SQL agent creates its SQL string to send to the database.

For further information about the agent debug event type, see Section 5.5.22, “Debug Event”.

14.9. TCP/IP Agents


14.9.1. Introduction
This section describes the TCP/IP agents. These agents are realtime extensions of the DigitalRoute®
MediationZone® Platform. The TCP/IP forwarding agent is listed among the processing agents in
Desktop while the TCP/IP collection agent is listed among the collection agents.

14.9.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

• TCP/IP

14.9.2. TCP/IP Forwarding Agent


14.9.2.1. Overview
The TCP/IP forwarding agent allows data to be distributed from a workflow, using the standard TCP/IP protocol. Several connections at a time are allowed. The agent can also send various UDRs back into the workflow. All handling of these UDRs is done through APL commands.
Figure 514. A TCP/IP forwarding workflow example.

14.9.2.1.1. TCPIP_FORW UDR Types

The UDR types used by the TCP/IP forwarding agent can be viewed in the UDR Internal Format Browser. To open the browser, open an APL Editor, right-click in the editing area, and select UDR Assistance.... The browser will then open.

Figure 515. TCPIPUDR

14.9.2.1.1.1. RemoteHostConfig

The RemoteHostConfig UDR contains the connection details for the remote host. This UDR is included
in all the other UDR types.

The following fields are included in the RemoteHostConfig:

Field Description
host (string) This field contains the hostname or IP address to the remote host.
port (int) This field contains the port to the remote host.

14.9.2.1.1.2. ConnectionRequestUDR

When the agent receives this UDR, it will try to establish a new connection, or close a connection to a remote host. The agent will then return a ConnectionStateUDR containing information about the current state of the connection.

The following fields are included in the ConnectionRequestUDR:

Field Description
closeConnection This field determines whether the request is for opening or closing
(boolean) a connection. If a new connection is to be made, this field will be
set to false, and if a connection is to be closed, this field will
be set to true.
remoteHost (RemoteHost- This is the RemoteHostConfig UDR containing the connection
Config (TPCIP)) details.

14.9.2.1.1.3. ConnectionStateUDR

The agent returns the ConnectionStateUDR when a ConnectionRequestUDR has been sent, as well as
in the event a connection goes down for some reason.

The following fields are included in the ConnectionStateUDR:

Field Description
connectionOpen (boolean) In case you have a valid connection (see the validAddress field below), this field indicates whether the connection is open or not, true for open and false for closed.
remoteHost (RemoteHost- This is the RemoteHostConfig UDR containing the connection
Config (TPCIP)) details.
validAddress (boolean) This field indicates if the connection details in the RemoteHost-
Config UDR were valid or not, true for valid and false for
invalid.

14.9.2.1.1.4. RequestUDR

When the agent receives a RequestUDR it will try to send the included bytearray to the remote host.

The following fields are included in the RequestUDR:

Field Description
data (bytearray) This field contains the actual data to be sent.
remoteHost (RemoteHostConfig This is the RemoteHostConfig UDR containing the con-
(TPCIP)) nection details.

14.9.2.1.1.5. ResponseUDR

If the TCP/IP forwarding agent has been configured to handle responses, it will return ResponseUDRs
to the workflow.

The following fields are included in the ResponseUDR:

Field Description
data (bytearray) This field contains the response.
remoteHost (RemoteHostConfig This is the RemoteHostConfig UDR containing the
(TPCIP)) connection details.

14.9.2.1.1.6. ErrorUDR

The agent will return the ErrorUDR in case a RequestUDR fails.

The following fields are included in the ErrorUDR:

Field Description
data (bytearray) This is original data from the RequestUDR. This can be used
for storing the data.
ErrorReason (string) This field contains the reason for the failure.
ErrorStackTrace (string) This field contains the stack trace from the failure. Note! This field should only be read if absolutely necessary, since it requires a large amount of CPU.
remoteHost (RemoteHostCon- This is the RemoteHostConfig UDR containing the connection
fig (TPCIP)) details.
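
As a complement to the field descriptions above, the following APL sketch outlines how an Analysis agent could drive the TCP/IP forwarding agent. It is a minimal sketch only: the host address, port, payload, and the route name "tcpip_forwarder" are assumptions used for illustration, and any import needed to access the TCP/IP UDR types is omitted since the exact type location is shown in the UDR Internal Format Browser.

consume {
    // Connection details - host and port are example values only
    RemoteHostConfig remote = udrCreate(RemoteHostConfig);
    remote.host = "192.0.2.10";
    remote.port = 5000;

    // Ask the forwarding agent to open a connection to the remote host
    ConnectionRequestUDR connReq = udrCreate(ConnectionRequestUDR);
    connReq.closeConnection = false;
    connReq.remoteHost = remote;
    udrRoute(connReq, "tcpip_forwarder");

    // Send a payload over the connection. In a real workflow this would
    // normally be done only after a ConnectionStateUDR with connectionOpen
    // set to true has been received back from the agent.
    bytearray payload;
    strToBA(payload, "example data");
    RequestUDR req = udrCreate(RequestUDR);
    req.data = payload;
    req.remoteHost = remote;
    udrRoute(req, "tcpip_forwarder");
}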

14.9.2.2. Configuration
The TCP/IP forwarding agent configuration window is displayed when double-clicking on the agent in a workflow, or when right-clicking on the agent and selecting Configuration...

14.9.2.2.1. General Tab

Figure 516. TCP/IP forwarding agent configuration window, General tab

Host The IP address or hostname to which the agent will bind.


Port The port number to which the data will be sent. Make sure the port is not used
by other applications.
Receive Response Select this check box if you want the agent to be able to receive responses for
requests.

Note! The visual string containing the Host and Port will act as an identifier for the connection.

14.9.2.2.2. Advanced properties Tab

In the Advanced tab you can configure additional properties for optimizing the performance of the
TCP/IP forwarding agent.

Figure 517. TCP/IP forwarding agent configuration window, Advanced properties tab

See the text in the tab for further information about the properties.

14.9.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent expects ConnectionRequestUDRs and RequestUDRs.

The agent returns ConnectionStateUDRs, ResponseUDRs and ErrorUDRs.

14.9.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

14.9.2.5. Agent Message Events


There are no message events for this agent.

14.9.2.6. Agent Debug Events


There are no debug events for this agent.

14.9.3. TCP/IP Collection Agent


14.9.3.1. Overview
The TCP/IP Collection agent allows data to be collected and inserted into a workflow, using the standard TCP/IP protocol. Several connections at a time are allowed. It is also possible to send responses back to the source, in the form of a bytearray or, in the case of several connections, as a UDR containing a response field. All response handling is done through APL commands.

Figure 518. A TCP/IP workflow may be configured to send responses to the source.

Upon activation, the collector binds to the defined port and awaits connections to be accepted. Note
the absence of a Decoder in the workflows. The collector has built-in decoding functionality, supporting
any format as defined in the Ultra Format Editor.

14.9.3.1.1. TCPIP Related UDR Type

The UDR type created by default in the TCPIP agent can be viewed in the UDR Internal Format Browser. To open the browser, open an APL Editor, right-click in the editing area, and select UDR Assistance...; the browser opens.

Figure 519. TCPIPUDR

Field Description
RemoteIP(ipaddress) The IP address of the client.
RemotePort(int) The port through which the agent connects
to the client.
response(bytearray) The data that the agent sends back to the
client.
SequenceNumber(long) A per-connection unique number that is generated by the TCPIP agent. This number enables you to follow the order in which the UDRs are collected. The agent counter is reset whenever a connection with the agent is established.

• The UDR fields RemoteIP, RemotePort, and SequenceNumber are accessible from
the workflow configuration only if the TCPIP agent is configured with a decoder that extends
the built-in TCPIP format. For further information see Decoder in Section 14.9.3.2.2, “Decoder
Tab”.

• The TCPIP UDR cannot be cloned and the socket connection will not be initialized if a
cloning is attempted. It is therefore recommended that you initialize every UDR from the
decoder, and then route it into the workflow.

14.9.3.2. Configuration
The TCP/IP Collection agent configuration window is displayed when you double-click the agent in a workflow, or right-click it and select Configuration...

14.9.3.2.1. TCP/IP Tab

Figure 520. TCP/IP Collection agent configuration window, TCP/IP tab.

Host The IP address or hostname to which the TCP collector will bind. If the host is bound, the port must also be bound. If left empty, the TCP collector binds to all IP addresses available on the system.

Note! This can be dynamically updated.

Port The port number from which the data is received. Make sure the port is not used
by other applications.

The port can also be dynamically updated while the agent is running. Double-
click the agent in the Workflow Editor in monitor mode and modify it. To
trigger the agent to use the new port the workflow must be saved. For further
information about updating agent configurations while the workflow is
running, see Section 2.2.2, “Dynamic Update”.

Allow Multiple Connections If enabled, several TCP/IP connections are allowed simultaneously. If disabled, only one at a time is allowed.
Number of Connections Allowed If Allow Multiple Connections is enabled, the maximum number of simultaneous connections is specified as a number between 2 and 65000.
Send Response If enabled, the collector will be able to send a response back to the source. If Allow
Multiple Connections is enabled, the collector expects a UDR extended with the
default TCPIPUDR as reply. If disabled, it expects a bytearray.

Drag and release in the opposite direction in the workflow to create a response route between the agents. The TCP/IP agent must be connected to an agent utilizing APL, since responses are created with APL commands.

Figure 521.

For a description of the differences between a single and a multiple connection setup, see Section 14.9.4, “An Example”.

14.9.3.2.2. Decoder Tab

Figure 522. TCP/IP Collection agent configuration window, Decoder tab.

Decoder List holding available decoders introduced via the Ultra Format Editor. The decoders
are named according to the following syntax:

<decoder> (<module>)

The option MZ Format Tagged UDRs indicates that the expected UDRs are stored in one of the built-in MediationZone® formats. If the compressed format is used, the decoder will automatically detect this. Select this option to make the Tagged UDR Type list accessible for configuration.
Tagged UDR Type List of available internal UDR formats stored in the Ultra and Code servers. The formats are named according to the following syntax:

<internal> (<module>)

If the decoder is to reprocess UDRs of an internal format, MZ Format Tagged UDRs has to be selected in the Decoder list to enable this list. Once enabled, the internal format may be selected.
Full Decode If enabled, the UDR will be fully decoded before inserted into the workflow. This may
have a negative impact on performance since all fields may not be accessed in the
workflow, making decoding of all fields unnecessary.

If disabled (default), the amount of work needed for decoding is minimized, using a
"lazy" method decoding sub fields. This means the actual decoding work may not be
done until later in the workflow, when the field values are accessed for the first time.
Corrupt data (that is, data for which decoding fails) may not be detected during the
decoding stage and could cause the UDR to be discarded at a later processing stage.

14.9.3.3. The TCPIP Format


In case Allow Multiple Connections and Send Response are selected, UDRs are expected as reply
back to the collector from the APL agent. Extend the internal format to contain the built-in TCPIP
format.

external my_ext sequential {


// field definitions
int type : static_size(1);
ascii Anum : static_size(8);
ascii Bnum : terminated_by(0xA);
};

internal TCP_Int :
extends_class( "com.digitalroute.wfc.tcpipcoll.TCPIPUDR" ) {
};


in_map TCP_InMap :
external( my_ext ),
internal( TCP_Int ),
target_internal( my_TCP_TI ) {
automatic;
};

decoder myDecoder : in_map( TCP_InMap );

14.9.3.4. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces UDRs in accordance with the Decoder tab. If Send Response is enabled, the agent
consumes bytearray types for single connections and TCPIPUDR for multiple connections.

14.9.3.5. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see Section 2.2.10, “Meta Information Model”.

The agent does not publish nor access any MIM parameters.

14.9.3.6. Agent Message Events


There are no message events for this agent.

14.9.3.7. Agent Debug Events


There are no debug events for this agent.

14.9.4. An Example
A workflow containing a TCP/IP Collection agent can be set up to send responses back to the source
from which the incoming data was received. This requires an APL agent (Analysis or Aggregation)
to be part of the workflow.

Figure 523. A TCP/IP workflow can be configured to send responses to the source.

To illustrate how such a workflow is defined, an example is given where an incoming UDR is validated, resulting in either the field anum or a sequence number being sent back as a reply message to the source. Depending on whether one or several TCP/IP connections are allowed, the format of the reply message sent from the Analysis agent differs:

Single Connection A bytearray is sent back as reply.


Multiple Connections A UDR, extended with the built-in TCPIPUDR format. The reply message
must be inserted in the response field (a bytearray).

To keep the example as simple as possible, the valid records are not processed. Usually, no reply
is sent back until the UDRs are fully validated and processed. The example aims to focus on
the response handling only.

14.9.4.1. Single TCP/IP Connection


Disabling the Allow Multiple Connections option in the TCP/IP Collection agent will allow only one TCP/IP session at a time. If another attempt to create a connection is made while a connection already exists, the new connection will be rejected.

14.9.4.1.1. The TCP/IP Collection Agent

In order to be able to send reply messages, Send Response must be enabled in the configuration window
of the agent. Drop an Analysis agent in the workflow and connect it to the TCP/IP agent. Drag and
release in the opposite direction to create a response route in the workflow.

Also, an Ultra format for decoding of incoming data must be defined. Note that no format has to be defined for the response - it will be sent as a bytearray from the Analysis agent.

Figure 524. TCP/IP agent configuration.

14.9.4.1.2. The Analysis Agent

The Analysis agent handles both the validation of the incoming records, as well as sending the response.
If the field duration is less than or equal to zero, the UDR is discarded and the field anum, in the form of a bytearray, is sent back as response. All other UDRs are routed to the next agent in turn, and instead a sequence number is sent as response.

Note the use of the synchronized keyword. Updating a global variable within a real-time workflow must be done within a synchronized function. This is to ensure consistency between threads. By default, a real-time workflow utilizes several threads.

int seqNum;

synchronized int createSeqNum() {


seqNum = seqNum + 1;
return seqNum;
}

consume {
bytearray reply;

if ( input.duration <= 0 ) {
strToBA( reply, input.anum );
udrRoute( reply, "response" );
} else {
strToBA( reply, (string)createSeqNum() );
udrRoute( reply, "response" );

udrRoute( input, "UDRs" );
}
}

14.9.4.2. Multiple TCP/IP Connections


Enabling the Allow Multiple Connections option in the TCP/IP Collection agent will allow several simultaneous TCP/IP sessions. If an attempt to open a new connection is made when the maximum number of connections is already open, the new connection will be refused.

14.9.4.2.1. The TCP/IP Collection Agent

In order to be able to send reply messages, Send Response must be enabled in the configuration window
of the agent. An additional connection point will appear on the agent, to which an Analysis agent is
to be linked. Also, an Ultra format for the decoding of the incoming data must be defined. This format
must contain the built-in TCPIP format. See the section below.

Figure 525. TCP/IP agent configuration.

14.9.4.2.2. The Format Definition

The incoming external format must be extended with the TCPIPUDR format.

external asciiSEQ_ext sequential {
    ascii callId: terminated_by(":");
    int seqNum: terminated_by(",");
    ascii anum: terminated_by(",");
    ascii bnum: terminated_by(",");
    ascii causeForOutput: terminated_by(",");
    int duration: terminated_by(0xA);
};

internal TCP_Int :
    extends_class( "com.digitalroute.wfc.tcpipcoll.TCPIPUDR" ) {
};

in_map TCP_InMap :
    external( asciiSEQ_ext ),
    internal( TCP_Int ),
    target_internal( ascii_TCP_TI ) {
    automatic;
};

out_map ascii_TCP_outMap : external( asciiSEQ_ext ),
    internal( ascii_TCP_TI ) {
    automatic;
};

decoder TCPData : in_map( TCP_InMap );

encoder TCPData : out_map( ascii_TCP_outMap );

14.9.4.2.3. The Analysis Agent

The Analysis agent handles both the validation of the incoming records and the sending of the response.
If the duration field is less than or equal to zero, the UDR is discarded, the anum field is inserted
into the response field, and the complete UDR is sent back as the response. All other UDRs are routed to
the next agent in turn, with a sequence number inserted into the response field before any routing.

Note the use of the synchronized keyword. Updating a global variable within a real-time workflow
must be done within a synchronized function to ensure consistency between threads. By default,
a real-time workflow utilizes several threads.

int seqNum;

synchronized int createSeqNum() {
    seqNum = seqNum + 1;
    return seqNum;
}

consume {
    bytearray reply;

    if ( input.duration <= 0 ) {
        strToBA( reply, input.anum );
        input.response = reply;
        udrRoute( input, "response" );
    } else {
        strToBA( reply, (string)createSeqNum() );
        input.response = reply;
        udrRoute( input, "response" );
        udrRoute( input, "UDRs" );
    }
}


15. Appendix VII - Collection Strategies


This appendix describes the different collection strategies available in MediationZone® :

• APL collection strategy

• Control File Collection Strategy

• Duplicate Filter Collection Strategy

• Multi Directory Collection Strategy

15.1. APL Collection Strategy


This section describes the APL Collection Strategy. The APL Collection Strategy is standard functionality
on the DigitalRoute® MediationZone® platform that is applied with the Disk, FTP, SFTP, and
SCP collection agents.

15.1.1. Prerequisites
The reader of this document should be familiar with:

• The MediationZone® Platform

• Analysis Programming Language. For further information, see the APL Reference Guide.

15.1.2. Overview
Collection Strategies are used to set up rules for handling the collection of files from the Disk, FTP, SFTP,
and SCP Collection agents.

The APL Collection Strategy is created on top of one of the pre-defined Collection Strategies, to
customize the way files are collected by using the APL language.

15.1.3. APL Collection Strategy Editor


To open the editor, click the New Configuration button in the upper left part of the MediationZone®
Desktop window, and then select APL Collection Strategy from the menu.

15.1.3.1. APL Collection Strategy Editor Menu


The main menu changes depending on which Configuration type has been opened in the currently
active tab. There is a set of standard menu items that are visible for all Configurations, and these are
described in Section 2.3.2.1, “Desktop Standard Menus”.

The menu items that are specific for APL Collection Strategy Editor are described in the following
sections:

15.1.3.1.1. The File Menu

Item Description
Import... Select this option to import code from an external file. Note that the file has to reside on
the host where the client is running.
Export... Select this option to export your code to an *.apl file that can be edited in other code
editors, or be used by other MediationZone® systems.

15.1.3.1.2. The Edit Menu

Item Description
Validate Compiles the current APL Collection Strategy code, checking for grammatical and
syntactical errors. The status of the compilation is displayed in a dialog. Upon failure,
the erroneous line is highlighted and a message, including the line number, is displayed.
Undo Select this option to undo your last action.

Redo Select this option to redo the last action you "undid" with the Undo option.

Cut Cuts selections to the clipboard buffer.

Copy Copies selections to the clipboard buffer.

Paste Pastes the clipboard contents.

Find... Displays a dialog where chosen text may be searched for and, optionally, replaced.

Find Again Repeats the search for the last string entered in the Find dialog.

15.1.3.2. APL Collection Strategy Editor Buttons


The toolbar changes depending on which Configuration type is currently open in the active tab.
There is a set of standard buttons that are visible for all Configurations, and these buttons are described
in Section 3.1.2, “Configuration Buttons”.

The additional buttons that are specific for APL Collection Strategy Editor are described in the following
sections:

Button Description
Validate Compiles the current APL Collection Strategy code, checking for grammatical and
syntactical errors. The status of the compilation is displayed in a dialog. Upon failure,
the erroneous line is highlighted and a message, including the line number, is displayed.
Undo Select this option to undo your last action.

Redo Select this option to redo the last action you "undid" with the Undo option.

Cut Cuts selections to the clipboard buffer.


Copy Copies selections to the clipboard buffer.

Paste Pastes the clipboard contents.

Find... Displays a dialog where chosen text may be searched for and, optionally, replaced.

Find Again Repeats the search for the last string entered in the Find dialog.

Zoom Out Zoom out the code area by modifying the font size. The default value is 12(pt). Clicking
the button between the Zoom Out and Zoom In buttons will reset the zoom level to
the default value. Changing the view scale does not affect the configuration.
Zoom In Zoom in the code area by modifying the font size. The default value is 12(pt). Clicking
the button between the Zoom Out and Zoom In buttons will reset the zoom level to
the default value. Changing the view scale does not affect the configuration.

15.1.4. Configuration
You create your APL Collection Strategy in the APL Collection Strategy Editor.

15.1.4.1. Create APL Collection Strategy

Figure 526. The APL Collection Strategy Editor

Base Collection From the drop-down list, select a pre-defined collection strategy. The Default
Strategy Collection Strategy is the standard collection strategy that is used by default by the
Disk and FTP agents.

The Base Collection Strategy is the collection strategy that your APL Extension
will be based on.

When saving your new collection strategy, make sure to use a descriptive name
since it will be added to the list of available strategies in the agent's Configuration.

APL Extension The code that you see in the APL Extension coding pad is a default 'skeleton' set of
procedures that are already defined in the Base Collection Strategy. By adding APL
code within these procedures, you customize the way the collection is handled by
the workflow. For further information about the different APL procedures, refer to
the APL Reference Guide.

1. During run-time, when each of the procedures is invoked, the workflow
first runs the procedure's Base part and then it executes your APL Extension
code.

2. The following APL functions cannot be used within an APL Collection
Strategy:

• udrRoute

• mimSet

• mimPublish

• cancelBatch

• hintEndBatch

3. In the following APL functions, you cannot assign a value to a persistent
variable. For information about persistent variables, see the MediationZone®
APL Reference Guide.

• initialize

• deinitialize

• commit

• rollback

15.1.5. The FileInfo UDR Type


The FileInfo UDR type includes the properties of the file to collect or a directory where files are stored.

The FileInfo UDR type can be viewed in the UDR Internal Format Browser. To open the browser,
right-click in the editing area of an APL Editor and select UDR Assistance....

15.1.5.1. Format
The following fields are included in the FileInfo UDR:


Field Description
isDirectory(boolean) Set to True if FileInfo represents a directory.
isFile(boolean) Set to True if FileInfo represents a file.
name(string) The name of the file or directory.
size(long) The size of the file or directory.
timestamp(long) The timestamp for when the file or directory was last modified.
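
As a brief illustration, the fields above can be read like ordinary UDR fields in APL. The sketch below
is illustrative only and assumes a FileInfo value stored in a variable named fi; the variable name is
an assumption made for this example.

if ( fi.isFile && fi.size > 0 ) {
    // Log the candidate file's name and size using the fields listed above.
    debug( "Candidate file: " + fi.name + ", size: " + (string)fi.size );
}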

15.1.6. APL Functions


To customize the way of collecting files, new code is added in the APL Collection Strategy Editor
as an extension to a Base Collection Strategy.

The following functions are available for APL Collection Strategy:

• initialize

• deinitialize

• prepareBaseDirList

• accept

• filterFiles

• preFileCollection

• postFileCollection

• begin

• commit

• rollback

For further information about APL functions, see the APL Reference Guide.
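
As an illustration only, the sketch below outlines how an extension might count and log the files that
are offered for collection. The block form mirrors the skeleton that the editor generates for the chosen
Base Collection Strategy, but the exact procedure signatures and the variables available inside them
(such as the fileInfo variable used here) are assumptions made for this example; always start from the
generated skeleton and the APL Reference Guide.

int fileCount;

initialize {
    // Runs after the Base part of initialize. fileCount is an ordinary global
    // variable, not a persistent one, so assigning it here is allowed.
    fileCount = 0;
}

preFileCollection {
    // Assumed: the file about to be collected is exposed as a FileInfo
    // variable (here called fileInfo) by the generated skeleton.
    fileCount = fileCount + 1;
    debug( "Collecting " + fileInfo.name + " (file number " + (string)fileCount + ")" );
}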

15.2. Control File Collection Strategy


This section includes a description of the complementary Control File Collection Strategy that is applied
in MediationZone® for the Disk, FTP, SFTP, and SCP Collection agents. The Control File Collection
Strategy is a standard functionality in the DigitalRoute® MediationZone® Platform.

15.2.1. Overview
The collection strategy makes it possible to collect files for which a corresponding control file exists.
If the control file does not exist, the file is ignored.

The Control File Collection Strategy controls which further configuration options are available
in the Source tab. If no strategy is selected, the default strategy is used.


Figure 527. Collection Strategy - Control File Tab

The Collection Strategy drop-down list will only be visible if there are other collection strategies
available in the system, apart from the default collection strategy.

Collection Select the Control File option in this list.
Strategy
Directory Enter the absolute path name of the source directory on the remote host, where the
source files reside. The path name may also be entered relative to the home directory
of the User Name account.
Filename Enter the name of the source files on the remote host.

Regular expressions according to Java syntax can be used.

For further information, see:

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Example 146.

To match all file names beginning with TTFILE, enter: TTFILE.*

Compression Select compression type for the source files. This selection determines if the agent
will decompress the files before passing them on in the workflow.

• No Compression - the agent will not decompress the files.

• Gzip - the agent will decompress the files using gzip.


Position The control filename consists of an extension added either before or after the shared
filename part. Select one of the choices: Prefix or Suffix.

Prefix means that the text entered in the Control File Extension field will be
searched for before the shared filename part, and Suffix means that the text entered
in the Control File Extension field will be searched for after the shared filename
part.
Control File The Control File Extension is used to define when the data file should be collected.
Extension A data file with filename FILE will only be collected if the corresponding control
file exists. A possible control filename can be FILE.ok.

The text entered in this field is the expected extension of the shared filename. The
Control File Extension will be attached at the beginning or the end of the shared
filename, depending on the selection made in the Position list, above.
Data File The Data File Extension is only applicable if Position is set to Suffix.
Extension
There can be cases where a stricter definition of which files should be collected
is needed. This is defined in the Data File Extension field.

Consider a data file called FILE.dat. If .dat is entered in the Data File Extension
field, the corresponding control file will be called FILE.ok if .ok is entered
in the Control File Extension field.

Consider a directory containing 5 files:

• FILE1.dat

• FILE2.dat

• FILE1.ok

• ok.FILE1

• FILE1

1. The Position field is set to Prefix and the Control File Extension field
is set to .ok.

The control file is ok.FILE1 and FILE1 will be the file collected.

2. The Position field is set to Suffix and the Control File Extension field
is set to .ok.

The control file is FILE1.ok and FILE1 will be the file collected.

3. The Position field is set to Suffix and the Control File Extension field
is set to .ok and the Data File Extension field is set to .dat.

The control file is FILE1.ok and FILE1.dat will be the file collected.

After collection, the control file is handled in the same way as the collected
file is configured to be handled, that is, the system will delete/rename/move/ignore
it accordingly.

Move to If this option is selected, the source files will be moved to the automatically created
Temporary subdirectory DR_TMP_DIR in the source directory, before collection. This option
Directory supports safe collection when source files repeatedly use the same name.


Inactive Source If this option is selected, a warning message (event) will appear in the System Log
Warning (h) and Event Area when the configured number of hours has passed without any file
being available for collection:

The source has been idle for more than <n> hours,
the last inserted file is <file>.

Move to If this option is selected, the source files will be moved from the source directory
(or from the directory DR_TMP_DIR if using Move to Temporary Directory), to
the directory specified in the Destination field, after collection.

The Destination must be located in the same file system as the collected files
at the remote host. Additionally, absolute path names must be defined (relative
path names cannot be used).

Rename If this option is selected, the source files will be renamed after the collection and will
remain in the source directory from which they were collected (or be moved back to it
from the directory DR_TMP_DIR, if using Move Before Collecting).
Remove If this option is selected, the source files will be removed from the source directory
(or from the directory DR_TMP_DIR, if using Move Before Collecting), after the
collection.
Ignore If this option is selected, the source files will remain in the source directory after
the collection. This field is not available if Move Before Collecting is enabled.
Destination If the Move to option has been selected, enter the full path name of the directory
on the remote host into which the source files will be moved after the collection in
this field. If any of the other After Collection options have been selected, this option
will not be available.
Prefix and If any of the Move to or Rename options have been selected, enter the prefix and/or
Suffix suffix that will be appended to the beginning and/or end of the name of the source
files, respectively, after the collection, in these fields. If any of the other After
Collection options have been selected, this option will not be available.

If Rename is enabled, the source files will be renamed in the current (source
or DR_TMP_DIR) directory. Be sure not to assign a Prefix or Suffix that gives
files new names still matching the Filename regular expression, as that will
cause the files to be collected over and over again.

Keep (days) If any of the Move to or Rename options have been selected, enter the number of
days to keep moved or renamed source files on the remote host after the collection
in this field. In order to delete the source files, the workflow has to be executed
(scheduled or manually) again, after the configured number of days. If any of the
other After Collection options have been selected, this option will not be available.

A date tag is added to the filename, determining when the file may be removed.


15.3. Duplicate Filter Collection Strategy


This section includes a description of the Duplicate Filter Collection Strategy that is applied in
MediationZone® with the Disk Advanced, FTP, SFTP, and SCP agents.

15.3.1. Overview
The Duplicate Filter Collection Strategy enables you to configure a collection agent to collect files
from a directory without the same files being collected again.

15.3.2. Configuration
You configure the Duplicate Filter Collection Strategy from the Source tab in the agent configuration
view.

15.3.2.1. To Configure the Duplicate Filter Collection Strategy:


In the MediationZone® workflow editor, either double-click the agent icon, or right-click it and select
Configuration; the Configuration view opens.

Figure 528. The Duplicate Filter Configuration View

Collection Strategy From the drop-down list select Duplicate Filter.


Directory Absolute pathname of the source directory on the remote host, where the source
files reside. The pathname might also be given relative to the home directory
of the User Name account.
Filename Name of the source files on the remote host.

Regular expressions according to Java syntax apply. For further information,
see:

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html

Example 147.

To match all file names beginning with TTFILE, type: TTFILE.*


Compression Compression type of the source files. Determines if the agent will decompress
the files before passing them on in the workflow.

• No Compression - agent does not decompress the files.

• Gzip - agent decompresses the files using gzip.

Duplicate Criteria - Select this option to have only the filename compared for the duplicate check.
Filename If the filename is in the list of files which have already been collected once,
the file is ignored by the agent.
Duplicate Criteria - Select this option to have both the filename and the time stamp of the last
Filename and modification compared when checking for duplicates. If the file has already
Timestamp been collected once, it is collected again only if the duplicate check reveals that
the file has been updated since the previous collection.

Files that have the same name and are older than the last collected file
by the same name are ignored. Only files whose time stamp is more recent
are collected.

File List Size Enter a value to specify the maximum size of the list of already collected files.
This list of files is compared to the input files in order to detect duplicates and
prevent them from being collected by the agent.

When this collection strategy is used with a multiple server connection strategy,
each host has its own duplicate list. If a server is removed from the multiple
server configuration, the collection strategy will automatically drop the list of
duplicates for that host in the next successful collection.

If the number of files to be collected is greater than the file list size, files
older than the oldest file in the list are not collected.

15.4. Multi Directory Collection Strategy


This section includes a description of the Multi Directory Collection Strategy that is applied in
MediationZone® with the Disk, FTP, SFTP, and SCP agents.

15.4.1. Overview
The Multi Directory Collection Strategy enables you to configure a collection agent to collect data
from a series of directories that are listed in a control file. The collection agent reads the control file
and collects from the specified directories.

15.4.2. Configuration
You configure the Multi Directory Collection Strategy from the first tab in the agent configuration
view.

15.4.2.1. To Configure the Multi Directory Collection Strategy:


In the MediationZone® workflow editor, either double-click the agent icon, or right-click it and select
Configuration; the Configuration view opens.


Figure 529. The Collection Agent Configuration View

Collection From the drop-down list select Multi Directory.


Strategy
Control file Enter the path and the name of the control TXT file.
Name
If the control file is missing, empty, or not readable, the workflow aborts.

Example 148. A Control File

controlfile.txt:
directory1
directory1/subdir1
directory1/subdir2
directory2
/home/user/directory3
...

Example 149. A Control File for VMS

Note that relative paths are not supported for VMS!

controlfile_vms.txt:
DISK$USERS:[USERS.USER1.TESTDIR1]
DISK$USERS:[USERS.USER1.TESTDIR2]
DISK$USERS:[USERS.USER1.TESTDIR2.SUBDIR1]
DISK$USERS:[USERS.USER1.TESTDIR3]
DISK$USERS:[USERS.USER1.TESTDIR4]
...

Filename The regular expression of the names of the source files on the local file system.
Regular expressions according to Java syntax apply. For further information, see:

http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html


Example 150.

To match all filenames beginning with TTFILES, type: TTFILES.*.

If you leave Filename empty, or if you specify .*, the agent collects all the
files.

Abort on Missing Check to abort the workflow if a directory that is specified in the control file list
Directory is missing on the server. Otherwise, the workflow continues to execute (default).
Enable Duplicate Check to prevent collection of the same file more than once.
Filter
Files are considered to be duplicates if the absolute filename is the same.

This check box is checked by default.

The workflow holds an internal data structure with information about which files
the collector has collected in previous executions. The data structure is purged by
the collection strategy based on the contents of the collection directories. If files
collected in the past are no longer found in the collection directory they are removed
from the data structure.

The internal data structure is stored in the workflow state. Since the workflow
state is only updated when files are collected, the purged internal data structure
will be stored the next time a successful file collection is performed.

It is possible to manually purge the internal duplicate data structure if needed. To
do this, disable the duplicate filter and run the workflow. The next time the duplicate
filter is enabled, the internal data structure will be empty.
Enable Debug Check to enable generation of error or debug messages.

If you choose to enable messaging, make sure to enable debug on the
workflow monitor as well. For further information, see Section 4.1.11,
“Workflow Monitor”.

Since debugging has a negative impact on performance, the debug option
should never be enabled in a production environment.


16. Appendix IX - Error Correction System

16.1. Error Correction System


16.1.1. Introduction
This section describes the Error Correction System (ECS), part of the standard MediationZone®
Platform. The ECS is used when UDRs fail validation and manual intervention is needed before they
can be successfully processed. Batches are sent to ECS using an APL command, while sending UDRs
requires the ECS Forwarding agent. To collect data from the ECS, the ECS Collection agent is used.

Using the ECS Inspector, UDRs may be examined, deleted or updated.

16.1.1.1. Prerequisites
The reader of this information should be familiar with:

• The MediationZone® Platform

16.1.2. ECS Forwarding Agent


In order to send UDRs to ECS, a workflow must contain an ECS Forwarding agent and the invalid
UDRs have to be routed to it. It is recommended to use a preceding Analysis (or Aggregation) agent
to associate an Error Code with the UDR.

The ECS Forwarding agent is applicable for UDRs only. Batches are handled through the
cancelBatch functionality.

Note! From the ECS Forwarding agent, it is possible to pass on MIM values to be associated
with the UDRs in the ECS Inspector.

16.1.2.1. Configuration
The ECS Forwarding agent configuration window is displayed when right-clicking on an ECS
Forwarding agent and selecting Configuration... or when double-clicking on the agent.

Figure 530. ECS Forwarding agent configuration window

16.1.2.1.1. ECS Tab

Logged MIMs MIM values to be associated with a UDR when sent to ECS.


16.1.2.1.2. Thread Buffer Tab

See the description in Section 4.1.6.2.1, “Thread Buffer Tab”.

16.1.2.2. Transaction Behavior


The agent does not emit or retrieve any commands.

16.1.2.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent consumes selected UDR types.

16.1.2.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.

16.1.2.4.1. Publishes

MIM Value Description


Agent Name The name of the agent.
Inbound UDRs The number of UDRs routed to the agent.

16.1.2.4.2. Accesses

The agent does not itself access any MIM resources.

16.1.2.5. Agent Message Events


There are no agent message events for this agent.

16.1.2.6. Debug Events


There are no debug events for this agent.

16.1.3. ECS Collection Agent


The Error Correction System (ECS) Collection agent fetches data sent to the internal MediationZone®
ECS by workflows configured to do so. Data is sent to the ECS as UDRs or batches. In the latter case,
an error UDR may be associated with a batch containing the relevant information. When collected by
an ECS Collection agent, the fields in this UDR will be included as MIM values in the workflow.

Figure 531. A typical workflow collecting UDRs from ECS.

If batches are collected, the ECS Collection agent produces bytearray data. Which UDRs/batches
to collect is determined by selecting a reprocessing group that has been defined in the ECS Inspector.

It is only possible to have one active ECS collection workflow per reprocessing group at a time.


Note! Collecting UDRs from the ECS does not mean that they are physically removed from
ECS, only that their state is changed. Automatic UDR removal can be managed by the predefined
task ECS_Maintenance. For further information, see Section 16.1.4, “ECS_Maintenance System
Task”. Manual deletion directly from the ECS Inspector is also possible.

16.1.3.1. Configuration
The ECS Collection agent configuration window is displayed either when right-clicking on an ECS
Collection agent in a workflow and selecting Configuration... or when double-clicking on the agent. You
can select to either collect data from a Reprocessing Group that has been defined in the ECS Inspector
or from a filter that has been saved in the ECS Inspector. The available settings in the configuration
dialog depend on which option you choose.

Note! The default directory, used for storage of UDRs and batches routed to the ECS, is
$MZ_HOME/ecs.

Figure 532. ECS Collection Configuration Dialog

Reprocessing Select this option if you want to collect data from a Reprocessing Group. In order
Group for data to be collectible, it must belong to a predefined reprocessing group. The
groups in the Reprocessing Group list are suffixed with their type - batch or
UDR.

Reprocessing groups are defined in the ECS Inspector. Collected UDRs/batches


will automatically be marked as Reprocessed. Reprocessed data can be collected
again, if the state is manually changed back to New in the ECS Inspector.
Saved Filter (Read Select this option if you want to collect data that has been defined by a filter
Only) saved in the ECS Search dialog. This option can only be used for UDRs. In order
for UDRs to be collectible, they must match the search criteria in the filter selected
in the Saved Filter list. Filters are created and saved in the ECS Inspector.
Collection Size This field is enabled when a UDR reprocessing group has been selected. The
value defines how many UDRs will be collected before the ECS collection agent
finishes the current batch and starts a new one. Valid range is 1,000 to 100,000
UDRs. A higher value will require more memory and can have an impact on
performance.


SQL Bulk Size To improve performance, data records are retrieved in bulk. The SQL Bulk Size
value specifies how many records will be included in each bulk. Valid range
is 1 to 1000, with a default value of 20.
Routed Types A group of UDRs can consist of several format types. Selecting Add... will display
the UDR Internal Format Browser. Select the type/types to collect. Any UDRs
in the reprocessing group not matching the selected types will be ignored.

16.1.3.2. Transaction Behavior


This section includes information about the ECS Collection agent's transaction behavior. For information
about the general MediationZone® transaction behavior, see Section 4.1.11.8, “Transactions”.

16.1.3.2.1. Emits

The agent emits commands that change the state of the file currently processed.

Command Description
Begin Batch • UDRs - Emitted prior to the routing of the first UDR in the batch created by the
UDRs matching the collection definitions.

• Batches - Emitted prior to the routing of a batch.

End Batch • UDRs - Emitted when all UDRs have been collected or when a Hint End Batch
request is received. The UDRs are then marked as Reprocessed in ECS.

• Batches - Emitted after each batch has been processed. The batch is then marked as
Reprocessed.

16.1.3.2.2. Retrieves

The agent retrieves commands from other agents and, based on them, generates a state change of the
file currently processed.

Command Description
Cancel Batch No Cancel Batches are retrieved.

Note! If any agent in the workflow emits a cancelBatch, the workflow will
abort immediately (regardless of the workflow configuration).

16.1.3.3. Introspection
The introspection is the type of data an agent expects and delivers.

The agent produces data depending on the data type of the reprocessing group.

UDRs The selected types under Routed Types. If no types are selected, the generic drudr type will
be produced.
Batch Produces bytearray types.

16.1.3.4. Meta Information Model


For information about the MediationZone® MIM and a list of the general MIM parameters, see
Section 2.2.10, “Meta Information Model”.


16.1.3.4.1. Publishes - UDR

MIM Parameter Description


Search Filter Used This MIM parameter contains the name of the selected filter when you have chosen
to collect data from a saved filter.

16.1.3.4.2. Publishes - Batch

MIM Parameter Description


Originating Workflow This MIM parameter contains the name of the workflow that canceled the
batch.
Originating Agent This MIM parameter contains the name of the agent that canceled the batch.
Database ID This MIM parameter contains the ID of the batch as assigned in the ECS
Inspector. See the Db ID column.
Source File Size This MIM parameter contains the original size of the canceled batch.
<error UDR fields> This MIM parameter contains all field values in the error UDR associated
with the reprocessing group.

16.1.3.4.3. Accesses

The agent does not itself access any MIM resources.

16.1.3.5. Agent Message Events


An information message from the agent, stated according to the configuration done in the Event
Notification Editor.

For further information about the agent message event type, see Section 5.5.14, “Agent Event”.

• Ready with file: <filename>

This event is logged for batch collection only, and is reported at end batch stating the name of the
file currently processed.

• Ready with recovered file

Reported when a file is recovered in case of a crash.

16.1.3.6. Debug Events


Debug messages are dispatched in debug mode. During execution, the messages are displayed in the
Workflow Monitor.

You can configure Event Notifications that are triggered when a debug message is dispatched. For
further information about the debug event type, see Section 5.5.22, “Debug Event”.

The agent produces the following debug events:

• Start collecting

This event is logged for UDR collection only, and is reported when the collection from ECS starts.

• Commit started

This event is logged for UDR collection only, and is reported at end batch when starting to commit
changes to the database.

• Commit done, consumed <count> UDRs


This event is logged for UDR collection only, and is reported at end batch upon a successful commit.

16.1.4. ECS_Maintenance System Task


The ECS_Maintenance system task removes outdated ECS data, provided that the state is Reprocessed.

The number of days to keep data is set in the ECS_Maintenance configuration dialog. It is also possible
to fully turn off the cleanup of UDRs, Batches, Statistics or all of them. The Statistics can be reported
by email, if so configured in the Report tab of the task.

When the ECS_Maintenance System Task is executed, a number of things will happen:

• UDRs, batches and ECS statistics will be removed from the ECS according to the configurations
in the Cleanup tab. See Section 16.1.4.1.1, “Cleanup Tab” for further information.

If the number of days after which data should be removed has been configured to 0 (zero) days, data
will be removed every time the ECS_Maintenance System Task is executed, with a minimum time
interval of one hour.

• An ECS Statistics Event will be generated containing information about the number of UDRs
associated with every error code.

This will happen every time the ECS_Maintenance System Task is executed, according to the exact
time interval with which the ECS_Maintenance task is configured.

See Section 5.5.21, “ECS Statistics Event” for further information about how to configure notifications
for the ECS Statistics Event.

• Statistical information will be sent to the ECS Statistics, according to your configurations in the
Report tab in the configuration dialog for the ECS_Maintenance system task. See Section 16.1.4.1.2,
“Report Tab” for further information.

The statistical information will be sent every time the ECS_Maintenance system task is executed,
with a minimum time interval of one hour.

• An email containing statistical information will be sent to the mail recipient stated in the Report
tab in the configuration dialog for the ECS_Maintenance system task. See Section 16.1.4.1.2, “Report
Tab” for further information.

The email will be sent every time the ECS_Maintenance system task is executed, with a minimum
time interval of one hour.

Note! The ECS is designed to store a fairly limited amount of erroneous UDRs and batches. It
is therefore important that the data is extracted, reprocessed or deleted from ECS on a regular
basis.

16.1.4.1. Configuration
To open the ECS_Maintenance system task configuration:

1. Click the Show Configuration Browser button in the upper left part of MediationZone® Desktop
to show the Configuration Browser pane.

2. In the SystemTask folder, double-click the ECS_Maintenance workflow.

A workflow containing the ECS_Maintenance agent is opened. Double click on the agent to open the
configuration.


16.1.4.1.1. Cleanup Tab

Figure 533. ECS_Maintenance configuration dialog - Cleanup tab

UDRs If this check box is selected, UDRs will be deleted from ECS when they are older than
the number of days stated (maximum 999 days). If disabled, the UDRs will remain until
manually cleaned out via the ECS Inspector. If 0 (zero) is entered, all UDRs with state
Reprocessed will be removed whenever the cleanup task is performed, with a minimum
time interval of one hour.

Default setting is 7 days.


Batches If this check box is selected, batches will be deleted from ECS when they are older than
the number of days stated (maximum 999 days). If disabled, the batches will remain until
manually cleaned out via the ECS Inspector. If 0 (zero) is entered, all batches with state
Reprocessed will be removed whenever the cleanup task is performed, with a minimum
time interval of one hour.

Default setting is 7 days.


Statistics If this check box is selected, statistical data will be deleted from ECS when it is older
than the number of days stated (1 - 999 days). If disabled the statistics will remain in the
system. There is no other way through the GUI to remove the ECS statistics.

Default setting is 21 days.

16.1.4.1.2. Report Tab

Figure 534. ECS_Maintenance configuration dialog - Report tab

UDR Statistics Create UDR Statistics report data.


Batch Statistics Create Batch Statistics report data.
Email The email address to the receiver of the report. For information about email ad-
dresses, see Section 7.2, “Access Controller”.

16.1.5. ECS Inspection


For information about the ECS Inspector, see Section 6.7, “ECS Inspection”.

16.1.6. ECS Statistics


For information about the ECS Inspector, see Section 6.8, “ECS Statistics”.


16.1.7. Example - ECS handling of UDRs


This example will show the necessary configurations for a workflow sending UDRs to ECS, and for
a workflow configured to collect this data. The UDRs sent to ECS will have an Error Case, that is, a
string associated with the Error Code.

Typically, a UDR may be sent to ECS as a result of a failing table lookup evaluation. To make sure
that the error was not temporary and that the tables simply needed to be updated first, these UDRs are
recycled. A new workflow is created, collecting them from ECS and reevaluating the same table
lookup. If the problem still exists, the UDR is sent back to ECS.

16.1.7.1. ECS Forwarding Workflow


In order to send a UDR to ECS, a workflow must contain an ECS Forwarding agent. To perform a
table lookup for all UDRs, an Analysis agent is used. If the lookup succeeds, the UDR is sent on the
OK route to be saved on disk, while the failing UDRs are sent to the ECS Forwarding agent.

Figure 535. A workflow sending UDRs to ECS

UDRs may be sent to ECS without any Error Code or MIM values associated with them. However, this
will make browsing in the ECS Inspector more difficult, and no auto-assignment to reprocessing groups
using the Error Code is possible.

16.1.7.1.1. ECS Inspector

Error Codes can be associated with reprocessing groups via the ECS Inspector window (accessed
from the Edit menu, selecting Reprocessing Groups...). Then all UDRs with an Error Code will be
automatically assigned to the respective reprocessing group. Otherwise, the UDRs will have to be
assigned manually in order to be available for collection.

Figure 536. Add ECS Error Code window - where a reprocessing group can be selected.

16.1.7.1.2. Analysis Agent

The Analysis agent is used for validation and routing of the UDRs, and association to a valid (existing)
Error Code. The following example appends an Error Code and an Error Case to the UDR prior to
sending it on to the ECS Forwarding agent.


Example 151.

udrAddError( input, "AreaCode_ERROR",
    "Complete anumber: " + input.anum );
udrRoute( input, "error" );

16.1.7.1.3. ECS Forwarding Agent

In the ECS Forwarding agent, the MIM values you want to associate with the UDR are appended. This
is optional; however, it makes it easier to search for data and get additional information about the UDR
from the ECS Inspector.

Figure 537. The ECS Forwarding agent

16.1.7.2. ECS Collection Workflow


In the collection workflow the same evaluation is tried again. If it fails, the UDR is sent back to ECS
with the same configuration.

The prerequisites for being able to collect ECS data are that the UDRs or batches must each belong to
an existing reprocessing group, and have the reprocessing state set to New.

Figure 538. A workflow collecting and validating ECS data.

Since we want to redo the processing made in the forwarding workflow, we keep the configurations
of the ECS Inspector and ECS Forwarding agents the same as in the previous workflow.

16.1.7.2.1. Workflow Properties

The Error tab in the Workflow Properties must not be configured to handle cancelBatch behavior,
since it will never be valid for ECS collection workflows. No calls to cancelBatch are allowed
from any agent, since such a call will cause the workflow to abort immediately.


16.1.7.2.2. ECS Collection Agent

All UDRs conforming to the collection criteria will be selected and processed as a batch.

16.1.7.2.3. Analysis Agent

The Analysis agent only needs to validate and route the UDRs. The Error Code and Error Case are
already associated with the UDR.

Example 152.

udrRoute( input, "error" );

Example 153. Reassigning to a Different Reprocessing Group

Suppose there is a workflow collecting and validating UDRs from ECS. If the validation fails,
the UDRs will be sent back to ECS with an associated Error Code. UDRs assigned to a new or
a different Error Code will be directed to a new reprocessing group. If you want to associate these
UDRs with a different reprocessing group, udrClearErrors must be called prior to
udrAddError (a minimal code sketch follows the two cases below).

The exception is if the new Error Code is associated with the same reprocessing group.

Case 1 - same reprocessing group

If the new Error Code belongs to the same reprocessing group:

• Using udrClearErrors will result in a new Error Code and reprocessing group being
associated with the UDR in ECS. It also avoids several Error Codes pointing at different
reprocessing groups, which would make automatic group assignment impossible.

• Leaving out udrClearErrors will result in old as well as new Error Codes (including the
reprocessing group) being associated with the UDR in ECS.

Case 2 - different reprocessing group

If the new Error Code belongs to a different reprocessing group:

• Using udrClearErrors will result in a new Error Code and reprocessing group being
associated with the UDR in ECS.

• Leaving out udrClearErrors will not result in any association to a reprocessing group,
however both Error Codes are associated with the UDR in ECS.
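
The following minimal sketch shows the call order discussed above. The Error Code name and the
route name are illustrative assumptions; udrClearErrors, udrAddError, and udrRoute are the same
APL functions used elsewhere in this example.

// Reassign a failed UDR to the reprocessing group tied to a new Error Code.
// "RECYCLE_ERROR" and the "error" route are example names only.
udrClearErrors( input );
udrAddError( input, "RECYCLE_ERROR", "Still failing after reprocessing." );
udrRoute( input, "error" );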

Note! All UDRs collected at one activation of the workflow will be processed as one batch.

Any call to cancelBatch will cause the workflow to abort immediately.

16.1.8. Example - ECS handling of batches


The ECS Forwarding agents are not used for batches. Instead, a batch is sent to ECS directly from a
collection agent when receiving a cancelBatch. There is also a possibility to associate an error
UDR with a batch. This UDR can in turn be assigned error information with udrAddError.


16.1.8.1. ECS Forwarding Workflow


The forwarding workflow contains an Analysis agent which validates batches and sends them to ECS
in case of failure.

Figure 539. The Analysis agent can call the cancelBatch function.

16.1.8.1.1. Workflow Properties

MIM values to be associated with the batch are mapped in the Workflow Properties window. Also,
the number of allowed cancelled batches is set here. Note that if Abort Immediately is enabled, no
batch will be sent to ECS when the workflow aborts.

Figure 540. Workflow Properties - Error tab.

The error UDR is handled from the Analysis agent. For further information, see Section 16.1.7.2.3,
“Analysis Agent”. APL code always overrides any Desktop settings. Hence, the Error Code set in the
Workflow Properties will have no effect.

16.1.8.1.2. ECS Inspection

Automatic assignment to reprocessing groups is done exactly the same way as for UDRs (see
Section 16.1.7.1.1, “ECS Inspector”), via the ECS Inspector window (accessed from the Edit menu, selecting
Reprocessing Groups...). Make sure to select the appropriate Error UDR Type. Then the UDR fields
will be included as MIMs in the collection workflow.


Figure 541. ECS Error Codes - where a reprocessing groups can be selected.

16.1.8.1.3. Analysis Agent

The Error UDR may be mapped from the Workflow Properties window as well. However, in this case
APL code must be used, since values other than MIM values are to be inserted into the error UDR
fields. Also, an Error Case will be assigned, and this is not possible from the Workflow Properties
window. For further information, see Section 6.7.7, “Error Codes”, and Section 6.7.8, “Reprocessing
Groups”.

Example 154.

E.myErrorUDR eUDR = udrCreate( E.myErrorUDR );
eUDR.FileSize = (long)mimGet( "IN", "Source File Size" );
eUDR.TS = (date)mimGet( "IN", "File Modified Timestamp" );
eUDR.message = "PROCESSED ONCE.";
udrAddError( eUDR, "switch_ERROR", "Switch not found." );
cancelBatch( "Incorrect source.", eUDR );

Note! Sending error UDRs with the batch is optional. However, it is necessary if access to any
application-specific information is wanted when reprocessing the batch. Error UDR fields will
appear as MIM values in the reprocessing workflow. Also, the only way to associate an
Error Code with the batch is by appending an Error UDR.

16.1.8.2. ECS Collection Workflow


The collection workflow needs a Decoder agent, since batches are saved in their original format when
sent to ECS.

Figure 542. A workflow collecting batches from ECS.

16.1.8.2.1. Workflow Properties

The Error tab may be configured to handle cancelBatch behavior; however, it will never be valid
for ECS batch collection workflows. Any call to cancelBatch will cause the workflow to abort
immediately.

16.1.8.2.2. ECS Collection Agent

All batches conforming to the collection criteria will be selected. If a batch contains historic UDRs,
that is, UDRs belonging to old format definitions that are no longer used, they will by default be converted
automatically to the latest format. If this behavior is not desired, the automatic conversion may be disabled
from the Ultra Format Converter. In this case the workflow will abort, logging an informative message
in the System Log.


16.1.8.2.3. Analysis Agent

Calls to cancelBatch shall not be made in APL, because they will cause the workflow to abort
immediately and nothing will be sent to ECS.
