
DHIS2 Developer Manual

2.17
© 2006-2015
DHIS2 Documentation Team

Revision 1431
Version 2.17 2015-02-28 11:51:41

Warranty: THIS DOCUMENT IS PROVIDED BY THE AUTHORS "AS IS" AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS MANUAL AND PRODUCTS MENTIONED HEREIN,
EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

License: Permission is granted to copy, distribute and/or modify this document under the terms of the
GNU Free Documentation License, Version 1.3 or any later version published by the Free Software
Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy
of the license is included in the source of this documentation, and is available here online:
http://www.gnu.org/licenses/fdl.html.

Contents

DHIS 2 Technical Architecture
1. Overview
2. Technical Requirements
3. Project Structure
4. Project Dependencies
5. The Data Model
6. The Persistence Layer
7. The Business Layer
7.1. The JDBC Service Project
7.2. The Import-Export Project
7.3. The Data Mart Project
7.4. The Reporting Project
7.4.1. Report table
7.4.2. Chart
7.4.3. Data set completeness
7.4.4. Document
7.4.5. Pivot table
7.4.6. The External Project
7.5. The System Support Project
7.5.1. DeletionManager
8. The Presentation Layer
8.1. The Portal
8.1.1. Module Assembly
8.1.2. Portal Module Requirements
8.1.3. Common Look-And-Feel
8.1.4. Main Menu
9. Framework Stack
9.1. Application Frameworks
9.2. Development Frameworks
10. Definitions

1. Web API
1.1. Introduction
1.2. Authentication
1.3. Date and period format
1.4. Browsing the Web API
1.4.1. Translation
1.5. Working with the meta-data API
1.5.1. Content types
1.5.2. Query parameters
1.5.3. Available strategies for import
1.5.4. Examples
1.6. Meta-data filtering
1.7. Meta-data field filter
1.7.1. Field transformers
1.7.2. Field converters
1.8. Meta-data create, read, update, delete, validate
1.8.1. Creating and updating objects
1.8.2. Deleting objects
1.8.3. Adding and removing objects to/from collections
1.8.4. Validating payloads
1.8.5. Partial updates
1.9. CSV meta-data import
1.10. Data values
1.10.1. Sending data values
1.10.2. Sending bulks of data values
1.10.2.1. Identifier schemes
1.10.3. CSV data value format
1.10.4. Generating data value set template
1.10.5. Sending, reading and deleting individual data values
1.10.6. Reading data values
1.10.7. Reading large bulks of data values
1.11. Events
1.11.1. Sending events
1.11.2. CSV Import / Export
1.11.3. Querying and reading events
1.11.3.1. Examples
1.12. Forms
1.13. Validation
1.14. Indicators
1.15. Complete data set registrations
1.15.1. Completing and un-completing data sets
1.15.2. Sending bulks of complete data set registrations
1.15.3. Reading complete data set registrations
1.16. Data approval
1.17. Messages
1.17.1. Writing and reading messages
1.17.2. Managing messages
1.18. Interpretations
1.18.1. Reading interpretations
1.18.2. Writing interpretations
1.18.3. Creating, updating and removing interpretation comments
1.19. Viewing analytical resource representations
1.20. Plugins
1.20.1. Embedding pivot tables with the Pivot Table plug-in
1.20.2. Embedding charts with the Visualizer chart plug-in
1.20.3. Embedding maps with the GIS map plug-in
1.20.4. Creating a chart carousel with the carousel plug-in
1.21. SQL views
1.21.1. Criteria
1.21.2. Variables
1.22. Dashboard
1.22.1. Browsing dashboards
1.22.2. Searching dashboards
1.22.3. Creating, updating and removing dashboards
1.22.4. Adding, moving and removing dashboard items and content
1.23. Analytics
1.23.1. Request query parameters
1.23.2. Response formats
1.23.3. Constraints
1.24. Event analytics
1.24.1. Request query parameters
1.24.2. Event query analytics
1.24.2.1. Filtering
1.24.2.2. Ranges / legend sets
1.24.2.3. Response formats
1.24.3. Event aggregate analytics
1.24.3.1. Response formats
1.25. Geo features
1.25.1. GeoJSON
1.26. Generating resource, analytics and data mart tables
1.27. Maintenance
1.28. System resource
1.28.1. Generate identifiers
1.28.2. View system information
1.28.3. Check if username and password combination is correct
1.29. Users
1.29.1. User query
1.29.2. User account invitations
1.29.3. User replication
1.30. Current user information and associations
1.31. System settings
1.32. User settings
1.33. Configuration
1.34. Translations
1.35. SVG conversion
1.36. Tracked entity management
1.37. Tracked entity instance management
1.37.1. Creating a new tracked entity instance
1.37.2. Updating a tracked entity instance
1.37.3. Deleting a tracked entity instance
1.37.4. Enrolling a tracked entity instance into a program
1.37.5. Update strategies
1.38. Tracked entity instance query
1.38.1. Request syntax
1.38.2. Response format
1.39. Email
1.39.1. System notification
1.39.2. Test message
1.40. Sharing
1.41. Scheduling
1.42. Schema Resource
1.43. UI Customization
1.44. FRED API

2. Apps in DHIS2
2.1. Purpose of Packaged Apps
2.2. Creating Apps
2.3. Configuring DHIS2 for Apps Installation
2.4. Installing Apps into DHIS 2
2.5. Launching Apps
2.6. Web-API for Apps
2.7. Adding the DHIS 2 menu to your app

3. Setting up report functionality
3.1. Data sources for reporting
3.1.1. Types of data and aggregation
3.1.1.1. Terminology
3.1.1.2. Basic rules of aggregation
3.1.1.3. Dimensions of aggregation
3.1.1.4. Aggregation operators, methods for aggregation
3.1.1.5. Advanced aggregation settings (aggregation levels)
3.1.2. Data mart
3.1.2.1. The data mart export process
3.1.3. Resource tables
3.1.4. Report tables
3.2. How to create report tables
3.2.1. General options
3.2.2. Selecting data
3.2.3. Selecting report parameters
3.2.4. Data element dimension tables
3.2.5. Report table - best practices
3.3. Report table outcome
3.4. Standard reports
3.4.1. What is a standard report?
3.4.2. Designing Standard reports in iReport
3.4.2.1. Download and open the design file
3.4.2.2. Editing the report
3.4.2.3. Text
3.4.2.4. Filtering the table rows
3.4.2.5. Sorting
3.4.2.6. Changing indicator/data element names
3.4.2.7. Adding horizontal totals
3.4.2.8. Groups of tables
3.4.2.9. Charts
3.4.2.10. Adding the Report to DHIS 2
3.4.2.11. Some final guidelines
3.4.3. Designing SQL based standard reports
3.4.4. Designing HTML based standard reports

4. Infrastructure
4.1. Release process

A. R and DHIS 2 Integration
A.1. Introduction
A.2. Using ODBC to retrieve data from DHIS2 into R
A.3. Using R with MyDatamart
A.4. Mapping with R and PostgreSQL
A.5. Using R, DHIS2 and the Google Visualization API
A.6. Using PL/R with DHIS2
A.7. Using this DHIS2 Web API with R

DHIS 2 Technical Architecture

1. Overview
This document outlines the technical architecture for the District Health Information Software 2 (DHIS 2). DHIS 2
is a routine data based health information system which allows for capture, aggregation, analysis, and reporting
of data.

DHIS 2 is written in Java and has a three-layer architecture. The presentation layer is web-based, and the system can
be used on-line as well as stand-alone.

Fig. Overall architecture

2. Technical Requirements
DHIS 2 is intended to be installed and run on a variety of platforms. Hence the system is designed around industry
standards for database management systems and application servers. The system should be extensible and
modular in order to allow for third-party and peripheral development efforts; hence a pluggable architecture is needed.
The technical requirements are:
• Ability to run on any major database management system
• Ability to run on any J2EE compatible servlet container
• Extensibility and modularity in order to address local functional requirements
• Ability to run on-line/on the web
• Flexible data model to allow for a variety of data capture requirements

3. Project Structure
DHIS 2 is made up of 42 Maven projects, out of which 18 are web modules. The root POM is located in /dhis-2
and contains project aggregation for all projects excluding the /dhis-2/dhis-web folder. The /dhis-2/dhis-web
folder has a web root POM which contains project aggregation for all projects within that folder. The contents of the
modules are described later on.

Fig. Project structure

4. Project Dependencies
Dependencies between the projects are structured in five layers. The support modules provide support functionality
for the core and service modules, related to Hibernate, testing, JDBC, and the file system. The core module provides
the core functionality in the system, like persistence and business logic for the central domain objects. The service
modules provide business logic for services related to reporting, import-export, mapping, and administration. The web
modules are self-contained web modules. The portal is a wrapper web module which assembles all the web modules.
Modules from each layer can only have dependencies on modules in the same layer or the layer directly below.

The internal structure of the service layer is divided into five layers.

5. The Data Model


The data model is flexible in all dimensions in order to allow for capture of any item of data. The model is based on
the notion of a DataValue. A DataValue can be captured for any DataElement (which represents the captured item,
occurrence or phenomenon), Period (which represents the time dimension), and Source (which represents the space
dimension, i.e. an organisational unit in a hierarchy).
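
In simplified Java terms the relationship can be sketched as follows (an illustration of the model only, not the actual API classes):

    // Simplified sketch of the core data value model, for illustration only.
    public class DataValue
    {
        private DataElement dataElement; // what: the captured item or phenomenon
        private Period period;           // when: the time dimension
        private Source source;           // where: an organisation unit in a hierarchy
        private String value;            // the captured value itself
    }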

Figure 1. Data value structure

A central concept for data capture is the DataSet. A DataSet is a collection of DataElements for which data is entered,
presented as a list, a grid or a custom designed form. A DataSet is associated with a PeriodType, which represents
the frequency of data capture.

A central concept for data analysis and reporting is the Indicator. An Indicator is basically a mathematical formula
consisting of DataElements and numbers. An Indicator is associated with an IndicatorType, which indicates the factor
by which the output should be multiplied. A typical IndicatorType is percentage, which means the output should
be multiplied by 100. The formula is split into a numerator and a denominator.
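
For example, an indicator for immunization coverage could have the number of doses given as numerator, the target population as denominator, and a percentage IndicatorType (the numbers below are made up):

    // Worked indicator example with made-up numbers.
    double numerator = 152.0;   // aggregated data element: doses given
    double denominator = 200.0; // aggregated data element: target population
    int factor = 100;           // IndicatorType factor for percentage

    double value = numerator / denominator * factor; // 76.0, i.e. 76 %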

Most objects have corresponding group objects, which are intended to improve and enhance data analysis. The data
model source code can be found in the API project and can be explored in its entirety there. A selection of the most
important objects can be viewed in the diagram below.

Fig. Core diagram

6. The Persistence Layer


The persistence layer is based on Hibernate in order to achieve the ability to run on any major DBMS. Hibernate
abstracts the underlying DBMS away and lets you define the database connection properties in a file called
hibernate.properties.
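
As an illustration, a hibernate.properties file for a PostgreSQL installation could look like this (the connection values are examples and must match the local database):

    hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
    hibernate.connection.driver_class = org.postgresql.Driver
    hibernate.connection.url = jdbc:postgresql:dhis2
    hibernate.connection.username = dhis
    hibernate.connection.password = dhis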

DHIS 2 uses Spring-Hibernate integration, and retrieves a SessionFactory through Spring’s LocalSessionFactoryBean.
This LocalSessionFactoryBean is injected with a custom HibernateConfigurationProvider instance which fetches
Hibernate mapping files from all modules currently on the classpath. All store implementations get injected with a
SessionFactory and use this to perform persistence operations.

Most important objects have their corresponding Hibernate store implementation. A store provides methods for
CRUD operations and queries for that object, e.g. HibernateDataElementStore which offers methods such as
addDataElement( DataElement ), deleteDataElement( DataElement ), getDataElementByName( String ), etc.
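
A condensed sketch of such a store could look like the following (the HQL query and Spring wiring are illustrative assumptions, not the actual implementation):

    import org.hibernate.SessionFactory;

    // Condensed sketch of a Hibernate store along the lines of HibernateDataElementStore.
    public class HibernateDataElementStore
    {
        private SessionFactory sessionFactory; // injected by Spring

        public void setSessionFactory( SessionFactory sessionFactory )
        {
            this.sessionFactory = sessionFactory;
        }

        public void addDataElement( DataElement dataElement )
        {
            sessionFactory.getCurrentSession().save( dataElement );
        }

        public void deleteDataElement( DataElement dataElement )
        {
            sessionFactory.getCurrentSession().delete( dataElement );
        }

        public DataElement getDataElementByName( String name )
        {
            return (DataElement) sessionFactory.getCurrentSession()
                .createQuery( "from DataElement d where d.name = :name" )
                .setString( "name", name )
                .uniqueResult();
        }
    }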

Fig. Persistence layer

7. The Business Layer


All major classes, like those responsible for persistence, business logic, and presentation, are mapped as Spring
managed beans. “Bean” is Spring terminology and simply refers to a class that is instantiated, assembled, and otherwise
managed by the Spring IoC container. Dependencies between beans are injected by the IoC container, which allows for
loose coupling, re-configuration and testability. For documentation on Spring, please refer to springframework.org.

The services found in the dhis-service-core project basically provide methods that delegate to a corresponding method
in the persistence layer, or contain simple and self-explanatory logic, as in the sketch below. Some services, like the
ones found in the dhis-service-datamart, dhis-service-import-export, dhis-service-jdbc, and dhis-service-reporting
projects, are more complex and will be elaborated on in the following sections.
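
Such a delegating service could look roughly like this (an illustrative sketch; the class and method names are hypothetical):

    // Illustrative sketch of a core service that delegates to the persistence layer.
    public class DefaultDataElementService
    {
        private DataElementStore dataElementStore; // injected by the IoC container

        public void setDataElementStore( DataElementStore dataElementStore )
        {
            this.dataElementStore = dataElementStore;
        }

        public void addDataElement( DataElement dataElement )
        {
            dataElementStore.addDataElement( dataElement ); // plain delegation
        }
    }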

7.1. The JDBC Service Project


The JDBC service project contains a set of components dealing with JDBC connections and SQL statements.

Fig. JDBC BatchHandler diagram

The BatchHandler interface provides methods for inserting, updating and verifying the existence of objects. Its
purpose is to provide high-performance operations, which is relevant for large amounts of data. Behind the scenes,
the BatchHandler object inserts objects using the multiple-insert SQL syntax and can insert thousands of objects on
each database commit. A typical use-case is an import process where a class using the BatchHandler interface calls
the addObject( Object, bool ) method for every import object. After an appropriate number of added objects, the
BatchHandler transparently commits to the database. A BatchHandler can be obtained from the BatchHandlerFactory
component. BatchHandler implementations exist for most objects in the API. A sketch of such an import loop is
shown below.
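
A minimal sketch of such a loop, assuming a factory method createBatchHandler and init()/flush() life-cycle methods (the exact signatures, including the boolean flag mentioned above, vary between versions):

    // Sketch of an import loop using BatchHandler; names of the factory method,
    // init()/flush() and the meaning of the boolean flag are assumptions.
    BatchHandler batchHandler = batchHandlerFactory.createBatchHandler( DataValue.class );

    batchHandler.init();

    for ( DataValue dataValue : importValues )
    {
        // Buffered internally; committed to the database in large batches.
        batchHandler.addObject( dataValue, true );
    }

    batchHandler.flush(); // commit any remaining buffered objects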

The JdbcConfiguration interface holds information about the current DBMS JDBC configuration, more specifically
dialect, driver class, connection URL, username and password. A JdbcConfiguration object is obtained from the
JdbcConfigurationProvider component, which currently uses the internal Hibernate configuration provider to derive
the information.

The StatementBuilder interface provides methods that represent SQL statements. A StatementBuilder object is
obtained from the StatementBuilderFactory, which is able to determine the current runtime DBMS and provide an
appropriate implementation. Currently, implementations exist for PostgreSQL, MySQL, H2, and Derby.

The IdentifierExtractor interface provides methods for retrieving the last generated identifiers from the DBMS. An
IdentifierExtractor is obtained from the IdentifierExtractorFactory, which is able to determine the runtime DBMS and
provide an appropriate implementation.

Fig. JDBC StatementManager diagram

The StatementHolder interface holds and provides JDBC connections and statements. A StatementHolder object can
be obtained from the StatementManager component. The StatementManager is initialised using the initialise()
method and closed using the destroy() method. When initialised, the StatementManager will open a database connection
and hold it in a ThreadLocal variable, implying that all subsequent requests for a StatementHolder will return the same
instance. This can be used to improve performance since a database connection or statement can be reused for multiple
operations. The StatementManager is typically used in the persistence layer for classes working directly with JDBC,
like the DataMartStore.
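
A typical usage pattern, sketched from the method names above (getHolder() is an assumed accessor):

    // Sketch of StatementManager usage; statementManager is injected by Spring.
    statementManager.initialise(); // opens a connection, bound to the current thread

    try
    {
        StatementHolder holder = statementManager.getHolder(); // assumed accessor
        // ... perform multiple JDBC operations re-using the same connection ...
    }
    finally
    {
        statementManager.destroy(); // closes the connection
    }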

7.2. The Import-Export Project


The import-export project contains classes responsible for producing and consuming interchange format files. The
import process has three variants: import, preview and analysis. Import will import data directly to the
database, preview will import to a temporary location and let the user filter and eventually import, while analysis
will reveal abnormalities in the import data. Currently supported formats are:
• DXF (DHIS eXchange Format)
• IXF (Indicator eXchange Format)
• DHIS 1.4 XML format
• DHIS 1.4 Datafile format
• CSV (Comma Separated Values)
• PDF (Portable Document Format)

Fig. Import-export service diagram

The low-level components doing the actual reading and writing of interchange format files are the converter
classes. The most widely used is the XmlConverter interface, which provides a write( XmlWriter, ExportParams )
and a read( XmlReader, ImportParams ) method. Most objects in the API have corresponding XmlConverter
implementations for the DXF format. Writing and reading for each object is delegated to its corresponding
XmlConverter implementation.

The ExportParams object is a specification which holds the identifiers of the objects to be exported. The converter
retrieves the corresponding objects and writes content to the XmlWriter. XmlConverter implementations for the DXF
format exist for most objects in the API. For instance, the write method of class DataElementConverter will write data
that represents DataElements in DXF XML syntax to the XmlWriter.

The ExportService interface exposes a method InputStream exportData( ExportParams ). The ExportService is
responsible for instantiating the appropriate converters and invoking their export methods. To avoid long requests prone
to timeout errors in the presentation layer, the actual export work happens in a separate thread. The ExportService
registers its converters on the ExportThread class using its registerXmlConverter( XmlConverter ) method, and then
starts the thread.

The ImportParams object contains directives for the import process, like type and strategy. For instance, the read method
of class DataElementConverter will read data from the XmlReader, construct objects from the data and potentially
insert them into the database, according to the directives in the ImportParams object.

The ImportService interface exposes a method importData( ImportParams, InputStream ). The ImportService is
responsible for instantiating the appropriate converters and invoking their import methods. The import process makes
heavy use of the BatchHandler interface.

The ImportExportServiceManager interface provides methods for retrieving all ImportServices and ExportServices,
as well as retrieving a specific ImportService or ExportService based on a format key. This makes it simple for client
classes to retrieve the correct service, since the name of the format can be used as a parameter in order to get an instance
of the corresponding service. This is implemented with a Map as the backing structure, where the key is the format
and the value is the Import- or ExportService reference. This map is defined in the Spring configuration, and it is
delegated to Spring to instantiate and populate the map. This allows for extensibility, as developing a new import service
is simply a matter of providing an implementation of the ImportService interface and adding it to the map definition in
the Spring configuration, without touching the ImportExportServiceManager code.
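
The backing structure is essentially a map from format key to service, as in this simplified sketch (the class and property names are illustrative):

    import java.util.Map;

    // Simplified sketch of format-keyed service lookup; the map is populated by Spring.
    public class ImportExportServiceManagerSketch
    {
        private Map<String, ImportService> importServices; // key: format name

        public void setImportServices( Map<String, ImportService> importServices )
        {
            this.importServices = importServices; // defined in the Spring configuration
        }

        public ImportService getImportService( String format )
        {
            return importServices.get( format ); // e.g. "DXF" or "CSV"
        }
    }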

Fig. Import-export converter diagram

Functionality that is general for converters of all formats is centralized in abstract converter classes. The
AbstractConverter class provides four abstract methods which must be implemented by the converters that use it:
importUnique( Object ), importMatching( Object, Object ), Object getMatching() and boolean isIdentical( Object,
Object ). It also provides a read( Object, GroupMemberType, ImportParams ) method that should be invoked by all
converters at the end of the read process for every object. This method utilizes the mentioned abstract methods and
dispatches the object to the analysis, preview or import routines depending on the state of the object and the current
import directives. This allows for extensibility, as converters for new formats can extend their corresponding abstract
converter class and reuse this functionality.

7.3. The Data Mart Project


The data mart component is responsible for producing aggregated data from the raw data in the time and space
dimension. The aggregated data is represented by the AggregatedDataValue and AggregatedIndicatorValue objects.
The DataSetCompletenessResult object is also included in the data mart and is discussed in the section covering the
reporting project. These objects and their corresponding database tables are referred to as the data mart.

The following section will list the rules for aggregation in DHIS 2.

• Data is aggregated in the time and space dimensions. The time dimension is represented by the Period object and
the space dimension by the OrganisationUnit object, organised in a parent-child hierarchy.
• Data registered for all periods which intersect with the aggregation start and end date is included in the aggregation
process. Data for periods which are not fully within the aggregation start and end date is weighted according to the
factor “number of days within aggregation period / total number of days in period” (a worked example follows
below this list).
• Data registered for all children of the aggregation OrganisationUnit is included in the aggregation process.
• Data registered for a data element is aggregated based on the aggregation operator and data type of the data element.
The aggregation operator can be sum (values are summarized), average (values are averaged) and count (values
are counted). The data type can be string (text), int (number), and bool (true or false). Data of type string can not
be aggregated.
• Aggregated data of type sum – int is presented as the summarized value.
• Aggregated data of type sum – bool is presented as the number of true registrations.
• Aggregated data of type average – int is presented as the averaged value.
• Aggregated data of type average – bool is presented as a percentage value of true registrations in proportion to
the total number of registrations.
• An indicator represents a formula based on data elements. Only data elements with aggregation operator sum or
average and with data type int can be used in indicators. First, data is aggregated for the data elements included
in the indicator; then the indicator formula is calculated.
• A calculated data element represents a formula based on data elements. The difference from an indicator is that the
formula is of the form “data element * factor”. The aggregation rules for indicators apply here as well.
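
As a worked example of the weighting rule above, consider a value of 70 registered for a weekly period of which only four days fall within a monthly aggregation period (all numbers are made up):

    // Worked example of period weighting (illustrative numbers).
    double registeredValue = 70.0;       // value registered for the weekly period
    int daysWithinAggregationPeriod = 4; // days of the week inside the month
    int totalDaysInPeriod = 7;           // total days in the weekly period

    double weighted = registeredValue
        * ( (double) daysWithinAggregationPeriod / totalDaysInPeriod ); // 40.0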

Fig. Data mart diagram

The AggregationCache component provides caching in ThreadLocal variables. This caching layer is introduced to get
optimal caching [9]. The most frequently used method calls in the data mart component are represented here.

The DataElementAggregator interface is responsible for retrieving data from the crosstabulated temporary storage and
aggregating data in the time and space dimension. This happens according to the combination of data element aggregation
operator and data type the class represents. One implementation exists for each of the four valid combinations,
namely SumIntDataElementAggregator, SumBoolDataElementAggregator, AverageIntDataElementAggregator and
AverageBoolDataElementAggregator.

The DataElementDataMart component utilizes a DataElementAggregator and is responsible for writing aggregated
data element data to the data mart for a given set of data elements, periods, and organisation units.

The IndicatorDataMart component utilizes a set of DataElementAggregators and is responsible for writing aggregated
indicator data to the data mart for a given set of indicators, periods, and organisation units.

The CalculatedDataElementDataMart component utilizes a set of DataElementAggregators and is responsible for
writing aggregated data element data to the data mart for a given set of calculated data elements, periods, and
organisation units.

The DataMartStore is responsible for retrieving aggregated data element and indicator data, and data from the
temporary crosstabulated storage.

The CrossTabStore is responsible for creating, modifying and dropping the temporary crosstabulated table. The
CrossTabService is responsible for populating the temporary crosstabulated table. This table is used in an intermediate
step in the aggregation process. The raw data is de-normalized on the data element dimension, in other words
the crosstabulated table gets one column for each data element. This step implies improved performance since the
aggregation process can be executed against a table with a reduced number of rows compared to the raw data table.

The DataMartService is the central component in the data mart project and controls the aggregation process. The order
of operations is:
• Existing aggregated data for the selected parameters is deleted.
• The temporary crosstabulated table is created and populated using the CrossTabService component.
• Data element data for the previously mentioned valid variants is exported to the data mart using the
DataElementDataMart component.
• Indicator data is exported to the data mart using the IndicatorDataMart component.
• Calculated data element data is exported to the data mart using the CalculatedDataElementDataMart component.
• The temporary crosstabulated table is removed.

The data element tables are called “aggregateddatavalue” and “aggregatedindicatorvalue” and are used both inside
DHIS 2 for e.g. report tables and by third-party reporting applications like MS Excel.

7.4. The Reporting Project


The reporting project contains components related to reporting, which will be described in the following sections.

7.4.1. Report table

The ReportTable object represents a crosstabulated database table. The table can be crosstabulated on any number
of its three dimensions, which are the descriptive dimension (which can hold data elements, indicators, or data set
completeness), period dimension, and organisation unit dimension. The purpose is to be able to customize tables for
later use either in third-party reporting tools like BIRT or directly in output formats like PDF or HTML inside the
system. Most of the logic related to crosstabulation is located in the ReportTable object. A ReportTable can hold:
• Any number of data elements, indicators, data sets, periods, and organisation units.
• A RelativePeriods object, which holds 10 variants of relative periods. Examples of such periods are last 3 months, so
far this year, and last 3 to 6 months. These periods are relative to the reporting month. The purpose of this is to make
the report table re-usable in time, i.e. avoid the need for the user to replace periods in the report table as time goes by.

• A ReportParams object, which holds report table parameters for reporting month, parent organisation unit, and
current organisation unit. The purpose is to make the report table re-usable across the organisation unit hierarchy
and in time, i.e. make it possible for the user to re-use the report table across organisation units and as time goes by.
• User options such as regression lines. Value series which represent regression values can be included when the
report table is crosstabulated on the period dimension.

Fig. Report table diagram

The ReportTableStore is responsible for persisting ReportTable objects, and currently has a Hibernate implementation.

The ReportTableService is responsible for performing business logic related to report tables such as generation of
relative periods, as well as delegating CRUD operations to the ReportTableStore.

The ReportTableManager is responsible for creating and removing report tables, as well as retrieving data.

The ReportTableCreator is the key component, and is responsible for:

• Exporting relevant data to the data mart using the DataMartExportService or the DataSetCompletenessService. Data
will later be retrieved from here and used to populate the report table.
• Creating the report table using the ReportTableManager.
• Including potential regression values.
• Populating the report table using a BatchHandler.
• Removing the report table using the ReportTableManager.

7.4.2. Chart
The Chart object represents preferences for charts. Charts are either period based or organisation unit based. A chart
has three dimensions, namely the value, category, and filter dimension. The value dimension contains any number
of indicators. In the period based chart, the category dimension contains any number of periods while the filter
dimension contains a single organisation unit. In the organisation unit based chart, the category dimension contains
any number of organisation units while the filter dimension contains a single period. Two types of charts are available,
namely bar charts and line charts. Charts are materialized using the JFreeChart library. The bar charts are rendered
with a BarRenderer [2], the line charts with a LineAndShapeRenderer [2], while the data source for both variants
is a DefaultCategoryDataset [3]. The ChartStore is responsible for CRUD operations, while the ChartService is
responsible for creating JFreeChart instances based on a Chart object.
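
A minimal, self-contained JFreeChart example along these lines (illustrative data, not actual DHIS 2 code):

    import org.jfree.chart.ChartFactory;
    import org.jfree.chart.JFreeChart;
    import org.jfree.chart.plot.PlotOrientation;
    import org.jfree.data.category.DefaultCategoryDataset;

    public class BarChartSketch
    {
        public static void main( String[] args )
        {
            // Value dimension: one indicator; category dimension: periods.
            DefaultCategoryDataset dataset = new DefaultCategoryDataset();
            dataset.addValue( 76.0, "Immunization coverage", "Jan" );
            dataset.addValue( 81.0, "Immunization coverage", "Feb" );
            dataset.addValue( 79.0, "Immunization coverage", "Mar" );

            // Filter dimension: a single organisation unit, shown in the title.
            JFreeChart chart = ChartFactory.createBarChart(
                "Immunization coverage - District Hospital",
                "Period", "Percent", dataset,
                PlotOrientation.VERTICAL, true, false, false );
        }
    }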

Fig. Chart diagram

7.4.3. Data set completeness

The purpose of the data set completeness functionality is to record the number of data sets that have been completed.
The definition of when a data set is complete is subjective and based on a function in the data entry screen where the user
can mark the current data set as complete. This functionality provides a percentage completeness value based on the
number of reporting organisation units with completed data sets compared to the total number of reporting organisation
units for a given data set. This functionality also provides the number of completed data sets reported on-time, more
specifically reported before a defined number of days after the end of the reporting period. This deadline is configurable.

Fig. Data set completeness diagram

The CompleteDataSetRegistration object represents a data set marked as complete by a user. It holds the
data set, period, organisation unit and the date when the complete registration took place. The
CompleteDataSetRegistrationStore is responsible for persistence of CompleteDataSetRegistration objects and
provides methods returning collections of objects queried with different variants of data sets, periods, and organisation
units as input parameters. The CompleteDataSetRegistrationService mainly delegates method calls to the store layer.
These components are located in the dhis-service-core project.

The completeness output is represented by the DataSetCompletenessResult object. This object holds information
about the request that produced it, such as data set, period and organisation unit, and information about the data
set completeness situation, such as the number of reporting organisation units, the number of complete registrations,
and the number of complete registrations on-time. The DataSetCompletenessService is responsible for the business
logic related to data set completeness reporting. It provides methods which mainly return collections of
DataSetCompletenessResults and take different variants of period, organisation unit and data set as parameters. It
uses the CompleteDataSetRegistrationService to retrieve the number of registrations, the DataSetService to retrieve
the number of reporting organisation units, and performs calculations to derive the completeness percentage based on
these retrieved numbers.
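
The percentage itself is a simple ratio, as in this sketch with made-up numbers:

    // Worked example of the completeness percentage (illustrative numbers).
    int reportingOrgUnits = 120;    // org units expected to report the data set
    int completeRegistrations = 90; // org units that marked the data set complete

    double completeness = 100.0 * completeRegistrations / reportingOrgUnits; // 75.0 %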

The DataSetCompletenessExportService is responsible for writing DataSetCompletenessResults to a database table
called “aggregateddatasetcompleteness”. This functionality is considered to be part of the data mart, as this data can be
used both inside DHIS 2, e.g. for report tables, and by third-party reporting applications like MS Excel. This component
retrieves data set completeness information from the DataSetCompletenessService and uses the BatchHandler
interface to write such data to the database.

7.4.4. Document
The Document object represents either a document which is uploaded to the system or a URL. The DocumentStore is
responsible for persisting Document objects, while the DocumentService is responsible for business logic.

Fig. Document diagram

7.4.5. Pivot table


The PivotTable object represents a pivot table. It can hold any number of indicators, periods, organisation units, and
corresponding aggregated indicator values. It offers basic pivot functionality like pivoting and filtering the table on all
dimensions. The business logic related to pivot tables is implemented in JavaScript and is located in the presentation
layer. The PivotTableService is responsible for creating and populating PivotTable objects.

7.4.6. The External Project


The LocationManager component is responsible for the communication between DHIS 2 and the file system of the
operating system. It contains methods which provide read access to files through File and InputStream instances, and
write access to the file system through File and OutputStream instances. The target location is relative to a system
property “dhis2.home” and an environment variable “DHIS2_HOME” in that order. This component is used e.g. by
the HibernateConfigurationProvider to read in the Hibernate configuration file, and should be re-used by all new
development efforts.

The ConfigurationManager is a component which facilitates the use of configuration files for different purposes in
DHIS 2. It provides methods for writing and reading configuration objects to and from XML. The XStream library is
used to implement this functionality. This component is typically used in conjunction with the LocationManager.

7.5. The System Support Project


The system support project contains supportive classes that are general and can be reused throughout the system.

7.5.1. DeletionManager
The deletion manager solution is responsible for deletion of associated objects. When an object has a dependency on
another object, this association needs to be removed by the application before the latter object can be deleted (unless the
association is defined to be cascading in the DBMS). Often an object in a peripheral module will have an association
to a core object. When deleting the core object, this association must be removed first. The core module cannot have a
dependency on the peripheral module, however, due to the system design and the problem of cyclic dependencies. The
deletion manager solves this by letting all objects implement a DeletionHandler which takes care of associations to
other objects. A DeletionHandler should override methods for objects that, when deleted, will affect the current object
in any way. The DeletionHandler can choose to disallow the deletion completely by overriding the allowDelete*
method, or choose to allow the deletion and remove the associations by overriding the delete* method. E.g. a
DeletionHandler for DataElementGroup should override the deleteDataElement(..) method, which should remove the
DataElement from all DataElementGroups. If one decides that a DataElement which is a member of any
DataElementGroup cannot be deleted, it should override the allowDeleteDataElement() method and return false if
there exist DataElementGroups with associations to that DataElement.


First, all DeletionHandler implementations are registered with the DeletionManager through a Spring
MethodInvokingFactoryBean in the Spring config file. This solution adheres to the observer design pattern.

Second, all method invocations that should make the DeletionManager execute are mapped to the DeletionInterceptor
with Spring AOP advice in the Spring config file. The DeletionInterceptor in turn invokes the execute method
of the DeletionManager. First, the DeletionManager will through reflection invoke the allowDelete* method on
all DeletionHandlers. If no DeletionHandlers returned false it will proceed to invoke the delete* method on all
DeletionHandlers. This way all DeletionHandlers get a chance to clean up associations to the object being deleted.
Finally the object itself is deleted.
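
To make the convention concrete, a minimal sketch of such a handler is shown below. The DeletionHandler base
class, the domain classes and the store interface are simplified assumptions made for the purpose of the example
and do not reproduce the actual DHIS 2 implementation.

// Illustrative sketch only; the real DHIS 2 base class and store API may differ.
public class DataElementGroupDeletionHandler
    extends DeletionHandler
{
    private final DataElementGroupStore groupStore; // assumed store abstraction

    public DataElementGroupDeletionHandler( DataElementGroupStore groupStore )
    {
        this.groupStore = groupStore;
    }

    // Invoked through reflection by the DeletionManager before a DataElement
    // is deleted: remove the association from every group that contains it.
    public void deleteDataElement( DataElement dataElement )
    {
        for ( DataElementGroup group : groupStore.getGroupsContaining( dataElement ) )
        {
            group.getMembers().remove( dataElement );
            groupStore.update( group );
        }
    }

    // Alternatively, deletion could be disallowed while memberships exist by
    // overriding the corresponding allowDelete* method instead:
    public boolean allowDeleteDataElement( DataElement dataElement )
    {
        return groupStore.getGroupsContaining( dataElement ).isEmpty();
    }
}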

8. The Presentation Layer


The presentation layer of DHIS 2 is based on web modules which are assembled into a portal. This implies
a modularized design where each module has its own domain, e.g. the dhis-web-reporting module deals with
reports, charts, pivot tables, documents, while the dhis-web-maintenance-dataset module is responsible for data set
management. The web modules are based on Struts and follow the MVC pattern [5]. The modules also follow the
Maven standard for directory layout, which implies that Java classes are located in src/main/java, configuration files
and other resources in src/main/resources, and templates and other web resources in src/main/webapp. All modules
can be run as a standalone application.

Common Java classes, configuration files, and property files are located in the dhis-web-commons project, which is
packaged as a JAR file. Common templates, style sheets and other web resources are located in the dhis-web-commons-
resources project, which is packaged as a WAR file. These are closely related but are separated into two projects. The
reason for this is that other modules must be able to have compile dependencies on the common Java code, which
requires it to be packaged as a JAR file. For other modules to be able to access the common web resources, these must
be packaged as a WAR file [6].

8.1. The Portal


DHIS 2 uses a light-weight portal construct to assemble all web modules into one application. The portal functionality
is located in the dhis-web-portal project. The portal solution is integrated with Struts, and the following section requires
some prior knowledge about this framework; please refer to struts.apache.org for more information.

8.1.1. Module Assembly


All web modules are packaged as WAR files. The portal uses the Maven WAR plug-in to assemble the common web
modules and all web modules into a single WAR file. Which modules are included in the portal can be controlled
simply through the dependency section in the POM file [7] in the dhis-web-portal project. The web module WAR files
will be extracted and their content merged together.

8.1.2. Portal Module Requirements


The portal requires the web modules to adhere to a few principles:
• The web resources must be located in a folder src/main/webapp/<module-artifact-id>.
• The xwork.xml configuration file must extend the dhis-web-commons.xml configuration file.
• The action definitions in xwork.xml for a module must be in a package where the name is <module-artifact-id>,
namespace is /<module-artifact-id>, and which extends the dhis-web-commons package.
• All modules must define a default action called index.
• The web.xml of the module must define a redirect filter, open-session-in-view filter, security filter, and the Struts
FilterDispatcher [8].
• All modules must have dependencies to the dhis-web-commons and dhis-web-commons-resources projects.

8.1.3. Common Look-And-Feel


Common look and feel is achieved using a back-bone Velocity template which includes a page template and a menu
template defined by individual actions in the web modules. This is done by using static parameters in the Struts/Xwork
xwork.xml configuration file. The action response is mapped to the back-bone template called main.vm, while static
parameters called page and menu refer to the templates that should be included. This allows the web modules to
display their desired content and left side menu while maintaining a common look-and-feel.

8.1.4. Main Menu


The main menu contains links to each module. Each menu link will redirect to the index action of each module. The
menu is updated dynamically according to which web modules are on the classpath. The menu is generated
using the ModuleManager component, which provides information about which modules are currently included. A
module is represented by the Module object, which holds properties about the name, package name, and default action
name. The ModuleManager detects web modules by reading the Struts Configuration and PackageConfig objects, and
derives the various module names from the name of each package definition. The Module objects are loaded onto the
Struts value stack by Struts interceptors using the ModuleManager. These values are finally used in the back-bone
Velocity template to produce the menu mark-up.

9. Framework Stack
The following frameworks are used in the DHIS 2 application.

9.1. Application Frameworks


• Hibernate (www.hibernate.org)
• Spring (www.springframework.org)
• Struts (struts.apache.org)
• Velocity (www.velocity.apache.org)
• Commons (www.commons.apache.org)
• JasperReports (jasperforge.org/projects/jasperreports)
• JFreeChart (www.jfree.org/jfreechart/)
• JUnit (www.junit.org)

9.2. Development Frameworks


• Maven (maven.apache.org)
• Bazaar (bazaar-vcs.org)

10. Definitions
[1] “Classpath” refers to the root of a JAR file, /WEB-INF/lib or /WEB-INF/classes in a WAR-file and /src/main/
resources in the source code; locations from where the JVM is able to load classes.

[2] JFreeChart class located in the org.jfree.chart.renderer package.

[3] JFreeChart class located in the org.jfree.data.category package.

[4] Operations related to creating, retrieving, updating, and deleting objects.

[5] Model-View-Controller, design pattern for web applications which separates mark-up code from application logic
code.

[6] The WAR-file dependency is a Maven construct and allows projects to access the WAR file contents during runtime.

[7] Project Object Model, the key configuration file in a Maven 2 project.

[8] Represents the front controller in the MVC design pattern in Struts.


[9] Hibernate second-level cache does not provide satisfactory performance.


Chapter 1. Web API


The Web API is a component which makes it possible for external systems to access and manipulate data stored in an
instance of DHIS 2. More precisely, it provides a programmatic interface to a wide range of exposed data and service
methods for applications such as third-party software clients, web portals and internal DHIS 2 modules.

1.1. Introduction
The Web API adheres to many of the principles behind the REST architectural style. To mention some few and
important ones:
1. The fundamental building blocks are referred to as resources. A resource can be anything exposed to the Web,
from a document to a business process - anything a client might want to interact with. The information aspects of a
resource can be retrieved or exchanged through resource representations. A representation is a view of a resource's
state at any given time. For instance, the reportTable resource in DHIS represents a tabular report of aggregated
data for a certain set of parameters. This resource can be retrieved in a variety of representation formats including
HTML, PDF, and MS Excel.
2. All resources can be uniquely identified by a URI (also referred to as URL). All resources have a default
representation. You can indicate that you are interested in a specific representation by supplying an Accept HTTP
header, a file extension or a format query parameter. So in order to retrieve the PDF representation of a report table
you can supply an Accept: application/pdf header or append .pdf or ?format=pdf to your request URL.
3. Interactions with the API require correct use of HTTP methods or verbs. This implies that for a resource you must
issue a GET request when you want to retrieve it, a POST request when you want to create one, PUT when you want to
update it and DELETE when you want to remove it. So if you want to retrieve the default representation of a report
table you can send a GET request to e.g. /reportTable/iu8j/hYgF6t, where the last part is the report table identifier.
4. Resource representations are linkable, meaning that representations advertise other resources which are relevant to
the current one by embedding links into itself. This feature greatly improves the usability and robustness of the API
as we will see later. For instance, you can easily navigate to the indicators which are associated with a report table
from the reportTable resource through the embedded links using your preferred representation format.

While all of this might sound complicated, the Web API is actually very simple to use. We will proceed with a few
practical examples in a minute.

1.2. Authentication
In order to interoperate with the Web API you will have to authenticate using Basic authentication. Basic authentication
is a technique for clients to send login credentials over HTTP to a web server. Technically speaking, the username is
appended with a colon and the password, Base64-encoded, prefixed with Basic and supplied as the value of the Authorization
HTTP header. More formally, that is Authorization: Basic base64encode(username:password). An
important note is that this authentication scheme provides no security since the username and password are sent in plain
text and can be easily decoded. Using it is recommended only if the server is using SSL/TLS (HTTPS) to encrypt
communication between itself and the client. Most DHIS 2 deployments typically use SSL today - consider it a hard
requirement to provide secure interactions with the Web API.

If you are building a form-based web application and want to authenticate using a web form you can have the form send
a POST request to the login endpoint in DHIS which is /dhis-web-commons-security/login.action?authOnly=true .
Two request parameters, j_username and j_password, containing the username and password in clear-text respectively,
are expected. The browser will then receive a cookie which will be used for authentication for subsequent requests.
The purpose of the authOnly parameter is to avoid a time-consuming redirect to the home page of the user.
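
A minimal sketch of such a form login with cURL, assuming the demo server and the admin/district demo credentials
used elsewhere in this guide (the -c flag stores the returned session cookie in a local file for reuse):

curl -c cookie.txt -d "j_username=admin&j_password=district" "https://apps.dhis2.org/demo/dhis-web-commons-security/login.action?authOnly=true" -X POST -v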

You can verify and get information about the currently authenticated user by making a GET request to the following
URL:

/api/currentUser
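
For example, using cURL against the demo server with the demo credentials (the -u flag makes cURL construct the
Basic Authorization header described above):

curl "https://apps.dhis2.org/demo/api/currentUser" -H "Accept: application/json" -u admin:district -v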


1.3. Date and period format


Throughout the Web API we refer to dates and periods. The date format is:

yyyy-MM-dd

For instance, if you want to express March 20, 2014 you must use 2014-03-20.

The period format is described in the following table.

Table 1.1. Period format

Interval              Format       Example      Description
Day                   yyyyMMdd     20040315     March 15 2004
Week                  yyyyWn       2004W10      Week 10 2004
Month                 yyyyMM       200403       March 2004
Quarter               yyyyQn       2004Q1       January-March 2004
Six-month             yyyySn       2004S1       January-June 2004
Six-month April       yyyyAprilSn  2004AprilS1  April-September 2004
Year                  yyyy         2004         2004
Financial Year April  yyyyApril    2004April    April 2004-March 2005
Financial Year July   yyyyJuly     2004July     July 2004-June 2005
Financial Year Oct    yyyyOct      2004Oct      October 2004-September 2005

In some parts of the API, like for the analytics resource, you can utilize relative periods in addition to fixed periods
(defined above). The relative periods are relative to the current date, and allows e.g. for creating dynamic reports. The
available relative period values are:

LAST_MONTH, LAST_BIMONTH, LAST_QUARTER, LAST_SIX_MONTH, MONTHS_THIS_YEAR, QUARTERS_THIS_YEAR,
THIS_YEAR, MONTHS_LAST_YEAR, QUARTERS_LAST_YEAR, LAST_YEAR, LAST_5_YEARS, LAST_12_MONTHS,
LAST_3_MONTHS, LAST_6_BIMONTHS, LAST_4_QUARTERS, LAST_2_SIXMONTHS, THIS_FINANCIAL_YEAR,
LAST_FINANCIAL_YEAR, LAST_5_FINANCIAL_YEARS, LAST_WEEK, LAST_4_WEEKS, LAST_12_WEEKS,
LAST_52_WEEKS
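
As a hedged illustration of how a relative period could be used with the analytics resource, the request below asks
for values for one data element across the last 12 months; the dx and ou identifiers are placeholders borrowed from
examples elsewhere in this guide:

/api/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:LAST_12_MONTHS&dimension=ou:ImspTQPwCqd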

1.4. Browsing the Web API


The entry point for browsing the Web API is /api/. This resource provides links to all available resources. Four resource
representation formats are consistently available for all resources: HTML, XML, JSON and JSONP. Some resources
will have other formats available, like MS Excel, PDF, CSV and PNG. To explore the API from a web browser,
navigate to the /api/ entry point and follow the links to your desired resource, for instance /api/dataElements. For all
resources which return a list of elements certain query parameters can be used to modify the response:

Table 1.2. Query parameters

Param     Option values           Default  Description
links     true | false            true     Indicates whether to include links to relevant elements.
paging    true | false            true     Indicates whether to return lists of elements in pages.
page      number                  1        Defines which page number to return.
pageSize  number                  50       Defines the number of elements to return for each page.
order     propertyName:asc/desc            Order the output using the specified order; only properties that are both persisted and simple (no collections, idObjects etc.) are supported.

An example of how these parameters can be used to get a full list of data element groups in XML response format is:

/api/dataElementGroups.xml?links=false&paging=false
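
A hedged example combining paging and ordering, with parameter values chosen purely for illustration:

/api/dataElements.json?page=2&pageSize=25&order=name:asc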

You can query for elements on the name property instead of returning the full list of elements using the query
parameter. In this example we query for all data elements with the word "anaemia" in the name:

/api/dataElements?query=anaemia

You can find an object based on its ID across all object types through the identifiableObjects resource:

/api/identifiableObjects/<id>

1.4.1. Translation
Support for I18n translation in the web-api was added in the 2.19 release. It is supported by two parameters:

Table 1.3. Translate options

Parameter  Values         Description
translate  true | false   Translate web-api output; the display* properties will be used (displayName, displayShortName, displayDescription).
locale     Locale to use  Translate web-api output using the specified locale (implies translate=true).
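
As a hedged example, assuming a French locale is available on the server, translated data elements could be
requested like this:

/api/dataElements.json?translate=true&locale=fr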

1.5. Working with the meta-data API


The meta-data resource can be accessed at /api/metaData. This resource lets you read and write the full set of meta-
data. This section will give a basic introduction to working with this API. For specific synchronization issues, please
see the integration chapter.

By default, interacting with /api/metaData using the GET HTTP method will give you all meta-data rendered as XML.
You can also be more specific about the meta-data elements you are interested in.
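
As a hedged sketch, combining the assumeTrue and resource parameters described in the following sections, a request
for only data elements could look like this:

/api/metaData.json?assumeTrue=false&dataElements=true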

1.5.1. Content types


The Web API offers several content types for meta-data.

Table 1.4. Available Content-Types

Content-Type              URL extension  Description
application/xml           .xml           Returns the meta-data in XML representation
application/json          .json          Returns the meta-data in JSON representation
application/pdf           .pdf           Returns the meta-data as a PDF document
application/csv           .csv           Returns the meta-data as a CSV file
application/vnd.ms-excel  .xls | .xlsx   Returns the meta-data as an Excel workbook

1.5.2. Query parameters


The following query parameters are available for customizing your request.

Table 1.5. Available Query Filters

Param          Type     Required  Options (default first)                        Description
assumeTrue     boolean  false     true | false                                   Indicates whether to get all resources or no resources by default.
viewClass      enum     false     export | basic | detailed                      Alternative views of the meta-data. Please note that only meta-data exported with viewClass=export can be used for import.
dryRun         boolean  false     false | true                                   If you set this to true, the actual import will not happen. Instead the system will generate a summary of what would have been done.
{resources}    boolean  false     true | false (default depends on assumeTrue)   See /api for available resources. Indicates which resources to include in the response.
lastUpdated    date     false     yyyy, yyyy-MM, yyyy-MM-dd, yyyyMM, yyyyMMdd    Filters the meta-data based on the lastUpdated field.
preheatCache   boolean  false     true | false                                   Turn cache-map preheating on/off. This is on by default; turning it off will make the initial load time for the importer much shorter (but will make the import itself slower). Mostly useful when you have a small XML file to import and don't want to wait for cache-map preheating.
strategy       enum     false     CREATE_AND_UPDATE | CREATE | UPDATE | DELETE   Import strategy to use, see below for more information.
sharing        boolean  false     false | true                                   Indicates whether sharing should be supported. The default is false, which is the old behavior. Set this to true to allow updating the user, publicAccess and userGroupAccesses fields (if not, they are cleared out on create and not touched on update).
mergeStrategy  enum     false     MERGE_ALWAYS, MERGE_IF_NOT_NULL                Strategy for merging objects when doing updates. MERGE_ALWAYS will overwrite the property with the new value provided; MERGE_IF_NOT_NULL will only set the property if it is not null (i.e. only if the property was provided).

1.5.3. Available strategies for import


When importing data using the metaData resource you can define various strategies for import.

Table 1.6. Available Strategies

Type Description
CREATE_AND_UPDATE Allows creation and updating of objects.
CREATE Allows creation of objects only.
UPDATE Allows update of objects only.
DELETE Allows deletes of objects only.
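
As a hedged sketch, an import restricted to creating new objects only could pass the strategy as a query parameter,
following the cURL pattern used in the examples below:

curl -H "Content-Type: application/xml" -u admin:district -d @metaData.xml "https://apps.dhis2.org/demo/api/metaData?strategy=CREATE" -X POST -v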

1.5.4. Examples
Example: Get a filtered set of meta-data that was updated since August 1, 2014


As described in the last section, there are a number of options you can apply to /api/metaData to give you a filtered
view. The use-case we will be looking into here is where you want a nightly job that synchronizes organisation units.
We will be using cURL as the HTTP client.

curl -H "Accept: application/xml" -u admin:district


"https://apps.dhis2.org/demo/api/metaData?
assumeTrue=false&organisationUnits=true&lastUpdated=2014-08-01"

Example: Get meta-data that was updated since February 2014

This example will just use the default assumeTrue setting, along with getting the last updates from February 2014. This
means that every single type that has been updated will be retrieved.

curl -H "Accept: application/xml" -u admin:district "https://apps.dhis2.org/demo/api/metaData?lastUpdated=2014-02"

Example: Create meta-data

The meta-data resource can also be used to create or update meta-data by using the POST HTTP method. The meta-
data content can be both XML and JSON, using "application/xml" and "application/json" content type respectively.
The request payload content will be accepted in several formats, including plain text, zipped and gzipped. POSTing
a meta-data payload can be done for example like this, where metaData.xml is a file in the same directory with the
meta-data content:

curl -H "Content-Type: application/xml" -u admin:district -d @metaData.xml "https://


apps.dhis2.org/demo/api/metaData" -X POST -v

The import will happen in an asynchronous process, which implies that the response will return as soon as the process is
started. The response status code to be expected is 204 No Content.

1.6. Meta-data filtering


To filter the meta-data there are several filter operations that can be applied to the returned list of meta-data. The format
of the filter itself is straightforward and follows the pattern property:operator:value, where property is the property on
the meta-data you want to filter on, operator is the comparison operator you want to perform and value is the value to
check against (not all operators require a value). Please see the schema section to discover which properties are available.
Recursive filtering, i.e. filtering on associated objects or collections of objects, is supported as well.

Table 1.7. Available Operators

Operator      Types                                                                           Value required  Description
eq            string | boolean | integer | float | enum | collection (checks for size) | date  true           Equality
ne            string | boolean | integer | float | collection (checks for size) | date         true           Inequality
like / ilike  string                                                                           true           Case insensitive string matching
nlike         string                                                                           true           Case insensitive string not matching
startsWith    string                                                                           true           Case insensitive matching at the start of the string
endsWith      string                                                                           true           Case insensitive matching at the end of the string
gt            string | boolean | integer | float | collection (checks for size) | date         true           Greater than
ge            string | boolean | integer | float | collection (checks for size) | date         true           Greater than or equal
lt            string | boolean | integer | float | collection (checks for size) | date         true           Less than
le            string | boolean | integer | float | collection (checks for size) | date         true           Less than or equal
null          all                                                                              false          Property is null
empty         collection                                                                       false          Collection is empty

Filters using different operators are combined as a logical AND query, while filters using equal operators are combined
as a logical OR query. The filtering mechanism allows for recursion. See below for an example:

Example: Get data elements with id property ID1 or ID2:

/api/dataElements?filter=id:eq:ID1&filter=id:eq:ID2

Example: Get all data elements which have a dataSet with id ID1:

/api/dataElements?filter=dataSets.id:eq:ID1

Example: Get all data elements with aggregation operator "sum" and value type "int":

/api/dataElements.json?filter=aggregationOperator:eq:sum&filter=type:eq:int

1.7. Meta-data field filter


In certain situations the default views of the meta-data can be too verbose. E.g. the client might only need a few
fields from each object and want to remove the unnecessary ones. To discover which fields are available for each object
please see the schema section.

The format for include/exclude is very simple and allows for infinite recursion. To filter at the "root" level you
can just use the name of the field, i.e. ?fields=id,name which would only display the id and name for every object.
For objects that are either collections or complex objects with properties of their own you can use the format ?
fields=id,name,dataSets[id,name] which would return id and name of the root, and the id and name of every data set on
that object. Negation can be done with the exclamation operator, and there is a set of field selection presets (see below).
Both XML and JSON are supported.

Example: Get id and name on the indicators resource:

/api/indicators?fields=id,name

Example: Get id and name from dataElements, and id and name from the dataSets on dataElements:

/api/dataElements?fields=id,name,dataSets[id,name]

To exclude a field from the output you can use the exclamation (!) operator. This is allowed anywhere in the query and
will simply not include that property (as it might have been inserted in some of the presets).


A few presets (selected fields groups) are available and can be applied using the ':' operator.

Table 1.8. Property operators

Operator                                Description
<field-name>                            Include property with name, if it exists.
<object>[<field-name>, ...]             Includes a field within either a collection (will be applied to every object in that collection), or just on a single object.
!<field-name>, <object>[!<field-name>]  Do not include this field name; also works inside objects/collections. Useful when you use a preset to include fields.
*, <object>[*]                          Include all fields on a certain object; if applied to a collection, it will include all fields on all objects in that collection.
:<preset>                               Alias to select multiple fields; see the table below for the available presets.

Table 1.9. Field presets

Preset        Description
all           All fields of the object.
*             Alias for all.
identifiable  Includes the id, name, code, created and lastUpdated fields.
nameable      Includes the id, name, shortName, code, description, created and lastUpdated fields.
persisted     Returns all persisted properties on an object; does not take into consideration whether the object is the owner of the relation.
owner         Returns all persisted properties on an object where the object is the owner of all properties; this payload can be used to update through the web-api.

Example: Include all fields from dataSets except organisationUnits:

/api/dataSets?fields=:all,!organisationUnits

Example: Include only id, name and the collection of organisation units from a data set, but exclude the id from
organisation units:

/api/dataSets/BfMAe6Itzgt?fields=id,name,organisationUnits[:all,!id]

Example: Include nameable properties from all indicators:

/api/indicators.json?fields=:nameable

1.7.1. Field transformers


In DHIS 2.17 we introduced field transformers; the idea is to allow further customization of the properties on the server
side. For 2.17 we only support one transformer, called rename, which can be used like this:

/api/dataElements/ID?fields=id|rename(i),name|rename(n)

This will rename the id property to i and name property to n. Please note that the format should be considered beta
in 2.17, and the format might be changed in 2.18.

1.7.2. Field converters


In DHIS 2.17, alongside transformers, we also introduced field converters. While field transformers usually make minor
changes to the data stream (name changes etc.), field converters can completely change the output of the data. For 2.17
we are including three field converters:

Table 1.10. Field converters

Name Description
size Gives sizes of strings (length) and collections, i.e. /api/dataElements?fields=dataSets::size
isEmpty Is string or collection empty, i.e. /api/dataElements?fields=dataSets::isEmpty
isNotEmpty Is string or collection not empty, i.e. /api/dataElements?fields=dataSets::isNotEmpty

Please note that the format should be considered beta in 2.17, and the format might be changed in 2.18.

1.8. Meta-data create, read, update, delete, validate


While some of the web-api endpoints already contain support for CRUD (create, read, update, delete), from version
2.15 this is supported on all endpoints. It should work as you expect, and the subsections will give more detailed
information about create, update, and delete (read is already covered elsewhere, and has been supported for a long
time).

1.8.1. Creating and updating objects


For creating new objects you will need to know the endpoint, the type format, and make sure that you have the required
authorities. As an example, we will create and update a constant. To figure out the format, we can use the new schema
endpoint for getting the format description (this will be further improved in 2.17). So we will start with getting that info:

http://<<server>>/api/schemas/constant.json

From the output, you can see that the required authorities for create are F_CONSTANT_ADD, and the important
properties are: name and value. From this we can create a JSON payload and save it as a file called constant.json:

{
"name": "PI",
"value": "3.14159265359"
}

The same content as an XML payload:

<constant name="PI" xmlns="http://dhis2.org/schema/dxf/2.0">


<value>3.14159265359</value>
</constant>

We are now ready to create the new constant by sending a POST request to the constants endpoint with the JSON payload
using curl:

curl -d @constant.json "http://server/api/constants" -X POST -H "Content-Type: application/json" -u user:password

A specific example of posting the constant to the demo server:

curl -d @constant.json "https://apps.dhis2.org/api/constants" -X POST -H "Content-Type: application/json" -u admin:district

If everything went well, you should see an output similar to:

{
"status":"SUCCESS",
"importCount":{"imported":1,"updated":0,"ignored":0,"deleted":0},
"type":"Constant"
}

The process is exactly the same for updating: you make your changes to the JSON/XML payload, find out the ID
of the constant, and then send a PUT request to the endpoint including the ID:


curl -X PUT -d @pi.json -H "Content-Type: application/json" -u user:password http://server/api/constants/ID

1.8.2. Deleting objects


Deleting objects is very straightforward: you will need to know the ID and the endpoint of the type you want to delete.
Let's continue the example from the last section and use a constant. Assuming that the id is abc123, all you
need to do is send a DELETE request to the endpoint + id:

curl -X DELETE -u user:password http://server/api/constants/abc123

A successful delete should return HTTP status 204 (no content).

1.8.3. Adding and removing objects to/from collections


In order to add or remove objects to or from a collection of objects you can use the following pattern:

/api/{collection-object}/{collection-object-id}/{collection-name}/{object-id}

You should use the POST method to add, and the DELETE method to remove an object. The components of the pattern
are:
• collection object: The type of objects that owns the collection you want to modify.
• collection object id: The identifier of the object that owns the collection you want to modify.
• collection name: The name of the collection you want to modify.
• object id: The identifier of the object you want to add or remove from the collection.

As an example, in order to remove a data element with identifier IDB from a data element group with identifier IDA
you can do a DELETE request:

DELETE /api/dataElementGroups/IDA/dataElements/IDB

To add a category option with identifier IDB to a category with identifier IDA you can do a POST request:

POST /api/categories/IDA/categoryOptions/IDB

Please be aware that the collection object must be the owner of that relationship. This can be checked at the /api/
schemas or /api/schemas/type endpoint.

1.8.4. Validating payloads


System wide validation of metadata payloads is not yet enabled (it will be in 2.19), but you can validate your payload
manually by sending it to the proper schema endpoint. If you wanted to validate the constant from the create section
before, you would send it like this:

POST /api/schemas/constant
{ payload }

A simple (non-validating) example would be:

curl -X POST -d "{\"name\": \"some name\"}" -H "Content-Type: application/json" -u admin:district https://apps.dhis2.org/dev/api/schemas/dataElement

Which would yield the result:

[
{
"message" : "Required property missing.",
"property" : "type"
},
{
"property" : "aggregationOperator",
"message" : "Required property missing."
},
{
"property" : "domainType",
"message" : "Required property missing."
},
{
"property" : "shortName",
"message" : "Required property missing."
}
]

1.8.5. Partial updates


For cases where you don't want or need to update all properties on an object (which would mean downloading a potentially
huge payload, changing one property, then uploading again) we now support partial updates of a single property. We are
planning to support fuller partial updates later (updating more than one field).

The format for updating a single property is the same as when you are updating a complete object, just with only one
property in the JSON/XML file, i.e.:

curl -X PATCH -d "{\"name\": \"New Name\"}" -H "Content-Type: application/json" -u admin:district https://apps.dhis2.org/dev/api/dataElements/fbfJHSPpUQD/name

Please note that the property name is included twice: once in the payload and once in the endpoint. The generic
endpoint for this is /api/type/id/property-name, and the Content-Type header must also be included as usual (since
we support multiple formats).

1.9. CSV meta-data import


DHIS 2 supports import of meta-data in the CSV format. Columns which are not required can be omitted in the CSV
file, but the order will be affected. If you would like to specify columns which appear late in the order but not specify
columns which appear early in the order you can include empty columns ("") for them. The following object types
are supported:
• Data elements
• Data element groups
• Category options
• Category option groups
• Organisation units
• Organisation unit groups
• Validation rules
• Option sets

The formats for the currently supported object types for CSV import are listed below.

Table 1.11. Data Element CSV Format

Column                    Required  Value (default first)                                  Description
Name                      Yes                                                              Name. Max 230 char. Unique.
UID                       No        UID                                                    Stable identifier. Max 11 char. Will be generated by system if not specified.
Code                      No                                                               Stable code. Max 50 char.
Short name                No        50 first char of name                                  Will fall back to first 50 characters of name if unspecified. Max 50 char. Unique.
Description               No                                                               Free text description.
Form name                 No                                                               Max 230 char.
Domain type               No        aggregate | tracker                                    Domain type for data element, can be aggregate or tracker. Max 16 char.
Value type                No        int | string | bool | trueOnly | date | unitInterval   Value type. Max 16 char.
Number type               No        int | posInt | negInt | number | zeroPositiveInt       Only relevant if type is int. Max 16 char.
Text type                 No        text | longText                                        Only relevant if type is string. Max 16 char.
Aggregation operator      No        sum | average | count | stddev | variance              Operator indicating how to aggregate data in the time dimension. Max 16 char.
Category combination UID  No        UID                                                    UID of category combination. Will default to default category combination if not specified.
Url                       No                                                               URL to data element resource. Max 255 char.
Zero is significant       No        false | true                                           Indicates whether zero values will be stored for this data element.
Option set                No        UID                                                    UID of option set to use for data.
Comment option set        No        UID                                                    UID of option set to use for comments.

Table 1.12. Organisation Unit CSV Format

Column          Required  Value (default first)  Description
Name            Yes                              Name. Max 230 char. Unique.
UID             No        UID                    Stable identifier. Max 11 char. Will be generated by system if not specified.
Code            No                               Stable code. Max 50 char.
Parent UID      No        UID                    UID of parent organisation unit.
Short name      No        50 first char of name  Will fall back to first 50 characters of name if unspecified. Max 50 char. Unique.
Description     No                               Free text description.
UUID            No                               UUID. Max 36 char.
Opening date    No        1970-01-01             Opening date of organisation unit in YYYY-MM-DD format.
Closed date     No                               Closed date of organisation unit in YYYY-MM-DD format, skip if currently open.
Comment         No                               Free text comment for organisation unit.
Feature type    No                               Can be Point, Polygon, MultiPolygon. Max 50 char.
Coordinates     No                               Coordinates used for geospatial analysis in GeoJSON format.
URL             No                               URL to organisation unit resource. Max 255 char.
Contact person  No                               Contact person for organisation unit. Max 255 char.
Address         No                               Address for organisation unit. Max 255 char.
Email           No                               Email for organisation unit. Max 150 char.
Phone number    No                               Phone number for organisation unit. Max 150 char.

Table 1.13. Validation Rule CSV Format

Column                             Required  Value (default first)                                       Description
Name                               Yes                                                                   Name. Max 230 char. Unique.
UID                                No        UID                                                         Stable identifier. Max 11 char. Will be generated by system if not specified.
Code                               No                                                                    Stable code. Max 50 char.
Description                        No                                                                    Free text description.
Instruction                        No                                                                    Free text instruction.
Importance                         No        medium | high | low
Rule type                          No        validation | surveillance
Operator                           No        equal_to | not_equal_to | greater_than | greater_than_or_equal_to | less_than | less_than_or_equal_to | compulsory_pair
Period type                        No        Monthly | Daily | Weekly | Quarterly | SixMonthly | Yearly
Left side expression               Yes                                                                   Mathematical formula based on data element and option combo UIDs.
Left side expression description   Yes                                                                   Free text.
Left side null if blank            No        false | true                                                Boolean.
Right side expression              Yes                                                                   Mathematical formula based on data element and option combo UIDs.
Right side expression description  Yes                                                                   Free text.
Right side null if blank           No        false | true                                                Boolean.

Table 1.14. Option Set CSV Format

Column  Required  Value (default first)  Description
Name    Yes                              Name. Max 230 char. Unique. The option set values should be repeated for each option.
UID     No        UID                    Stable identifier. Max 11 char. Will be generated by system if not specified.
Code    No                               Stable code. Max 50 char.
Option  Yes                              Option. Free text. The option set values should be repeated for each option.

Table 1.15. Data Element Group, Category Option, Category Option Group, Organisation Unit Group CSV Format

Column  Required  Value (default first)  Description
Name    Yes                              Name. Max 230 char. Unique.
UID     No        UID                    Stable identifier. Max 11 char. Will be generated by system if not specified.
Code    No                               Stable code. Max 50 char.

An example of a CSV file for data elements can be seen below. The first row will always be ignored. Notice how you
can skip columns and rely on default values or simply leave columns blank:

name,uid,code,shortname,description,formname,domaintype,type,numbertype,texttype,aggregationoperator
"Women participated in skill development training",,"D0001","Women participated development training"
"Women participated in community organizations",,"D0002","Women participated community organizations"

A minimal example for importing organisation units with a parent unit looks like this:

name,uid,code,parent
"West province",,"WESTP","ImspTQPwCqd"
"East province",,"EASTP","ImspTQPwCqd"

The format for option sets is special: one record represents an option, and the first three values, which represent the
option set, should be repeated for each option (record):

name,uid,code,option
"Color",,,"Blue"
"Color",,,"Green"
"Gender",,,"Female"
"Gender",,,"Male"

1.10. Data values


This section is about sending and reading data values.

1.10.1. Sending data values


A common use-case for system integration is the need to send a set of data values from a third-party system into DHIS.
In this example we will use the DHIS 2 demo on http://apps.dhis2.org/demo as a basis, and we recommend that you follow
the provided links with a web browser while reading (log in with admin/district as username/password). We assume
that we have collected case-based data using a simple software client running on mobile phones for the Mortality <5
years data set in the community of Ngelehun CHC (in Badjia chiefdom, Bo district) for the month of January 2014.
We have now aggregated our data into a statistical report and want to send that data to the national DHIS 2 instance.

The resource which is most appropriate for our purpose of sending data values is the dataValueSets resource. A data
value set represents a set of data values which have a logical relationship, usually from being captured off the same
data entry form. We follow the link to the HTML representation, which will take us to
http://apps.dhis2.org/demo/api/dataValueSets. The format looks like this:


<dataValueSet xmlns="http://dhis2.org/schema/dxf/2.0" dataSet="dataSetID"
  completeDate="date" period="period" orgUnit="orgUnitID" attributeOptionCombo="aocID">
  <dataValue dataElement="dataElementID" categoryOptionCombo="cocID" value="1" comment="comment1"/>
  <dataValue dataElement="dataElementID" categoryOptionCombo="cocID" value="2" comment="comment2"/>
  <dataValue dataElement="dataElementID" categoryOptionCombo="cocID" value="3" comment="comment3"/>
</dataValueSet>

JSON is supported in this format:

{
"dataSet": "dataSetID",
"completeDate": "date",
"period": "period",
"orgUnit": "orgUnitID",
"attributeOptionCombo", "aocID",
"dataValues": [
{ "dataElement": "dataElementID", "categoryOptionCombo": "cocID", "value": "1",
"comment": "comment1" },
{ "dataElement": "dataElementID", "categoryOptionCombo": "cocID", "value": "2",
"comment": "comment2" },
{ "dataElement": "dataElementID", "categoryOptionCombo": "cocID", "value": "3",
"comment": "comment3" }
]
}

CSV is supported in this format:

"dataelement","period","orgunit","catoptcombo","attroptcombo","value","storedby","lastupd","comme
"dataElementID","period","orgUnitID","cocID","aocID","1","username","2015-04-01","comment1"
"dataElementID","period","orgUnitID","cocID","aocID","2","username","2015-04-01","comment2"
"dataElementID","period","orgUnitID","cocID","aocID","3","username","2015-04-01","comment3"

Note: Please refer to the date and period section above for time formats.

From the example we can see that we need to identify the period, the data set, the org unit (facility) and the data
elements for which to report.

To obtain the identifier for the data set we return to the entry point at http://apps.dhis2.org/demo/api and follow the
embedded link pointing at the dataSets resource located at http://apps.dhis2.org/demo/api/dataSets. From there we
find and follow the link to the Mortality < 5 years data set, which leads us to
http://apps.dhis2.org/demo/api/dataSets/pBOMPrpg1QX. The resource representation for the Mortality < 5 years data set
conveniently advertises links to the data elements which are members of it. From here we can follow these links and
obtain the identifiers of the data elements. For brevity we will only report on three data elements: Measles with id
f7n9E0hX8qk, Dysentery with id Ix2HsbDMLea and Cholera with id eY5ehpbEsB7.

What remains is to get hold of the identifier of the facility (org unit). The dataSet representation conveniently provides
a link to the org units which report on it, so we search for Ngelehun CHC and follow the link to the HTML representation
at http://apps.dhis2.org/demo/api/organisationUnits/DiszpKrYNg8, which tells us that the identifier of this org unit is
DiszpKrYNg8.

From our case-based data we assume that we have 12 cases of measles, 14 cases of dysentery and 16 cases of cholera.
We have now gathered enough information to be able to put together the XML data value set message:

<dataValueSet xmlns="http://dhis2.org/schema/dxf/2.0" dataSet="pBOMPrpg1QX"
  completeDate="2014-02-03" period="201401" orgUnit="DiszpKrYNg8">
<dataValue dataElement="f7n9E0hX8qk" value="12"/>
<dataValue dataElement="Ix2HsbDMLea" value="14"/>
<dataValue dataElement="eY5ehpbEsB7" value="16"/>
</dataValueSet>


In JSON format:

{
  "dataSet": "pBOMPrpg1QX",
  "completeDate": "2014-02-03",
  "period": "201401",
  "orgUnit": "DiszpKrYNg8",
  "dataValues": [
    { "dataElement": "f7n9E0hX8qk", "value": "12" },
    { "dataElement": "Ix2HsbDMLea", "value": "14" },
    { "dataElement": "eY5ehpbEsB7", "value": "16" }
  ]
}

To perform functional testing we will use the cURL tool (http://curl.haxx.se) which provides an easy way of transferring
data using HTTP. First we save the data value set XML content in a file called datavalueset.xml. From the directory
where this file resides we invoke the following from the command line:

curl -d @datavalueset.xml "https://apps.dhis2.org/demo/api/dataValueSets" -H "Content-Type:application/xml" -u admin:district -v

For sending JSON content you must set the content-type header accordingly:

curl -d @datavalueset.json "https://apps.dhis2.org/demo/api/dataValueSets" -H "Content-Type:application/json" -u admin:district -v

The command will dispatch a request to the demo Web API, set application/xml as the content-type and authenticate
using admin/district as username/password. If all goes well this will return a 200 OK HTTP status code. You can verify
that the data has been received by opening the data entry module in DHIS 2 and select the org unit, data set and period
used in this example.

The API follows normal semantics for error handling and HTTP status codes. If you supply an invalid username or
password, 401 Unauthorized is returned. If you supply a content-type other than application/xml, 415 Unsupported
Media Type is returned. If the XML content is invalid according to the DXF namespace, 400 Bad Request is returned.
If you provide an invalid identifier in the XML content, 409 Conflict is returned together with a descriptive message.

In this example, cURL will authenticate to the server through Basic authentication using our supplied username and
password as credentials through the -u flag.

In a real-world scenario, looking up identifiers, constructing and dispatching XML messages would be the task of the
client software application. This software would probably interact with the more machine-friendly XML and JSON
resource representations and not the human-friendly HTML representations like we did in this example. Developing
creative and robust consumers of the Web API services begins here.

1.10.2. Sending bulks of data values


The previous example showed us how to send a set of related data values sharing the same period and organisation
unit. This example will show us how to send large bulks of data values which are not necessarily logically related.

Again we will interact with the http://apps.dhis2.org/demo/api/dataValueSets resource. This time we will not
specify the dataSet and completeDate attributes. Also, we will specify the period and orgUnit attributes on the
individual data value elements instead of on the outer data value set element. This will enable us to send data values
for various periods and org units:

<dataValueSet xmlns="http://dhis2.org/schema/dxf/2.0">
  <dataValue dataElement="f7n9E0hX8qk" period="201401" orgUnit="DiszpKrYNg8" value="12"/>
  <dataValue dataElement="f7n9E0hX8qk" period="201401" orgUnit="FNnj3jKGS7i" value="14"/>
  <dataValue dataElement="f7n9E0hX8qk" period="201402" orgUnit="DiszpKrYNg8" value="16"/>
  <dataValue dataElement="f7n9E0hX8qk" period="201402" orgUnit="Jkhdsf8sdf4" value="18"/>
</dataValueSet>

In JSON format:

{
"dataValues": [
{ "dataElement": "f7n9E0hX8qk", "period": "201401", "orgUnit": "DiszpKrYNg8",
"value": "12" },
{ "dataElement": "f7n9E0hX8qk", "period": "201401", "orgUnit": "FNnj3jKGS7i",
"value": "14" },
{ "dataElement": "f7n9E0hX8qk", "period": "201402", "orgUnit": "DiszpKrYNg8",
"value": "16" },
{ "dataElement": "f7n9E0hX8qk", "period": "201402", "orgUnit": "Jkhdsf8sdf4",
"value": "18" }
]
}

In CSV format:

"dataelement","period","orgunit","categoryoptioncombo","attributeoptioncombo","value"
"f7n9E0hX8qk","201401","DiszpKrYNg8","bRowv6yZOF2","bRowv6yZOF2","1"
"Ix2HsbDMLea","201401","DiszpKrYNg8","bRowv6yZOF2","bRowv6yZOF2","2"
"eY5ehpbEsB7","201401","DiszpKrYNg8","bRowv6yZOF2","bRowv6yZOF2","3"

We test by using cURL to send the data values in XML format:

curl -d @datavalueset.xml "https://apps.dhis2.org/demo/api/dataValueSets" -H "Content-Type:application/xml" -u admin:district -v

Note that when using CSV format you must use the binary data option to preserve the line-breaks in the CSV file:

curl --data-binary @datavalueset.csv "https://apps.dhis2.org/demo/api/dataValueSets" -H "Content-Type:application/csv" -u admin:district -v

The data value set resource provides an XML response which is useful when you want to verify the impact your request
had. The first time we send the data value set request above the server will respond with the following import summary:

<importSummary>
<dataValueCount imported="2" updated="1" ignored="1"/>
<dataSetComplete>false</dataSetComplete>
</importSummary>

This message tells us that 2 data values were imported, 1 data value was updated and 1 data value was ignored. The
single update comes as a result of us sending that data value in the previous example. A data value will be ignored
if it references a non-existing data element, period, org unit or data set. In our case the ignored value was caused
by the last data value having an invalid reference to an org unit. The data set complete element will display the date on
which the data value set was completed, or false if no completeDate attribute was supplied.

The import process can be customized using a set of import parameters:

Table 1.16. Import parameters

Parameter            Values (default first)                      Description
dataElementIdScheme  id | name | code                            Property of the data element object to use to map the data values.
orgUnitIdScheme      id | name | code                            Property of the org unit object to use to map the data values.
idScheme             id | name | code                            Property of all objects, including data elements, org units and category option combos, to use to map the data values.
dryRun               false | true                                Whether to save changes on the server or just return the import summary.
preheatCache         true | false                                Whether to preheat data element and organisation unit caches with all objects; false will perform better for small imports.
importStrategy       NEW_AND_UPDATES | NEW | UPDATES | DELETES   Save objects of all, new or update import status on the server.
async                false | true                                Import data asynchronously and return the request immediately.
skipExistingCheck    false | true                                Skip checks for existing data values. Improves performance. Only use for empty databases or when the data values to import do not exist already.

All parameters are optional and can be supplied as query parameters in the request URL like this:

https://apps.dhis2.org/demo/api/dataValueSets?dataElementIdScheme=code&orgUnitIdScheme=name&dryRun=true&importStrategy=new

They can also be supplied as XML attributes on the data value set element like below. XML attributes will override
query string parameters.

<dataValueSet xmlns="http://dhis2.org/schema/dxf/2.0" dataElementIdScheme="code"
  orgUnitIdScheme="name" dryRun="true" importStrategy="new">
  ..
</dataValueSet>

1.10.2.1. Identifier schemes

Regarding the id schemes: by default the identifiers used in the XML messages are the DHIS 2 stable object identifiers,
referred to as uid. In certain interoperability situations we might experience that an external system decides the
identifiers of the objects. In that case we can use the code property of the organisation units and other objects to set
fixed identifiers. When importing data values we hence need to reference the code property instead of the identifier
property of these meta-data objects. Identifier schemes can be specified in the XML message as well as in the request as
query parameters. To specify it in the XML payload you can do this:

<dataValueSet xmlns="http://dhis2.org/schema/dxf/2.0" dataElementIdScheme="CODE"
  orgUnitIdScheme="UID" idScheme="CODE">
  ..
</dataValueSet>

The parameter table above explains how the id schemes can be specified as query parameters. The following rules
apply for what takes precedence:
• Id schemes defined in the XML or JSON payload take precedence over id schemes defined as URL query parameters.
• Specific id schemes including dataElementIdScheme and orgUnitIdScheme take precedence over the general
idScheme.
• The default id scheme is UID, which will be used if no explicit id scheme is defined.

1.10.3. CSV data value format


The following section describes the CSV format used in DHIS2. The first row is assumed to be a header row and will
be ignored during import.


Table 1.17. CSV format of DHIS 2

Column                  Required  Description
Data element            Yes       Refers to ID by default, can also be name and code based on selected id scheme
Period                  Yes       In ISO format
Org unit                Yes       Refers to ID by default, can also be name and code based on selected id scheme
Category option combo   No        Refers to ID
Attribute option combo  No        Refers to ID (from version 2.16)
Value                   No        Data value
Stored by               No        Refers to username of user who entered the value
Last updated            No        Date in ISO format
Comment                 No        Free text comment
Follow up               No        true or false

An example of a CSV file which can be imported into DHIS 2 is seen below.

"dataelement","period","orgunit","categoryoptioncombo","attributeoptioncombo","value","storedby",
"DUSpd8Jq3M7","201202","gP6hn503KUX","Prlt0C1RF0s",,"7","bombali","2010-04-17",,"false"
"DUSpd8Jq3M7","201202","gP6hn503KUX","V6L425pT3A0",,"10","bombali","2010-04-17",,"false"
"DUSpd8Jq3M7","201202","OjTS752GbZE","V6L425pT3A0",,"9","bombali","2010-04-06",,"false"

1.10.4. Generating data value set template


To generate a data value set template for a certain data set you can use the /api/dataSets/<id>/dataValueSet resource.
XML and JSON response formats are supported. Example:

api/dataSets/BfMAe6Itzgt/dataValueSet.json

The parameters you can use to further adjust the output are described below:

Table 1.18. Data values query parameters

Query parameter      Required  Description
period               No        Period to use, will be included without any checks.
orgUnit              No        Organisation unit to use, supports multiple orgUnits; both id and code can be used.
comment              No        Whether comments should be included; default: Yes.
orgUnitIdScheme      No        Organisation unit scheme to use, supports id | code.
dataElementIdScheme  No        Data element scheme to use, supports id | code.
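
A hedged example combining these parameters, reusing identifiers that appear elsewhere in this guide:

api/dataSets/BfMAe6Itzgt/dataValueSet.json?orgUnit=DiszpKrYNg8&period=201401&comment=false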

1.10.5. Sending, reading and deleting individual data values


This example will show how to send individual data values to be saved in a request. This can be achieved by sending
a POST request to the dataValues resource:

https://apps.dhis2.org/demo/api/dataValues


The following query parameters are supported for this resource:

Table 1.19. Data values query parameters

Query parameter  Required                   Description
de               Yes                        Data element identifier
pe               Yes                        Period identifier
ou               Yes                        Organisation unit identifier
co               No                         Category option combo identifier, default will be used if omitted
cc               No (must combine with cp)  Attribute combo identifier
cp               No (must combine with cc)  Attribute option identifiers, separated with ; for multiple values
value            No                         Data value
comment          No                         Data comment
followUp         No                         Follow up on data value, will toggle the current boolean value

If any of the identifiers given are invalid, if the data value or comment is invalid or if the data is locked, the response
will contain the 409 Conflict status code and a descriptive text message. If the operation led to a saved or updated value,
200 OK will be returned. An example of a request looks like this:

curl "https://apps.dhis2.org/demo/api/dataValues?
de=s46m5MS0hxu&pe=201301&ou=DiszpKrYNg8&co=Prlt0C1RF0s&value=12" -X POST -u
admin:district -v

This resource also allows a special syntax for associating the value to an attribute option combination. This can be done
by sending the identifier of the attribute combination, together with the identifier(s) of the attribute option(s) which
the value represents within the combination. An example looks like this:

curl "https://apps.dhis2.org/demo/api/dataValues?
de=s46m5MS0hxu&ou=DiszpKrYNg8&pe=201308&cc=dzjKKQq0cSO&cp=wbrDrL2aYEc;btOyqprQ9e8&value=26"
-X POST -u admin:district -v

You can retrieve a data value with a request using the GET method. The value, comment and followUp params are
not applicable in this regard:

curl "https://apps.dhis2.org/demo/api/dataValues?
de=s46m5MS0hxu&pe=201301&ou=DiszpKrYNg8&co=Prlt0C1RF0s" -X GET -u admin:district -v

You can delete a data value with a request using the DELETE method.
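
A hedged sketch of such a delete request, reusing the parameters from the GET example above:

curl "https://apps.dhis2.org/demo/api/dataValues?de=s46m5MS0hxu&pe=201301&ou=DiszpKrYNg8&co=Prlt0C1RF0s" -X DELETE -u admin:district -v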

1.10.6. Reading data values


This section explains how to retrieve data values from the Web API by interacting with the dataValueSets resource.
Data values can be retrieved in XML, JSON and CSV format. Since we want to read data we will use the GET HTTP
verb. We will also specify that we are interested in the XML resource representation by including an Accept HTTP
header with our request. The following query parameters are available; dataSet, period and orgUnit are required:

Table 1.20. Data value set query parameters

Parameter Description
dataSet Data set identifier
period Period identifier in ISO format
orgUnit Organisation unit identifier

36
Web API Reading data values

Parameter Description
dataElementIdScheme Property of the data element object to use for data values.
orgUnitIdScheme Property of the org unit object to use for data values.
categoryOptionComboIdSchemeProperty of the category option combo object to use for data values.

It is assumed that we have posted data values to DHIS according to the previous section called "Sending data values".
We can now put together our request and send it using cURL:

curl "https://apps.dhis2.org/demo/api/dataValueSets?
dataSet=pBOMPrpg1QX&period=201401&orgUnit=DiszpKrYNg8" -H "Accept:application/xml" -u
admin:district -v

The response will look like this:

HTTP/1.1 200 OK
Content-Type: application/xml

<?xml version='1.0' encoding='UTF-8'?>
<dataValueSet xmlns="http://dhis2.org/schema/dxf/2.0" dataSet="pBOMPrpg1QX"
completeDate="2014-01-02" period="201401" orgUnit="DiszpKrYNg8">
<dataValue dataElement="eY5ehpbEsB7" period="201401" orgUnit="DiszpKrYNg8"
categoryOptionCombo="bRowv6yZOF2" value="10003"/>
<dataValue dataElement="Ix2HsbDMLea" period="201401" orgUnit="DiszpKrYNg8"
categoryOptionCombo="bRowv6yZOF2" value="10002"/>
<dataValue dataElement="f7n9E0hX8qk" period="201401" orgUnit="DiszpKrYNg8"
categoryOptionCombo="bRowv6yZOF2" value="10001"/>
</dataValueSet>

The header tells us that the request was processed successfully and that we are receiving a response in XML format.

You can request the data in JSON format:

curl "https://apps.dhis2.org/demo/api/dataValueSets.json?
dataSet=pBOMPrpg1QX&period=201401&orgUnit=DiszpKrYNg8" -u admin:district -v

The response will look something like this:

{
  "dataSet": "pBOMPrpg1QX",
  "completeDate": "2014-02-03",
  "period": "201401",
  "orgUnit": "DiszpKrYNg8",
  "dataValues": [
    { "dataElement": "eY5ehpbEsB7", "categoryOptionCombo": "bRowv6yZOF2", "period": "201401",
      "orgUnit": "DiszpKrYNg8", "value": "10003" },
    { "dataElement": "Ix2HsbDMLea", "categoryOptionCombo": "bRowv6yZOF2", "period": "201401",
      "orgUnit": "DiszpKrYNg8", "value": "10002" },
    { "dataElement": "f7n9E0hX8qk", "categoryOptionCombo": "bRowv6yZOF2", "period": "201401",
      "orgUnit": "DiszpKrYNg8", "value": "10001" }
  ]
}

You can request data in CSV format:

curl "https://apps.dhis2.org/demo/api/dataValueSets.csv?
dataSet=pBOMPrpg1QX&period=201401&orgUnit=DiszpKrYNg8" -u admin:district -v


1.10.7. Reading large bulks of data values


This section explains how to retrieve large bulks of data values which do not necessarily belong to a single data value
set. Data values can be retrieved in XML, JSON and CSV format. We will interact with the dataValueSets resource.
The query parameters to use are these:

Table 1.21. Data value set query parameters

Parameter Description
dataSet Data set identifier, can be specified multiple times
startDate Start date for the time span of the values to export
endDate End date for the time span of the values to export
orgUnit Organisation unit identifier, can be specified multiple times
children Whether to include the children in the hierarchy of the organisation units

The dataSet and orgUnit parameters can be repeated in order to include multiple data sets and organisation units. An
example request for XML format looks like this:

curl "https://apps.dhis2.org/demo/api/dataValueSets?
dataSet=pBOMPrpg1QX&dataSet=BfMAe6Itzgt&startDate=2013-01-01&endDate=2013-01-31&
orgUnit=YuQRtpLP10I&orgUnit=vWbkYPRmKyS&children=true" -H "Accept:application/xml" -u
admin:district -v

You can get the response in XML, JSON and CSV format. You can indicate which response format you prefer through
the Accept HTTP header as in the example above. For XML use application/xml, for JSON use application/json and
for CSV use application/csv.

1.11. Events
This section is about sending and reading events.

1.11.1. Sending events


DHIS 2 supports three kinds of events: single events with no registration (also referred to as anonymous events), single
events with registration and multiple events with registration. Registration implies that the data is linked to a tracked
entity instance which is identified using some sort of identifier.

To send events to DHIS 2 you must interact with the events resource. The approach to sending events is similar to
sending aggregate data values. You will need a program which can be looked up using the programs resource, an
orgUnit which can be looked up using the organisationUnits resource, and a list of valid data element identifiers
which can be looked up using the dataElements resource. For events with registration, a tracked entity instance
identifier is required, read about how to get this in the section about the trackedEntityInstances resource. For sending
events to programs with multiple stages, you will need to also include the programStage identifier, the identifiers for
programStages can be found in the programStages resource.

A simple single event with no registration example payload in XML format where we send events from the "Inpatient
morbidity and mortality" program for the "Ngelehun CHC" facility in the demo database can be seen below:

<?xml version="1.0" encoding="utf-8"?>


<event program="eBAyeGv0exc" orgUnit="DiszpKrYNg8" eventDate="2013-05-17"
status="COMPLETED" storedBy="admin">
<coordinate latitude="59.8" longitude="10.9" />
<dataValues>
<dataValue dataElement="qrur9Dvnyt5" value="22" />
<dataValue dataElement="oZg33kd9taw" value="Male" />

38
Web API Sending events

<dataValue dataElement="msodh3rEMJa" value="2013-05-18" />


</dataValues>
</event>

To perform some testing we can save the XML payload as a file called event.xml and send it as a POST request to the
events resource in the API using curl with the following command:

curl -d @event.xml "https://apps.dhis2.org/demo/api/events" -H "Content-Type:application/xml" -u admin:district -v

The same payload in JSON format looks like this:

{
  "program": "eBAyeGv0exc",
  "orgUnit": "DiszpKrYNg8",
  "eventDate": "2013-05-17",
  "status": "COMPLETED",
  "storedBy": "admin",
  "coordinate": {
    "latitude": "59.8",
    "longitude": "10.9"
  },
  "dataValues": [
    { "dataElement": "qrur9Dvnyt5", "value": "22" },
    { "dataElement": "oZg33kd9taw", "value": "Male" },
    { "dataElement": "msodh3rEMJa", "value": "2013-05-18" }
  ]
}

To send this you can save it to a file called event.json and use curl like this:

curl -d @event.json "localhost/api/events" -H "Content-Type:application/json" -u admin:district -v

We also support sending multiple events at the same time. A payload in XML format might look like this:

<?xml version="1.0" encoding="utf-8"?>


<events>
<event program="eBAyeGv0exc" orgUnit="DiszpKrYNg8" eventDate="2013-05-17"
status="COMPLETED" storedBy="admin">
<coordinate latitude="59.8" longitude="10.9" />
<dataValues>
<dataValue dataElement="qrur9Dvnyt5" value="22" />
<dataValue dataElement="oZg33kd9taw" value="Male" />
</dataValues>
</event>
<event program="eBAyeGv0exc" orgUnit="DiszpKrYNg8" eventDate="2013-05-17"
status="COMPLETED" storedBy="admin">
<coordinate latitude="59.8" longitude="10.9" />
<dataValues>
<dataValue dataElement="qrur9Dvnyt5" value="26" />
<dataValue dataElement="oZg33kd9taw" value="Female" />
</dataValues>
</event>
</events>

You will receive an import summary with the response which can be inspected in order to get information about the
outcome of the request, like how many values were imported successfully. The payload in JSON format looks like this:

{
  "events": [
    {
      "program": "eBAyeGv0exc",
      "orgUnit": "DiszpKrYNg8",
      "eventDate": "2013-05-17",
      "status": "COMPLETED",
      "storedBy": "admin",
      "coordinate": {
        "latitude": "59.8",
        "longitude": "10.9"
      },
      "dataValues": [
        { "dataElement": "qrur9Dvnyt5", "value": "22" },
        { "dataElement": "oZg33kd9taw", "value": "Male" }
      ]
    },
    {
      "program": "eBAyeGv0exc",
      "orgUnit": "DiszpKrYNg8",
      "eventDate": "2013-05-17",
      "status": "COMPLETED",
      "storedBy": "admin",
      "coordinate": {
        "latitude": "59.8",
        "longitude": "10.9"
      },
      "dataValues": [
        { "dataElement": "qrur9Dvnyt5", "value": "26" },
        { "dataElement": "oZg33kd9taw", "value": "Female" }
      ]
    }
  ]
}

(From 2.13) As part of the import summary you will also get the identifier reference to the event you just sent, together
with a href element which points to the server location of this event.

OrgUnit matching: By default the orgUnit parameter will match on the ID of the orgUnit, but from 2.15 you can
also select the orgUnit id matching scheme by using the parameter orgUnitIdScheme=SCHEME, where the options
are: ID, UID, UUID, CODE, and NAME (ID and UID will both match UIDs).

Update: To update an existing event, the format of the payload is the same, but the URL you are posting to must add
the identifier to the end of the URL string and the request must be PUT.

curl -X PUT -d @updated_event.xml "localhost/api/events/ID" -H "Content-Type:application/xml" -u admin:district -v

curl -X PUT -d @updated_event.json "localhost/api/events/ID" -H "Content-Type:application/json" -u admin:district -v

Delete: To delete an existing event, all you need to do is send a DELETE request with an identifier reference to the
server you are using.

curl -X DELETE "localhost/api/events/ID" -u admin:district -v

Get: To get an existing event you can issue a GET request including the identifier like this:

curl "localhost/api/events/ID" -H "Content-Type:application/xml" -u admin:district -v

The table below describes the meaning of each element. Most elements should be fairly self-explanatory.

Table 1.22. Events resource format

Parameter           Type    Required  Options (default first)             Description
programId           string  true                                          Identifier of the single event with no registration program
organisationUnitId  string  true                                          Identifier of the organisation unit where the event took place
eventDate           date    true                                          The date of when the event occurred
status              enum    false     ACTIVE, COMPLETED, VISITED,         Whether the event is complete or not
                                      FUTURE_VISIT, LATE_VISIT, SKIPPED
storedBy            string  false     Defaults to current user            Who stored this event (can be username, system-name etc)
coordinate          double  false                                         Refers to where the event took place geographically (latitude and longitude)
dataElementId       string  true                                          Identifier of data element
value               string  true                                          Data value or measure for this event

1.11.2. CSV Import / Export


In addition to XML and JSON for event import/export, in DHIS 2.17 we introduced support for the CSV format.
Support for this format builds on what was described in the previous section, so here we only describe the
CSV-specific parts.

To use the CSV format you must either use the /api/events.csv endpoint, or add content-type: text/csv for import, and
accept: text/csv for export when using the /api/events endpoint.
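
As a sketch (the file name events.csv is an assumption), a CSV import with cURL could then look like this; --data-binary is used instead of -d so that the line breaks in the CSV file are preserved:

curl --data-binary @events.csv "https://apps.dhis2.org/demo/api/events" -H "Content-Type:text/csv" -u admin:district -v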

The order of columns in the CSV, which is used for both export and import, is as follows:

Table 1.23. CSV columns

Index  Key                Type        Description
1      event              identifier  Identifier of event
2      status             enum        Status of event, can be ACTIVE | COMPLETED | VISITED | FUTURE_VISIT | LATE_VISIT | SKIPPED
3      program            identifier  Identifier of program
4      programStage       identifier  Identifier of program stage
5      enrollment         identifier  Identifier of enrollment (program stage instance)
6      orgUnit            identifier  Identifier of organisation unit
7      eventDate          date        Event date
8      dueDate            date        Due date
9      latitude           double      Latitude where event happened
10     longitude          double      Longitude where event happened
11     dataElement        identifier  Identifier of data element
12     value              string      Value / measure of event
13     storedBy           string      Event was stored by (defaults to current user)
14     providedElsewhere  boolean     Was this value collected somewhere else
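
As an illustrative sketch of this column order (the header row, the empty event and enrollment columns, and the <programstage-id> placeholder are all assumptions, reusing identifiers from the XML example earlier), a single event row could look like this:

event,status,program,programStage,enrollment,orgUnit,eventDate,dueDate,latitude,longitude,dataElement,value,storedBy,providedElsewhere
,ACTIVE,eBAyeGv0exc,<programstage-id>,,DiszpKrYNg8,2013-05-17,2013-05-17,59.8,10.9,qrur9Dvnyt5,22,admin,false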

1.11.3. Querying and reading events


This section explains how to read out the events that have been stored in the DHIS2 instance. For more advanced uses
of the event data, please see the section on event analytics. The output format from the /api/events endpoint will match
the format that is used to send events to it (a format which the event analytics API does not support). Both XML and
JSON are supported, either through adding .json/.xml or by setting the appropriate Accept header.

Table 1.24. Events resource query parameters

Key                    Type        Required                             Description
program                identifier  true (if programStage not provided)  Identifier of program
programStage           identifier  false                                Identifier of program stage
programStatus          enum        false                                Status of event in program, can be ACTIVE | COMPLETED | CANCELLED
followUp               boolean     false                                Whether event is considered for follow up in program, can be true | false or omitted
trackedEntityInstance  identifier  false                                Identifier of tracked entity instance
orgUnit                identifier  true                                 Identifier of organisation unit
ouMode                 enum        false                                Org unit selection mode, can be SELECTED | CHILDREN | DESCENDANTS
startDate              date        false                                Only events newer than this date
endDate                date        false                                Only events older than this date
status                 enum        false                                Status of event, can be ACTIVE | COMPLETED | VISITED | FUTURE_VISIT | LATE_VISIT | SKIPPED
skipMeta               boolean     false                                Exclude the meta data part of response (improves performance)

1.11.3.1. Examples

Query for all events with children of a certain organisation unit:

api/events.json?orgUnit=YuQRtpLP10I&ouMode=CHILDREN

Query for all events with all descendants of a certain organisation unit, implying all organisation units in the sub-
hierarchy:

api/events.json?orgUnit=O6uvpzGd5pu&ouMode=DESCENDANTS

Query for all events with a certain program and organisation unit:

api/events.json?orgUnit=DiszpKrYNg8&program=eBAyeGv0exc

Query for all events with a certain program and organisation unit for a specific tracked entity instance:

api/events.json?orgUnit=DiszpKrYNg8&
program=eBAyeGv0exc&trackedEntityInstance=gfVxE3ALA9m

Query for all events with a certain program and organisation unit older or equal to 2014-02-03:

api/events.json?orgUnit=DiszpKrYNg8&program=eBAyeGv0exc&endDate=2014-02-03

Query for all events with a certain program stage, organisation unit and tracked entity instance in the year 2014:

api/events.json?orgUnit=DiszpKrYNg8
&program=eBAyeGv0exc&trackedEntityInstance=gfVxE3ALA9m
&startDate=2014-01-01&endDate=2014-12-31
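
As a further sketch combining parameters from the table above, query for all completed events for a certain program and organisation unit, excluding the meta data part of the response:

api/events.json?orgUnit=DiszpKrYNg8&program=eBAyeGv0exc&status=COMPLETED&skipMeta=true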

1.12. Forms
To retrieve information about a form (which corresponds to a data set and its sections) you can interact with the form
resource. The form response is accessible as XML and JSON and will provide information about each section (group)
in the form as well as each field in the sections, including label and identifiers. By supplying period and organisation
unit identifiers the form response will be populated with data values.

Table 1.25. Form query parameters

Parameter  Option        Description
pe         ISO period    Period for which to populate form data values.
ou         UID           Organisation unit for which to populate form data values.
metaData   false | true  Whether to include meta-data about each data element of form sections.

To retrieve the form for a data set you can do a GET request like this:


api/dataSets/<dataset-id>/form.json

To retrieve the form for the data set with identifier "BfMAe6Itzgt" in XML:

api/dataSets/BfMAe6Itzgt/form

To retrieve the form including meta-data in JSON:

api/dataSets/BfMAe6Itzgt/form.json?metaData=true

To retrieve the form filled with data values for a specific period and organisation unit in XML:

api/dataSets/BfMAe6Itzgt/form.xml?ou=DiszpKrYNg8&pe=201401

When it comes to custom data entry forms, this resource also allows for creating such forms directly for a data set.
This can be done through a POST or PUT request with content type text/html where the payload is the custom form
markup such as:

curl -d @form.html "localhost/api/dataSets/BfMAe6Itzgt/form" -H "Content-Type:text/html" -u admin:district -X PUT -v

1.13. Validation
To generate a data validation summary you can interact with the validation resource. The validation/dataSet resource
is optimized for data entry clients for validating a data set / form, and can be accessed like this:

api/validation/dataSet/QX4ZTUbOt3a.json?pe=201501&ou=DiszpKrYNg8

The first path variable is an identifier referring to the data set to validate. XML and JSON resource representations
are supported. The response contains violations to validation rules. This will be extended with more validation types
in coming versions.

To retrieve validation rules which are relevant for a specific data set, meaning validation rules with formulas where all
data elements are part of the specific data set, you can make a GET request to the validationRules resource like this:

api/validationRules?dataSet=<dataset-id>

The validation rules have a left side and a right side, which is compared for validity according to an operator. The valid
operator values are found in the table below.

Table 1.26. Operators

Value Description
equal_to Equal to
not_equal_to Not equal to
greater_than Greater than
greater_than_or_equal_to Greater than or equal to
less_than Less than
less_than_or_equal_to Less than or equal to

The left side and right side expressions are mathematical expressions which can contain references to data elements
and category option combinations on the following format:

#{<dataelement-id>.<catoptcombo-id>}

The left side and right side expressions have a missing value strategy. This refers to how the system should treat data
values which are missing for data elements / category option combination references in the formula in terms of whether
the validation rule should be checked for validity or skipped. The valid missing value strategies are found in the table
below.


Table 1.27. Missing value strategies

Value                       Description
SKIP_IF_ANY_VALUE_MISSING   Skip validation rule if any data value is missing
SKIP_IF_ALL_VALUES_MISSING  Skip validation rule if all data values are missing
NEVER_SKIP                  Never skip validation rule irrespective of missing data values

1.14. Indicators
To retrieve indicators you can make a GET request to the indicators resource like this:

api/indicators

The indicators represent expressions which can be calculated and presented as a result. The indicator expressions are
split into a numerator and denominator. The numerators and denominators are mathematical expressions which can
contain variables for data elements, constants and organisation unit groups. The syntax looks like this:

#{<dataelement-id>.<catoptcombo-id>} + C{<constant-id>} + OUG{<orgunitgroup-id>}

A corresponding example looks like this:

#{P3jJH5Tu5VC.S34ULMcHMca} + C{Gfd3ppDfq8E} + OUG{CXw2yu5fodb}

Note that for data element variables the category option combo identifier can be omitted. The variable will then
represent the total for the data element, e.g. across all category option combos. Example:

#{P3jJH5Tu5VC} + 2

Expressions can be any kind of valid mathematical expression, as an example:

( 2 * #{P3jJH5Tu5VC.S34ULMcHMca} ) / ( #{FQ2o8UBlcrS.S34ULMcHMca} - 200 ) * 25
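
To make the evaluation concrete, assume (purely for illustration) that #{P3jJH5Tu5VC.S34ULMcHMca} is 150 and #{FQ2o8UBlcrS.S34ULMcHMca} is 300. The expression then evaluates to ( 2 * 150 ) / ( 300 - 200 ) * 25 = 300 / 100 * 25 = 75.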

1.15. Complete data set registrations


This section is about complete data set registrations for data sets. A registration marks a data set as completely
captured.

1.15.1. Completing and un-completing data sets


This section explains how you can register and un-register a data set as complete. To complete or un-complete a data
set you will interact with the completeDataSetRegistrations resource:

/api/completeDataSetRegistrations

This resource supports the methods POST for registration and DELETE for un-registration. The following query
parameters are supported:

Table 1.28. Complete data set registrations query parameters

Query parameter  Required                   Description
ds               Yes                        Data set identifier
pe               Yes                        Period identifier
ou               Yes                        Organisation unit identifier
cc               No (must combine with cp)  Attribute combo identifier (for locking check)
cp               No (must combine with cc)  Attribute option identifiers, separated with ; for multiple values (for locking check)
multiOu          No (default false)         Whether registration applies to sub units
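
A sketch of a completion request using these parameters, with identifiers reused from other examples in this chapter; the corresponding un-registration is the same URL issued with -X DELETE:

curl "https://apps.dhis2.org/demo/api/completeDataSetRegistrations?ds=pBOMPrpg1QX&pe=201401&ou=DiszpKrYNg8" -X POST -u admin:district -v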

1.15.2. Sending bulks of complete data set registrations


This section explains how to send complete data set registrations. To send registrations you can issue a POST request
to the completeDataSetRegistrations resource. Completing a data set is per data set, period and organisation unit. You
can optionally specify attribute option combo. You must specify the date of when the data set was completed. The
format is as follows:

<?xml version='1.0' encoding='UTF-8'?>
<completeDataSetRegistrations xmlns="http://dhis2.org/schema/dxf/2.0">
<completeDataSetRegistration>
<dataSet id="pBOMPrpg1QX" />
<period id="201401" />
<attributeOptionCombo id="bRowv6yZOF2" />
<date>2014-01-01</date>
<organisationUnit id="DiszpKrYNg8" />
</completeDataSetRegistration>
<completeDataSetRegistration>
<dataSet id="pBOMPrpg1QX" />
<period id="201401" />
<attributeOptionCombo id="bRowv6yZOF2" />
<date>2014-01-01</date>
<organisationUnit id="g8upMTyEZGZ" />
</completeDataSetRegistration>
<completeDataSetRegistration>
<dataSet id="pBOMPrpg1QX" />
<period id="201401" />
<attributeOptionCombo id="bRowv6yZOF2" />
<date>2010-01-01</date>
<organisationUnit id="jNb63DIHuwU" />
</completeDataSetRegistration>
</completeDataSetRegistrations>

To test the resource you can issue a request using curl:

curl -d @completereg.xml "https://apps.dhis2.org/demo/api/completeDataSetRegistrations" -H "Content-Type:application/xml" -u admin:district -v

1.15.3. Reading complete data set registrations


This section explains how to retrieve data set completeness registrations. We will be using the
completeDataSetRegistrations resource. The query parameters to use are these:

Table 1.29. Complete data set registrations query parameters

Parameter  Description
dataSet    Data set identifier, can be specified multiple times
period     Period identifier
startDate  Start date for the time span of the values to export
endDate    End date for the time span of the values to export
orgUnit    Organisation unit identifier, can be specified multiple times
children   Whether to include the children in the hierarchy of the organisation units

The dataSet and orgUnit parameters can be repeated in order to include multiple data sets and organisation units. An
example request looks like this:

curl "https://apps.dhis2.org/demo/api/completeDataSetRegistrations?
dataSet=pBOMPrpg1QX&dataSet=pBOMPrpg1QX&startDate=2014-01-01&endDate=2014-01-31
&orgUnit=YuQRtpLP10I&orgUnit=vWbkYPRmKyS&children=true" -H "Accept:application/xml" -u
admin:district -v

You can get the response in XML and JSON format. You can indicate which response format you prefer through the
Accept HTTP header as in the example above. For XML use application/xml; for JSON use application/json.

1.16. Data approval


This section explains how to approve, unapprove and check approval status using the dataApprovals resource. Approval
is done per data set, period, organisation unit and attribute option combo.

To get approval information for a data set you can issue a GET request similar to this:

api/dataApprovals?ds=aLpVgfXiz0f&pe=2013&ou=DiszpKrYNg8

Table 1.30. Data approval query parameters

Query parameter  Required  Description
ds               Yes       Data set identifier
pe               Yes       Period identifier
ou               Yes       Organisation unit identifier
cog              No        Attribute category option group identifier
cp               No        Attribute category option identifier(s), repeat the parameter for multiple values

This will give you a response something like this:

{
  "mayApprove": false,
  "mayUnapprove": false,
  "mayAccept": false,
  "mayUnaccept": false,
  "state": "UNAPPROVED_ELSEWHERE"
}

The returned parameters are:

Table 1.31. Data approval return parameters

Return parameter  Description
mayApprove        Whether the current user may approve this data selection.
mayUnapprove      Whether the current user may unapprove this data selection.
mayAccept         Whether the current user may accept this data selection.
mayUnaccept       Whether the current user may unaccept this data selection.
state             One of the data approval states from the table below.


Table 1.32. Data approval states

State                 Description
UNAPPROVABLE          Data approval does not apply to this selection. (Data is neither "approved" nor "unapproved".)
UNAPPROVED_WAITING    Data could be approved for this selection, but is waiting for some lower-level approval before it is ready to be approved.
UNAPPROVED_ELSEWHERE  Data is unapproved, and is waiting for approval somewhere else (not approvable here).
UNAPPROVED_READY      Data is unapproved, and is ready to be approved for this selection.
APPROVED_HERE         Data is approved, and was approved here (so could be unapproved here).
APPROVED_ELSEWHERE    Data is approved, but was not approved here (so cannot be unapproved here). This covers the following cases:
                      • Data is approved at a higher level.
                      • Data is approved for a wider scope of category options.
                      • Data is approved for all sub-periods in the selected period.
                      In the first two cases, there is a single data approval object that covers the selection. In the third case there is not.
ACCEPTED_HERE         Data is approved and accepted here (so could be unapproved here).
ACCEPTED_ELSEWHERE    Data is approved and accepted, but elsewhere.

Note that when querying for the status of data approval, you may specify any combination of the query parameters.
The combination you specify does not need to describe the place where data is to be approved at one of the approval
levels. For example:
• The organisation unit might not be at an approval level. The approval status is determined by whether data is approved
at an approval level for an ancestor of the organisation unit.
• You may specify individual attribute category options. The approval status is determined by whether data is approved
for an attribute category option combination that includes one or more of these options.
• You may specify a time period that is longer than the period for the data set at which the data is entered and approved.
The approval status is determined by whether the data is approved for all the data set periods within the period you
specify.

To approve data you can issue a POST request to the dataApprovals resource. To un-approve data you can issue a
DELETE request to the dataApprovals resource.

To accept data you can issue a POST request to the dataApprovals/acceptances resource. To un-accept data you can
issue a DELETE request to the dataApprovals/acceptances resource.

These requests contain the following parameters:

Table 1.33. Data approval action parameters

Action parameter  Required  Description
ds                Yes       Data set identifier
pe                Yes       Period identifier
ou                Yes       Organisation unit identifier
cog               No        Attribute category option group identifier. Required if approving for an approval level that contains a category option group set, otherwise must not be present.
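
A sketch of an approval request, reusing the selection from the query example above (whether this exact selection is ready for approval depends on the approval level configuration); un-approval is the same URL issued with -X DELETE:

curl "https://apps.dhis2.org/demo/api/dataApprovals?ds=aLpVgfXiz0f&pe=2013&ou=DiszpKrYNg8" -X POST -u admin:district -v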

Note that, unlike querying the data approval status, you must specify parameters that correspond to a selection of data
that could be approved. In particular, all of the following must be true:

• The organisation unit's level must be specified by an approval level.
• The category option group (if specified) must be a member of an approval level's category option group set (if
specified) for an approval level with the same organisation unit level.
• The time period specified must match the period type of the data set.
• The data set must specify that data can be approved for this data set.

1.17. Messages
DHIS 2 features a mechanism for sending messages for purposes such as user feedback, notifications and general
information to users. Messages are delivered to the DHIS 2 message inbox but can also be sent to the user's email
addresses and mobile phones as SMS. In this example we will see how we can utilize the Web API to send, read and
manage messages. We will pretend to be the DHIS Administrator user and send a message to the Mobile user. We will
then pretend to be the mobile user and read our new message. Following this we will manage the admin user inbox
by marking and removing messages.

1.17.1. Writing and reading messages


The resource we need to interact with when sending and reading messages is the messageConversations resource. We
start by visiting the Web API entry point at http://apps.dhis2.org/demo/api where we find and follow the link to the
messageConversations resource at http://apps.dhis2.org/demo/api/messageConversations. The description tells us that
we can use a POST request to create a new message using the following XML format for sending to multiple users:

<message xmlns="http://dhis2.org/schema/dxf/2.0">
<subject>This is the subject</subject>
<text>This is the text</text>
<users>
<user id="user1ID" />
<user id="user2ID" />
<user id="user3ID" />
</users>
</message>

For sending to all users contained in one or more user groups, we can use:

<message xmlns="http://dhis2.org/schema/dxf/2.0">
<subject>This is the subject</subject>
<text>This is the text</text>
<userGroups>
<userGroup id="userGroup1ID" />
<userGroup id="userGroup2ID" />
<userGroup id="userGroup3ID" />
</userGroups>
</message>

For sending to all users connected to one or more organisation units, we can use:

<message xmlns="http://dhis2.org/schema/dxf/2.0">
<subject>This is the subject</subject>
<text>This is the text</text>
<organisationUnits>
<organisationUnit id="ou1ID" />
<organisationUnit id="ou2ID" />
<organisationUnit id="ou3ID" />
</organisationUnits>
</message>

Since we want to send a message to our friend the mobile user we need to look up her identifier. We do so by going to
the Web API entry point and follow the link to the users resource at http://apps.dhis2.org/demo/api/users. We continue
by following the link to the mobile user at https://apps.dhis2.org/demo/api/users/PhzytPW3g2J where we learn that her
identifier is PhzytPW3g2J. We are now ready to put our XML message together to form a message where we want to
ask the mobile user whether she has reported data for January 2014:

<message xmlns="http://dhis2.org/schema/dxf/2.0">
<subject>Mortality data reporting</subject>
<text>Have you reported data for the Mortality data set for January 2014?</text>
<users>
<user id="PhzytPW3g2J" />
</users>
</message>

To test this we save the XML content into a file called message.xml. We use cURL to dispatch the message to the
DHIS 2 demo instance where we indicate that the content-type is XML and authenticate as the admin user:

curl -d @message.xml "https://apps.dhis2.org/demo/api/messageConversations" -H "Content-Type:application/xml" -u admin:district -X POST -v

A corresponding payload in JSON and POST command look like this:

{
"subject": "Hey",
"text": "How are you?",
"users": [
{
"id": "OYLGMiazHtW"
},
{
"id": "N3PZBUlN8vq"
}
],
"userGroups": [
{
"id": "ZoHNWQajIoe"
}
],
"organisationUnits": [
{
"id": "DiszpKrYNg8"
}
]
}

curl -d @message.json "https://apps.dhis2.org/demo/api/messageConversations" -H "Content-Type:application/json" -u admin:district -X POST -v

If all is well we receive a 201 Created HTTP status code. Also note that we receive a Location HTTP header whose
value informs us of the URL of the newly created message conversation resource - this can be used by a consumer
to perform further action.

We will now pretend to be the mobile user and read the message which was just sent by dispatching a GET request to
the messageConversations resource. We supply an Accept header with application/xml as the value to indicate that we
are interested in the XML resource representation and we authenticate as the mobile user:

curl "https://apps.dhis2.org/demo/api/messageConversations" -H "Accept:application/


xml" -u mobile:district -X GET -v

In response we get the following XML:

<messageConversations xmlns="http://dhis2.org/schema/dxf/2.0"
link="https://apps.dhis2.org/demo/api/messageConversations">
<messageConversation name="Mortality data reporting" id="ZjHHSjyyeJ2"
link="https://apps.dhis2.org/demo/api/messageConversations/ZjHHSjyyeJ2"/>
<messageConversation name="DHIS version 2.7 is deployed" id="GDBqVfkmnp2"
link="https://apps.dhis2.org/demo/api/messageConversations/GDBqVfkmnp2"/>

50
Web API Managing messages

</messageConversations>

From the response we are able to read the identifier of the newly sent message which is ZjHHSjyyeJ2. Note that the
link to the specific resource is embedded and can be followed in order to read the full message. From the description
at http://apps.dhis2.org/demo/api/messageConversations we learned that we can reply directly to an existing message
conversation once we know the URL by including the message text as the request payload (body). We are now able
to construct a URL for sending our reply:

curl -d "Yes the Mortality data set has been reported" "https://apps.dhis2.org/demo/
api/messageConversations/ZjHHSjyyeJ2" -H "Content-Type:text/plain" -u mobile:district
-X POST -v

If all went according to plan you will receive a 200 OK status code.

1.17.2. Managing messages


Note: the Web-API calls discussed in this section were introduced in DHIS 2.17

As users receive and send messages, conversations will start to pile up in their inboxes, eventually becoming laborious
to track. We will now have a look at managing a user's message inbox by removing and marking conversations through
the Web-API. We will do so by performing some maintenance in the inbox of the DHIS Administrator user.

First, let's have a look at removing a few messages from the inbox. Be sure to note that all removal operations described
here only remove the relation between a user and a message conversation. In practical terms this means that we are not
deleting the messages themselves (or any content for that matter) but are simply removing the message thread from
the user such that it is no longer listed in the /api/messageConversations resource.

To remove a message conversation from a user's inbox we need to issue a DELETE request to the resource identified
by the id of the message conversation and the participating user. For example, to remove the user with id xE7jOejl9FI
from the conversation with id jMe43trzrdi:

curl "https://apps.dhis2.org/demo/api/messageConversations/jMe43trzrdi/xE7jOejl9FI" -X DELETE -u admin:district -v

If the request was successful the server will reply with a 200 OK. The response body contains an XML or JSON object
(according to the accept header of the request) containing the id of the removed user.

{ "removed" : ["xE7jOejl9FI"] }

On failure the returned object will contain a message payload which describes the error.

{ "message" : "No user with uid: dMV6G0tPAEa" }

The observant reader will already have noticed that the object returned on success in our example is actually a list of
ids (containing a single entry). This is due to the endpoint also supporting batch removals. The request is made to the
same messageConversations resource but follows slightly different semantics. For batch operations the conversation
ids are given as query string parameters. The following example removes two separate message conversations for the
current user:

curl "https://apps.dhis2.org/demo/api/messageConversations?
mc=WzMRrCosqc0&mc=lxCjiigqrJm" -X DELETE -u admin:district -v

If you have sufficient permissions, conversations can be removed on behalf of another user by giving an optional user
id parameter.

curl "https://apps.dhis2.org/demo/api/messageConversations?
mc=WzMRrCosqc0&mc=lxCjiigqrJm&user=PhzytPW3g2J" -X DELETE -u admin:district -v

As indicated, batch removals will return the same message format as for single operations. The list of removed objects
will reflect the successful removals performed. Partially erroneous requests (e.g. a non-existing id) will therefore not
cancel the entire batch operation.

Messages carry a boolean read property. This allows tracking whether a user has seen (opened) a message or not. In
a typical application scenario (e.g. the DHIS 2 web portal) a message will be marked read as soon as the user opens it
for the first time. However, users might want to manage the read or unread status of their messages in order to keep
track of certain conversations.

Marking messages read or unread follows similar semantics as batch removals, and also supports batch operations. To
mark messages as read we issue a POST to the messageConversations/read resource with a request body containing
one or more message ids. To mark messages as unread we issue an identical request to the messageConversations/
unread resource. As is the case for removals, an optional user request parameter can be given.

Let's mark a couple of messages as read by the current user:

curl "https://apps.dhis2.org/dev/api/messageConversations/read" -d
'["ZrKML5WiyFm","Gc03smoTm6q"]' -X POST -H "Content-Type: application/json" -u
admin:district -v

The response is a 200 OK with the following JSON body:

{ "markedRead" : [ "ZrKML5WiyFm", "Gc03smoTm6q" ] }

1.18. Interpretations
For certain analysis-related resources in DHIS, like charts, maps and report tables, one can write and share a data
interpretation. An interpretation is simply a link to the relevant resource together with a text expressing some insight
about the data. Interpretation access control follows the access given for the interpreted object.

1.18.1. Reading interpretations


To read interpretations we will interact with the api/interpretations resource. The output in JSON response format
could look like below (use e.g. api/interpretations.json):

{
"interpretations": [{
"created": "2013-10-07T11:37:19.273+0000",
"lastUpdated": "2013-10-07T12:08:58.028+0000",
"type": "map",
"href": "https://apps.dhis2.org/demo/api/interpretations/d3BukolfFZI",
"id": "d3BukolfFZI"
}, {
"created": "2013-05-30T10:24:06.181+0000",
"lastUpdated": "2013-05-30T10:25:08.066+0000",
"type": "reportTable",
"href": "https://apps.dhis2.org/demo/api/interpretations/XSHiFlHAhhh",
"id": "XSHiFlHAhhh"
}, {
"created": "2013-05-29T14:47:13.081+0000",
"lastUpdated": "2013-05-29T14:47:13.081+0000",
"type": "chart",
"href": "https://apps.dhis2.org/demo/api/interpretations/kr4AnZmYL43",
"id": "kr4AnZmYL43"
}]
}

An interpretation contains properties for identifier, date of creation and date of last modification. The type property
refers to the kind of object being interpreted, and is useful to show an appropriate visual clue in a client. Valid options
are "chart", "map", "reportTable" and "dataSetReport". By following the link given in the "href" property one can get
more information about a specific interpretation. In the case of the map interpretation, the response will look like this:

{
"created": "2013-10-07T11:37:19.273+0000",
"lastUpdated": "2014-10-07T12:08:58.028+0000",
"map": {

52
Web API Writing interpretations

"name": "ANC: ANC 2 Coverage",


"created": "2014-11-13T12:01:21.918+0000",
"lastUpdated": "2014-11-13T12:01:21.918+0000",
"href": "https://apps.dhis2.org/demo/api/maps/bhmHJ4ZCdCd",
"id": "bhmHJ4ZCdCd"
},
"text": "We can see that the ANC 2 coverage of Kasonko and Lei districts are under
40 %. What could be the cause for this?",
"comments": [{
"created": "2014-10-07T12:08:58.026+0000",
"lastUpdated": "2014-10-07T12:08:58.026+0000",
"text": "Due to the rural environment, getting women to the facilities is a
challenge. Outreach campaigns might be helpful.",
"href": "https://apps.dhis2.org/demo/api/null/iB4Etq8yTE6",
"id": "iB4Etq8yTE6"
}],
"type": "map",
"href": "https://apps.dhis2.org/demo/api/interpretations/d3BukolfFZI",
"id": "d3BukolfFZI"
}

The map interpretation contains identifier and type information in the "id" and "type" properties. The interpretation text
is available in the "text" property and references to any comments in the "comments" list. It also contains information
about the interpreted object, in this case the "map" property. Note that you can follow the link to the actual map through
the "href" property. For all analytical objects you can append /data to the URL to retrieve the data associated with the
resource, as opposed to the meta-data. As an example, by following the map link and appending /data one can retrieve
a PNG (image) representation of the thematic map through the following URL:

https://apps.dhis2.org/demo/api/maps/bhmHJ4ZCdCd/data

1.18.2. Writing interpretations


We will start by writing an interpretation for the chart with identifier EbRN2VIbPdV. To write chart interpretations we
will interact with the http://apps.dhis2.org/demo/api/interpretations/chart/{chartId} resource. The interpretation will
be the request body. Based on this we can put together the following request using cURL:

curl -d "This chart shows a significant ANC 1-3 dropout" "https://apps.dhis2.org/demo/


api/interpretations/chart/EbRN2VIbPdV" \
-H "Content-Type:text/plain" -u admin:district -v

Second we will write a comment on the interpretation we just wrote. By looking at the interpretation response you will
see that a Location header is returned. This header tells us the URL of the newly created interpretation and from that
we can read its identifier. This identifier is randomly generated so you will have to replace the one in the command
below with your own. To write a comment we can interact with the http://apps.dhis2.org/demo/api/interpretations/
{interpretationId}/comment like this:

curl -d "An intervention is needed" "https://apps.dhis2.org/demo/api/interpretations/


j8sjHLkK8uY/comment"
-H "Content-Type:text/plain" -u admin:district -v

You can also write interpretations for report tables in a similar way by interacting with the http://app.dhis2.org/demo/
api/interpretations/reportTable/{reportTableId} resource. For report tables you can also provide an optional ou query
parameter to supply an organisation unit identifier in the case where the report table has an organisation unit report
parameter:

curl -d "This table reveals poor data quality" "https://apps.dhis2.org/demo/api/


interpretations/reportTable/xIWpSo5jjT1?ou=O6uvpzGd5pu"
-H "Content-Type:text/plain" -u admin:district -v

1.18.3. Creating, updating and removing interpretation comments


Creating comments to existing interpretations:


POST "plain-text comment" to /api/interpretations/ID/comments

Updating comments in existing interpretations:

PUT "plain-text comment" to /api/interpretations/ID/comments/ID

Removing comments in existing interpretations:

DELETE /api/interpretations/ID/comments/ID
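
A sketch of updating a comment with cURL, following the request pattern used earlier in this section (both identifiers are placeholders):

curl -d "Updated comment text" "https://apps.dhis2.org/demo/api/interpretations/<interpretation-id>/comments/<comment-id>" -X PUT -H "Content-Type:text/plain" -u admin:district -v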

1.19. Viewing analytical resource representations


DHIS 2 has several resources for data analysis. These resources include charts, maps, reportTables, reports and
documents. By visiting these resources you will retrieve information about the resource. For instance, by navigating
to api/charts/R0DVGvXDUNP the response will contain the name, last date of modification and so on for the chart. To
retrieve the analytical representation, for instance a PNG representation of the chart, you can append /data to all these
resources. For instance, by visiting api/charts/R0DVGvXDUNP/data the system will return a PNG image of the chart.

Table 1.34. Analytical resources

Resource Description Data URL Resource representations


charts Charts api/charts/<identifier>/data png
eventCharts Event charts api/eventCharts/<identifier>/data png
maps Maps api/maps/<identifier>/data png
reportTables Pivot tables api/reportTables/<identifier>/data json | jsonp | html | xml | pdf | xls
| csv
reports Standard reports api/reports/<identifier>/data pdf | xls | html
documents Resources api/documents/<identifier>/data <follows document>

The data content of the analytical representations can be modified by providing a date query parameter. This requires
that the analytical resource is set up for relative periods for the period dimension.

Table 1.35. Data query parameters

Query parameter  Value                      Description
date             Date in yyyy-MM-dd format  Basis for relative periods in report (requires relative periods)

Table 1.36. Query parameters for png / image types (charts, maps)

Query parameter Description


width Width of image in pixels
height Height of image in pixels

Some examples of valid URLs for retrieving various analytical representations are listed below.

api/charts/R0DVGvXDUNP/data
api/charts/R0DVGvXDUNP/data?date=2013-06-01

api/reportTables/jIISuEWxmoI/data.html
api/reportTables/jIISuEWxmoI/data.html?date=2013-01-01
api/reportTables/FPmvWs7bn2P/data.xls
api/reportTables/FPmvWs7bn2P/data.pdf

api/maps/DHE98Gsynpr/data
api/maps/DHE98Gsynpr/data?date=2013-07-01


api/reports/OeJsA6K1Otx/data.pdf
api/reports/OeJsA6K1Otx/data.pdf?date=2014-01-01

1.20. Plugins
DHIS 2 comes with plugins which enable you to embed live data directly in your web portal or web site. Currently,
plugins exist for charts, maps and pivot tables.

1.20.1. Embedding pivot tables with the Pivot Table plug-in


In this example we will see how we can embed good-looking, light-weight html pivot tables with data served from
a DHIS back-end into a Web page. To accomplish this we will use the Pivot table plug-in. The plug-in is written in
Javascript and depends on the Ext JS library only. A complete working example can be found at http://apps.dhis2.org/
portal/table.html. Open the page in a web browser and view the source to see how it is set up.

We start by having a look at what the complete html file could look like. This setup puts two tables in our web page.
The first one is referring to an existing table. The second is configured inline.

<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="http://dhis2-cdn.org/v215/ext/
resources/css/ext-plugin-gray.css" />
<script src="https://dhis2-cdn.org/v215/ext/ext-all.js"></script>
<script src="https://dhis2-cdn.org/v215/plugin/table.js"></script>

<script>
var base = "https://apps.dhis2.org/demo";

// Login - if OK, call the setLinks function

Ext.onReady( function() {
Ext.Ajax.request({
url: base + "dhis-web-commons-security/login.action",
method: "POST",
params: { j_username: "portal", j_password: "Portal123" },
success: setLinks
});
});

function setLinks() {

// Referring to an existing table through the id parameter, render to "table1"


div

DHIS.getTable({ url: base, el: "table1", id: "R0DVGvXDUNP" });

// Full table configuration, render to "table2" div

DHIS.getTable({
url: base,
el: "table2",
columns: [
{dimension: "de", items: [{id: "YtbsuPPo010"}, {id: "l6byfWFUGaP"}]}
],
rows: [
{dimension: "pe", items: [{id: "LAST_12_MONTHS"}]}
],
filters: [
{dimension: "ou", items: [{id: "USER_ORGUNIT"}]}
],
// All following options are optional
showTotals: false,
showSubTotals: false,
hideEmptyRows: true,
showHierarchy: true,
displayDensity: "comfortable",
fontSize: "large",
digitGroupSeparator: "comma",
legendSet: {id: "BtxOoQuLyg1"}
});
}
</script>
</head>

<body>
<div id="table1"></div>
<div id="table2"></div>
</body>
</html>

Three files are included in the header section of the HTML document. The first two files are the Ext JS javascript
library (we use the DHIS 2 content delivery network in this case) and its css stylesheet. The third file is the Pivot table
plug-in. Make sure the path is pointing to your DHIS server installation.

<link rel="stylesheet" type="text/css" href="http://dhis2-cdn.org/v215/ext/resources/


css/ext-plugin-gray.css" />
<script src="http://dhis2-cdn.org/v215/ext/ext-all.js"></script>
<script src="http://dhis2-cdn.org/v215/plugin/table.js"></script>

To authenticate with the DHIS server we use the same approach as in the previous section. In the header of the HTML
document we include the following Javascript inside a script element. The setLinks method will be implemented later.
Make sure the base variable is pointing to your DHIS installation.

var base = "https://apps.dhis2.org/demo/";

Ext.onReady( function() {
Ext.Ajax.request({
url: base + "dhis-web-commons-security/login.action",
method: "POST",
params: { j_username: "portal", j_password: "Portal123" },
success: setLinks
});
});

Now let us have a look at the various options for the Pivot table plug-in. Two properties are required: el and url (please
refer to the table below). Now, if you want to refer to pre-defined tables already made inside DHIS it is sufficient to
provide the additional id parameter. If you instead want to configure a pivot table dynamically you should omit the id
parameter and provide data dimensions inside a columns array, a rows array and optionally a filters array instead.

A data dimension is defined as an object with a text property called dimension. This property accepts the following
values: in (indicator), de (data element), ds (data set), dc (data element operand), pe (period), ou (organisation unit) or
the id of any organisation unit group set or data element group set (can be found in the web api). The data dimension
also has an array property called items which accepts objects with an id property.

To sum up, if you want to have e.g. "ANC 1 Coverage", "ANC 2 Coverage" and "ANC 3 Coverage" on the columns
in your table you can make the following columns config:

columns: [{
dimension: "in", // "in", "de", "ds", "dc", "pe", "ou" or any dimension id
items: [
{id: "Uvn6LCg7dVU"}, // the id of ANC 1 Coverage

56
Web API Embedding pivot tables with the Pivot Table
plug-in
{id: "OdiHJayrsKo"}, // the id of ANC 2 Coverage
{id: "sB79w2hiLp8"} // the id of ANC 3 Coverage
]
}]

Table 1.37. Pivot table plug-in configuration

Param                Type     Required                 Options (default first)               Description
el                   string   Yes                                                            Identifier of the HTML element to render the table in your web page
url                  string   Yes                                                            Base URL of the DHIS server
id                   string   No                                                             Identifier of a pre-defined table (favorite) in DHIS
columns              array    Yes (if no id provided)                                        Data dimensions to include in table as columns
rows                 array    Yes (if no id provided)                                        Data dimensions to include in table as rows
filter               array    No                                                             Data dimensions to include in table as filters
showTotals           boolean  No                       true | false                          Whether to display totals for columns and rows
showSubTotals        boolean  No                       true | false                          Whether to display sub-totals for columns and rows
hideEmptyRows        boolean  No                       false | true                          Whether to hide rows with no data
showHierarchy        boolean  No                       false | true                          Whether to extend orgunit names with the name of all ancestors
displayDensity       string   No                       "normal" | "comfortable" | "compact"  The amount of space inside table cells
fontSize             string   No                       "normal" | "large" | "small"          Table font size
digitGroupSeparator  string   No                       "space" | "comma" | "none"            How values are formatted: 1 000 | 1,000 | 1000
legendSet            object   No                                                             Show a color indicator next to the values (currently reusing legend sets from GIS)
We continue by adding one pre-defined and one dynamic pivot table to our HTML document. You can browse the list
of available pivot tables using the Web API here: http://apps.dhis2.org/demo/api/reportTables.

function setLinks() {
DHIS.getTable({ url: base, el: "table1", id: "R0DVGvXDUNP" });

DHIS.getTable({
url: base,
el: "table2",
columns: [
{dimension: "de", items: [{id: "YtbsuPPo010"}, {id: "l6byfWFUGaP"}]}
],
rows: [
{dimension: "pe", items: [{id: "LAST_12_MONTHS"}]}
],
filters: [
{dimension: "ou", items: [{id: "USER_ORGUNIT"}]}
],
// All following options are optional
showTotals: false,
showSubTotals: false,
hideEmptyRows: true,
showHierarchy: true,
displayDensity: "comfortable",
fontSize: "large",
digitGroupSeparator: "comma",
legendSet: {id: "BtxOoQuLyg1"}
});
}

Finally we include some div elements in the body section of the HTML document with the identifiers referred to in
the plug-in Javascript.

<div id="table1"></div>
<div id="table2"></div>

To see a complete working example please visit http://apps.dhis2.org/portal/table.html.

1.20.2. Embedding charts with the Visualizer chart plug-in


In this example we will see how we can embed good-looking Ext JS charts (http://www.sencha.com/products/extjs)
with data served from a DHIS back-end into a Web page. To accomplish this we will use the DHIS Visualizer plug-in.
The plug-in is written in Javascript and depends on the Ext JS library only. A complete working example can be found
at http://apps.dhis2.org/portal/chart.html. Open the page in a web browser and view the source to see how it is set up.

We start by having a look at what the complete html file could look like. This setup puts two charts in our web page.
The first one is referring to an existing chart. The second is configured inline.

<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="http://dhis2-cdn.org/v215/ext/
resources/css/ext-plugin-gray.css" />
<script src="http://dhis2-cdn.org/v215/ext/ext-all.js"></script>
<script src="http://dhis2-cdn.org/v215/plugin/chart.js"></script>

<script>
var base = "https://apps.dhis2.org/demo";

// Login - if OK, call the setLinks function

Ext.onReady( function() {
Ext.Ajax.request({
url: base + "/dhis-web-commons-security/login.action",
method: "POST",
params: { j_username: "portal", j_password: "Portal123" },
success: setLinks
});
});

function setLinks() {

// Referring to an existing chart through the id parameter, render to "chart1"


div

DHIS.getChart({ url: base, el: "chart1", id: "R0DVGvXDUNP" });

// Full chart configuration, render to "chart2" div

DHIS.getChart({
url: base,
el: "chart2",
type: "stackedBar",
columns: [ // Chart series
{dimension: "de", items: [{id: "YtbsuPPo010"}, {id: "l6byfWFUGaP"}]}
],
rows: [ // Chart categories
{dimension: "pe", items: [{id: "LAST_12_MONTHS"}]}
],
filters: [
{dimension: "ou", items: [{id: "USER_ORGUNIT"}]}
],
// All following options are optional
showData: false,
targetLineValue: 70,
baseLineValue: 20,
showTrendLine: true,
hideLegend: true,
title: "My chart title",
domainAxisTitle: "Periods",
rangeAxisTitle: "Percent"
});
}
</script>
</head>

<body>
<div id="chart1"></div>
<div id="chart2"></div>
</body>
</html>

Three files are included in the header section of the HTML document. The first two files are the Ext JS javascript
library (we use the DHIS 2 content delivery network in this case) and its stylesheet. The third file is the Visualizer
plug-in. Make sure the path is pointing to your DHIS server installation.

<link rel="stylesheet" type="text/css" href="http://dhis2-cdn.org/v215/ext/resources/


css/ext-plugin-gray.css" />
<script src="http://dhis2-cdn.org/v215/ext/ext-all.js"></script>
<script src="http://dhis2-cdn.org/v215/plugin/chart.js"></script>

To authenticate with the DHIS server we use the same approach as in the previous section. In the header of the HTML
document we include the following Javascript inside a script element. The setLinks method will be implemented later.
Make sure the base variable is pointing to your DHIS installation.

var base = "https://apps.dhis2.org/demo/";


Ext.onReady( function() {
Ext.Ajax.request({
url: base + "dhis-web-commons-security/login.action",
method: "POST",
params: { j_username: "portal", j_password: "Portal123" },
success: setLinks
});
});

Now let us have a look at the various options for the Visualizer plug-in. Two properties are required: el and url (please
refer to the table below). If you want to refer to pre-defined charts already made inside DHIS it is sufficient
to provide the additional id parameter. If you instead want to configure a chart dynamically you should omit the
id parameter and provide data dimensions inside a columns array (chart series), a rows array (chart categories) and
optionally a filters array.

A data dimension is defined as an object with a text property called dimension. This property accepts the following
values: in (indicator), de (data element), ds (data set), dc (data element operand), pe (period), ou (organisation unit) or
the id of any organisation unit group set or data element group set (can be found in the Web API). The data dimension
also has an array property called items which accepts objects with an id property.

To sum up, if you want to have e.g. "ANC 1 Coverage", "ANC 2 Coverage" and "ANC 3 Coverage" as series in your
chart you can make the following columns config:

columns: [{
dimension: "in", // could be "in", "de", "ds", "dc", "pe", "ou" or any dimension id
items: [
{id: "Uvn6LCg7dVU"}, // the id of ANC 1 Coverage
{id: "OdiHJayrsKo"}, // the id of ANC 2 Coverage
{id: "sB79w2hiLp8"} // the id of ANC 3 Coverage
]
}]

Table 1.38. Visualizer chart plug-in configuration

• el (string, required): Identifier of the HTML element to render the chart in your web page.
• url (string, required): Base URL of the DHIS server.
• id (string, optional): Identifier of a pre-defined chart (favorite) in DHIS.
• type (string, optional, default column): Chart type. Options: column | stackedcolumn | bar | stackedbar | line | area | pie.
• columns (array, required if no id is provided): Data dimensions to include in the chart as series.
• rows (array, required if no id is provided): Data dimensions to include in the chart as categories.
• filter (array, optional): Data dimensions to include in the chart as filters.
• showData (boolean, optional, default false): Whether to display data on the chart.
• showTrendLine (boolean, optional, default false): Whether to display trend line(s) on the chart.
• hideLegend (boolean, optional, default false): Whether to hide the chart legend.
• hideTitle (boolean, optional, default false): Whether to hide the chart title.
• targetLineValue (double, optional): Value of target line to display on the chart.
• targetLineLabel (string, optional): Label for the target line.
• baseLineValue (double, optional): Value of baseline to display on the chart.
• baseLineLabel (string, optional): Label for the baseline.
• domainAxisTitle (string, optional): Title for the domain axis.
• rangeAxisTitle (string, optional): Title for the range axis.
• width (integer, optional): Width of the chart.
• height (integer, optional): Height of the chart.

We continue by including one pre-defined chart and one dynamically configured chart in our HTML document. You can browse the
list of available charts using the Web API here: http://apps.dhis2.org/demo/api/charts.
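For instance, you could list the available chart favorites with curl; this is a sketch against the demo server, reusing
the admin:district credentials from the curl examples elsewhere in this chapter:

curl "https://apps.dhis2.org/demo/api/charts.json" -u admin:district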

function setLinks() {
DHIS.getChart({ url: base, el: "chart1", id: "R0DVGvXDUNP" });

DHIS.getChart({
url: base,
el: "chart2",
type: "stackedBar",
columns: [ // Chart series
{dimension: "de", items: [{id: "YtbsuPPo010"}, {id: "l6byfWFUGaP"}]}
],
rows: [ // Chart categories
{dimension: "pe", items: [{id: "LAST_12_MONTHS"}]}
],
filters: [
{dimension: "ou", items: [{id: "USER_ORGUNIT"}]}
],
// All following options are optional
showData: false,
targetLineValue: 70,
baseLineValue: 20,
showTrendLine: true,
hideLegend: true,
title: "My chart title",
domainAxisTitle: "Periods",
rangeAxisTitle: "Percent"
});
}

Finally we include some div elements in the body section of the HTML document with the identifiers referred to in
the plug-in Javascript.

<div id="chart1"></div>
<div id="chart2"></div>

To see a complete working example please visit http://apps.dhis2.org/portal/chart.html.

1.20.3. Embedding maps with the GIS map plug-in


In this example we will see how we can embed maps with data served from a DHIS back-end into a Web page. To
accomplish this we will use the GIS map plug-in. The plug-in is written in JavaScript and depends on the Ext JS library
only. A complete working example can be found at http://apps.dhis2.org/portal/map.html. Open the page in a web
browser and view the source to see how it is set up.

We start by having a look at what the complete HTML file could look like. This setup puts two maps in our web page.
The first one refers to an existing map. The second is configured inline.

<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="http://dhis2-cdn.org/v215/ext/resources/css/ext-plugin-gray.css" />
<script src="http://dhis2-cdn.org/v215/ext/ext-all.js"></script>
<script src="https://maps.google.com/maps/api/js?sensor=false"></script>
<script src="http://dhis2-cdn.org/v215/openlayers/OpenLayers.js"></script>
<script src="http://dhis2-cdn.org/v215/plugin/map.js"></script>

<script>
var base = "https://apps.dhis2.org/demo/";

// Login - if OK, call the setLinks function

Ext.onReady( function() {
  Ext.Ajax.request({
    url: base + "dhis-web-commons-security/login.action",
    method: "POST",
    params: { j_username: "portal", j_password: "Portal123" },
    success: setLinks
  });
});

function setLinks() {
  DHIS.getMap({ url: base, el: "map1", id: "ytkZY3ChM6J" });

  DHIS.getMap({
    url: base,
    el: "map2",
    mapViews: [{
      columns: [{dimension: "in", items: [{id: "Uvn6LCg7dVU"}]}], // data
      rows: [{dimension: "ou", items: [{id: "LEVEL-3"}, {id: "ImspTQPwCqd"}]}], // organisation units
      filters: [{dimension: "pe", items: [{id: "LAST_3_MONTHS"}]}], // period
      // All following options are optional
      classes: 7,
      colorLow: "02079c",
      colorHigh: "e5ecff",
      opacity: 0.9,
      legendSet: {id: "fqs276KXCXi"}
    }]
  });
}
</script>
</head>

<body>
<div id="map1"></div>
<div id="map2"></div>
</body>
</html>

Four files and Google Maps are included in the header section of the HTML document. The first two files are the Ext
JS JavaScript library (we use the DHIS 2 content delivery network in this case) and its stylesheet. The third file is the
OpenLayers JavaScript mapping framework (http://openlayers.org) and finally we include the GIS map plug-in. Make
sure the path points to your DHIS server installation.

<link rel="stylesheet" type="text/css" href="http://dhis2-cdn.org/v215/ext/resources/css/ext-plugin-gray.css" />
<script src="http://dhis2-cdn.org/v215/ext/ext-all.js"></script>
<script src="https://maps.google.com/maps/api/js?sensor=false"></script>
<script src="http://dhis2-cdn.org/v215/openlayers/OpenLayers.js"></script>
<script src="http://dhis2-cdn.org/v215/plugin/map.js"></script>

To authenticate with the DHIS server we use the same approach as in the previous section. In the header of the HTML
document we include the following Javascript inside a script element. The setLinks method will be implemented later.
Make sure the base variable is pointing to your DHIS installation.

Ext.onReady( function() {
Ext.Ajax.request({
url: base + "dhis-web-commons-security/login.action",
method: "POST",
params: { j_username: "portal", j_password: "Portal123" },
success: setLinks
});
});

Now let us have a look at the various options for the GIS plug-in. Two properties are required: el and url (please refer
to the table below). If you want to refer to pre-defined maps already made in the DHIS 2 GIS it is sufficient
to provide the additional id parameter. If you instead want to configure a map dynamically you should omit the id
parameter and provide mapViews (layers) instead. They should be configured with data dimensions inside a columns
array, a rows array and optionally a filters array.

A data dimension is defined as an object with a text property called dimension. This property accepts the following
values: in (indicator), de (data element), ds (data set), dc (data element operand), pe (period), ou (organisation unit) or
the id of any organisation unit group set or data element group set (can be found in the web api). The data dimension
also has an array property called items which accepts objects with an id property.

To sum up, if you want to have a layer with e.g. "ANC 1 Coverage" in your map you can make the following columns
config:

columns: [{
  dimension: "in", // could be "in", "de", "ds", "dc", "pe", "ou" or any dimension id
  items: [{id: "Uvn6LCg7dVU"}] // the id of ANC 1 Coverage
}]

Table 1.39. GIS map plug-in configuration

• el (string, required): Identifier of the HTML element to render the map in your web page.
• url (string, required): Base URL of the DHIS server.
• id (string, optional): Identifier of a pre-defined map (favorite) in DHIS.
• baseLayer (string/boolean, optional): Show background map. Options: 'gs'/'googlestreets' (default) | 'gh'/'googlehybrid' | 'osm'/'openstreetmap' | false/null/'none'/'off'.
• hideLegend (boolean, optional, default false): Hide the legend panel.
• mapViews (array, required if no id is provided): Array of layers.

If no id is provided you must add map view objects with the following config options:

Table 1.40. Map plug-in configuration

• layer (string, optional, default "thematic1"): The layer to which the map view content should be added. Options: "thematic1" | "thematic2" | "thematic3" | "thematic4" | "boundary" | "facility".
• columns (array, required): Indicator, data element, data operand or data set (only one will be used).
• rows (array, required): Organisation units (multiple allowed).
• filter (array, required): Period (only one will be used).
• classes (integer, optional, default 5): The number of automatic legend classes. Options: 1-7.
• method (integer, optional, default 2): Legend calculation method, where 2 = equal intervals and 3 = equal counts.
• colorLow (string, optional, default "ff0000" (red)): The color representing the first automatic legend class. Options: any hex color.
• colorHigh (string, optional, default "00ff00" (green)): The color representing the last automatic legend class. Options: any hex color.
• radiusLow (integer, optional, default 5): Only applies for facilities (points) - radius of the point with the lowest value.
• radiusHigh (integer, optional, default 15): Only applies for facilities (points) - radius of the point with the highest value.
• opacity (double, optional, default 0.8): Opacity/transparency of the layer content. Options: 0 - 1.
• legendSet (object, optional): Pre-defined legend set. Will override the automatic legend set.
• labels (boolean/object, optional, default false): Show labels on the map. Object properties: fontSize (integer), color (hex string), strong (boolean), italic (boolean).
• width (integer, optional): Width of the map.
• height (integer, optional): Height of the map.

We continue by adding one pre-defined and one dynamically configured map to our HTML document. You can browse
the list of available maps using the Web API here: http://apps.dhis2.org/demo/api/maps.

function setLinks() {
  DHIS.getMap({ url: base, el: "map1", id: "ytkZY3ChM6J" });

  DHIS.getMap({
    url: base,
    el: "map2",
    mapViews: [{
      columns: [{dimension: "in", items: [{id: "Uvn6LCg7dVU"}]}], // data
      rows: [{dimension: "ou", items: [{id: "LEVEL-3"}, {id: "ImspTQPwCqd"}]}], // organisation units
      filters: [{dimension: "pe", items: [{id: "LAST_3_MONTHS"}]}], // period
      // All following options are optional
      classes: 7,
      colorLow: "02079c",
      colorHigh: "e5ecff",
      opacity: 0.9,
      legendSet: {id: "fqs276KXCXi"}
    }]
  });
}

Finally we include some div elements in the body section of the HTML document with the identifiers referred to in
the plug-in Javascript.

<div id="map1"></div>
<div id="map2"></div>

To see a complete working example please visit http://apps.dhis2.org/portal/map.html.

1.20.4. Creating a chart carousel with the carousel plug-in


The chart plug-in also makes it possible to create a chart carousel which for instance can be used to create an attractive
front page on a Web portal. To use the carousel we need to import a few files in the head section of our HTML page:

<link rel="stylesheet" type="text/css" href="http://dhis2-cdn.org/v213/ext/resources/css/ext-plugin-gray.css" />
<link rel="stylesheet" type="text/css" href="https://apps.dhis2.org/demo/dhis-web-commons/javascripts/ext-ux/carousel/css/carousel.css" />
<script type="text/javascript" src="https://extjs-public.googlecode.com/svn/tags/extjs-4.0.7/release/ext-all.js"></script>
<script type="text/javascript" src="https://apps.dhis2.org/demo/dhis-web-commons/javascripts/ext-ux/carousel/Carousel.js"></script>
<script type="text/javascript" src="https://apps.dhis2.org/demo/dhis-web-commons/javascripts/plugin/plugin.js"></script>

The first file is the CSS stylesheet for the chart plug-in. The second file is the CSS stylesheet for the carousel widget.
The third file is the Ext JavaScript framework which this plug-in depends on. The fourth file is the carousel plug-in
JavaScript file. The fifth file is the chart plug-in JavaScript file. The paths in this example point to the DHIS 2 demo
site; make sure you update them to point to your own DHIS 2 installation.

Please refer to the section about the chart plug-in on how to do authentication.

To create a chart carousel we will first render the charts we want to include in the carousel using the method described
in the chart plug-in section. Then we create the chart carousel itself. The charts will be rendered into div elements
which all have a CSS class called chart. In the carousel configuration we can then define a selector expression which
refers to those div elements like this:

DHIS.getChart({ uid: 'R0DVGvXDUNP', el: 'chartA1', url: base });
DHIS.getChart({ uid: 'X0CPnV6uLjR', el: 'chartA2', url: base });
DHIS.getChart({ uid: 'j1gNXBgwKVm', el: 'chartA3', url: base });
DHIS.getChart({ uid: 'X7PqaXfevnL', el: 'chartA4', url: base });

new Ext.ux.carousel.Carousel( 'chartCarousel', {
  autoPlay: true,
  itemSelector: 'div.chart',
  interval: 5,
  showPlayButton: true
});

The first argument in the configuration is the id of the div element in which you want to render the carousel. The
autoPlay configuration option refers to whether we want the carousel to start when the user loads the Web page. The
interval option defines how many seconds each chart should be displayed. The showPlayButton defines whether we
want to render a button for the user to start and stop the carousel. Finally we need to insert the div elements in the
body of the HTML document:

<div id="chartCarousel">
  <div id="chartA1"></div>
  <div id="chartA2"></div>
  <div id="chartA3"></div>
  <div id="chartA4"></div>
</div>

To see a complete working example please visit http://apps.dhis2.org/portal/carousel.html.


1.21. SQL views


SQL views are useful for creating data views which may be more easily constructed with SQL compared to combining
multiple objects of the Web API. As an example, let's assume we have been asked to provide a view of all organisation
units with their names, parent names, organisation unit level and level name, and the coordinates listed in the database. The
view might look something like this:

SELECT ou.name as orgunit, par.name as parent, ou.coordinates, ous.level, oul.name
FROM organisationunit ou
INNER JOIN _orgunitstructure ous ON ou.organisationunitid = ous.organisationunitid
INNER JOIN organisationunit par ON ou.parentid = par.organisationunitid
INNER JOIN orgunitlevel oul ON ous.level = oul.level
WHERE ou.coordinates is not null
ORDER BY oul.level, par.name, ou.name;

We will use curl to first execute the view on the DHIS 2 server. This is essentially a materialization process, and
ensures that we have the most recent data available through the SQL view when it is retrieved from the server. You
can first look up the SQL view from the api/sqlViews resource, then POST using the following command:

curl "https://apps.dhis2.org/demo/api/sqlViews/dI68mLkP1wN/execute" -X POST -u admin:district -v

The next step in the process is the retrieval of the data. The basic structure of the URL is as follows:

http://{server}/api/sqlViews/{id}/data(.csv)

The {server} parameter should be replaced with your own server. The next part of the URL /api/sqlViews/
should be appended with the specific SQL view identifier. Append either data for XML data or data.csv for comma
delimited values. Supported response formats are json, xml, csv, xls, html and html+css. As an example, the following
command would retrieve CSV data for the SQL view defined above.

curl "https://apps.dhis2.org/demo/api/sqlViews/dI68mLkP1wN/data.csv" -u admin:district -v

1.21.1. Criteria
You can do simple filtering on the columns in the result set by appending criteria query parameters to the URL, using
the column names and filter values separated by colons as parameter values, on the following format:

/api/sqlViews/{id}/data?criteria=col1:value1&criteria=col2:value2

As an example, to filter the SQL view result set above to only return organisation units at level 4 you can use the
following URL:

https://apps.dhis2.org/demo/api/sqlViews/dI68mLkP1wN/data.csv?criteria=level:4
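The criteria parameters can be combined with basic authentication in curl; a sketch against the demo server, using
the same credentials as above:

curl "https://apps.dhis2.org/demo/api/sqlViews/dI68mLkP1wN/data.csv?criteria=level:4" -u admin:district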

1.21.2. Variables
SQL views support variable substitution. Variable substitution is only available for SQL views of type query, meaning
SQL views which are not created in the database but simply executed as regular SQL queries. Variables can be inserted
directly into the SQL query and must be on this format:

${variable-key}

As an example, an SQL query that retrieves all data elements of a given value type where the value type is defined
through a variable can look like this:

select * from dataelement where valuetype = '${valueType}';

These variables can then be supplied as part of the URL when requested through the sqlViews Web API resource.
Variables can be supplied on the following format:


/api/sqlViews/{id}/data?var=key1:value1&var=key2:value2

An example query corresponding to the example above can look like this:

/api/sqlViews/dI68mLkP1wN/data.json?var=valueType:int

The valueType variable will be substituted with the int value, and the query will return data elements with int value type.
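Put together as a complete request, a sketch against the demo server (using the same SQL view identifier as above)
could look like this:

curl "https://apps.dhis2.org/demo/api/sqlViews/dI68mLkP1wN/data.json?var=valueType:int" -u admin:district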

1.22. Dashboard
The dashboard is designed to give you an overview of multiple analytical items like maps, charts, pivot tables and
reports which together can provide a comprehensive overview of your data. Dashboards are available in the Web
API through the dashboards resource. A dashboard contains a list of dashboard items. An item can represent a single
resource, like a chart, map or report table, or represent a list of links to analytical resources, like reports, resources,
tabular reports and users. A dashboard item can contain up to eight links. Typically, a dashboard client could choose to
visualize the single-object items directly in a user interface, while rendering the multi-object items as clickable links.

1.22.1. Browsing dashboards


To get a list of your dashboards with basic information including identifier, name and link in JSON format you can
make a GET request to the following URL:

/api/dashboards.json
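With curl, such a request against the demo server could look like this (a sketch, using the admin:district credentials
from the earlier curl examples):

curl "https://apps.dhis2.org/demo/api/dashboards.json" -u admin:district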

The dashboards resource will provide a list of dashboards. Remember that the dashboard object is shared so the list
will be affected by the currently authenticated user. You can retrieve more information about a specific dashboard by
following its link, similar to this:

api/dashboards/vQFhmLJU5sK.json

A dashboard contains information like name and creation date and an array of dashboard items. The response in JSON
format will look similar to this response (certain information has been removed for the sake of brevity).

{
"lastUpdated" : "2013-10-15T18:17:34.084+0000",
"id" : "vQFhmLJU5sK",
"created" : "2013-09-08T20:55:58.060+0000",
"name" : "Mother and Child Health",
"href" : "https://apps.dhis2.org/demo/api/dashboards/vQFhmLJU5sK",
"publicAccess" : "--------",
"externalAccess" : false,
"itemCount" : 17,
"displayName" : "Mother and Child Health",
"access" : {
"update" : true,
"externalize" : true,
"delete" : true,
"write" : true,
"read" : true,
"manage" : true
},
"user" : {
"id" : "xE7jOejl9FI",
"name" : "John Traore",
"created" : "2013-04-18T15:15:08.407+0000",
"lastUpdated" : "2014-12-05T03:50:04.148+0000",
"href" : "https://apps.dhis2.org/demo/api/users/xE7jOejl9FI"
},
"dashboardItems" : [{
"id" : "bu1IAnPFa9H",
"created" : "2013-09-09T12:12:58.095+0000",

68
Web API Searching dashboards

"lastUpdated" : "2013-09-09T12:12:58.095+0000"
}, {
"id" : "ppFEJmWWDa1",
"created" : "2013-09-10T13:57:02.480+0000",
"lastUpdated" : "2013-09-10T13:57:02.480+0000"
}
],
"userGroupAccesses" : []
}

A more tailored response can be obtained by specifying specific fields in the request. An example is provided below,
which would return more detailed information about each object on a user's dashboard.

api/dashboards/vQFhmLJU5sK/?fields=:all,dashboardItems[:all]

1.22.2. Searching dashboards


When setting up a dashboard it is convenient from a consumer point of view to be able to search for various analytical
resources using the /dashboards/q resource. This resource lets you search for matches on the name property of the
following objects: charts, maps, report tables, users, reports and resources. You can do a search by making a GET
request on the following resource URL pattern, where my-query should be replaced by the preferred search query:

api/dashboards/q/my-query.json
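As a sketch, searching the demo server for the term "anc" (the query term here is just an example) could look like
this with curl:

curl "https://apps.dhis2.org/demo/api/dashboards/q/anc.json" -u admin:district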

JSON and XML response formats are currently supported. The response in JSON format will contain references to
matching resources and counts of how many matches were found in total and for each type of resource. It will look
similar to this:

{
"charts": [{
"name": "ANC: 1-3 dropout rate Yearly",
"id": "LW0O27b7TdD"
}, {
"name": "ANC: 1 and 3 coverage Yearly",
"id": "UlfTKWZWV4u"
}, {
"name": "ANC: 1st and 3rd trends Monthly",
"id": "gnROK20DfAA"
}],
"maps": [{
"name": "ANC: 1st visit at facility (fixed) 2013",
"id": "YOEGBvxjAY0"
}, {
"name": "ANC: 3rd visit coverage 2014 by district",
"id": "ytkZY3ChM6J"
}],
"reportTables": [{
"name": "ANC: ANC 1 Visits Cumulative Numbers",
"id": "tWg9OiyV7mu"
}],
"reports": [{
"name": "ANC: 1st Visit Cumulative Chart",
"id": "Kvg1AhYHM8Q"
}, {
"name": "ANC: Coverages This Year",
"id": "qYVNH1wkZR0"
}],
"searchCount": 8,
"chartCount": 3,
"mapCount": 2,
"reportTableCount": 1,
"reportCount": 2,

69
Web API Creating, updating and removing dashboards

"userCount": 0,
"patientTabularReportCount": 0,
"resourceCount": 0
}

1.22.3. Creating, updating and removing dashboards


Creating, updating and deleting dashboards follow standard REST semantics. In order to create a new dashboard you
can make a POST request to the /api/dashboards resource. From a consumer perspective it might be convenient to first
create a dashboard and later add items to it. JSON and XML formats are supported for the request payload. To create
a dashboard with the name "My dashboard" you can use a payload in JSON like this:

{
"name": "My dashboard"
}
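As a sketch, issuing this request with curl against the demo server could look like this:

curl "https://apps.dhis2.org/demo/api/dashboards" -X POST -H "Content-Type: application/json" -d '{"name": "My dashboard"}' -u admin:district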

To update, e.g. rename, a dashboard, you can make a PUT request with a similar request payload to the same api/dashboards
resource.

To remove a dashboard, you can make a DELETE request to the specific dashboard resource similar to this:

api/dashboards/vQFhmLJU5sK
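As a sketch, renaming and then removing the dashboard above could be done with curl like this against the demo server:

curl "https://apps.dhis2.org/demo/api/dashboards/vQFhmLJU5sK" -X PUT -H "Content-Type: application/json" -d '{"name": "My renamed dashboard"}' -u admin:district

curl "https://apps.dhis2.org/demo/api/dashboards/vQFhmLJU5sK" -X DELETE -u admin:district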

1.22.4. Adding, moving and removing dashboard items and content


In order to add dashboard items a consumer can use the /api/dashboards/<dashboard-id>/items/content resource,
where <dashboard-id> should be replaced by the relevant dashboard identifier. The request must use the POST method.
The URL syntax and parameters are described in detail in the following table.

Table 1.41. Items content parameters

• type: Type of the resource to be represented by the dashboard item. Options: chart | map | reportTable | users | reports | reportTables | resources | patientTabularReports.
• id: Identifier of the resource to be represented by the dashboard item. Options: resource identifier.

A POST request URL for adding a chart to a specific dashboard could look like this, where the last id query parameter
value is the chart resource identifier:

/api/dashboards/vQFhmLJU5sK/items/content?type=chart&id=LW0O27b7TdD
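With curl this could look like the following sketch against the demo server:

curl "https://apps.dhis2.org/demo/api/dashboards/vQFhmLJU5sK/items/content?type=chart&id=LW0O27b7TdD" -X POST -u admin:district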

When adding a resource of type map, chart or report table, the API will create and add a new item to the dashboard.
When adding a resource of type users, reports, report tables or resources, the API will try to add the resource to an
existing dashboard item of the same type. If no item of the same type exists, or if all existing items of the same type
already have eight resources associated with them, the API will create a new dashboard item and add the resource to it.

In order to move a dashboard item to a new position within the list of items in a dashboard, a consumer can make
a POST request to the following resource URL, where <dashboard-id> should be replaced by the identifier of the
dashboard, <item-id> should be replaced by the identifier of the dashboard item and <index> should be replaced by
the new position of the item in the dashboard, where the index is zero-based:

/api/dashboards/<dashboard-id>/items/<item-id>/position/<index>
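As an illustration, moving the dashboard item bu1IAnPFa9H from the example response above to the first position
(index 0) could be done with curl like this (a sketch):

curl "https://apps.dhis2.org/demo/api/dashboards/vQFhmLJU5sK/items/bu1IAnPFa9H/position/0" -X POST -u admin:district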

To remove a dashboard item completely from a specific dashboard a consumer can make a DELETE request to the
below resource URL, where <dashboard-id> should be replaced by the identifier of the dashboard and <item-id> should
be replaced by the identifier of the dashboard item. The dashboard item identifiers can be retrieved through a GET
request to the dashboard resource URL.


/api/dashboards/<dashboard-id>/items/<item-id>

To remove a specific content resource within a dashboard item a consumer can make a DELETE request to the below
resource URL, where <content-resource-id> should be replaced by the identifier of a resource associated with the
dashboard item; e.g. the identifier of a report or a user. For instance, this can be used to remove a single report from a
dashboard item of type reports, as opposed to removing the dashboard item completely:

/api/dashboards/<dashboard-id>/items/<item-id>/content/<content-resource-id>

1.23. Analytics
To access analytical, aggregated data in DHIS 2 you can work with the analytics resource. The analytics resource is
powerful as it lets you query and retrieve data aggregated along all available data dimensions. For instance, you can ask
the analytics resource to provide the aggregated data values for a set of data elements, periods and organisation units.
Also, you can retrieve the aggregated data for a combination of any number of dimensions based on data elements
and organisation unit group sets.

DHIS 2 features a multi-dimensional data model with several fixed and dynamic data dimensions. The fixed dimensions
are the data element, period (time) and organisation unit dimension. You can dynamically add dimensions through
categories, data element group sets and organisation unit group sets. The table below displays the available data
dimensions in DHIS 2. Each data dimension has a corresponding dimension identifier, and each dimension can have
a set of dimension items:

Table 1.42. Dimensions and dimension items

• Data elements, indicators and data set reporting rates (dimension id: dx): Data elements, indicators, data set identifiers, keyword DE_GROUP-<group-id>.
• Periods (time) (dimension id: pe): ISO periods and relative periods, see "date and period format".
• Organisation unit hierarchy (dimension id: ou): Organisation unit identifiers, and keywords USER_ORGUNIT, USER_ORGUNIT_CHILDREN, USER_ORGUNIT_GRANDCHILDREN, LEVEL-<level> and OU_GROUP-<group-id>.
• Category option combinations (dimension id: co): Not possible to define dimension items - all relevant items are returned.
• Categories (dimension id: <category id>): Category option identifiers (omit to get all items).
• Data element group sets (dimension id: <group set id>): Data element group identifiers (omit to get all items).
• Organisation unit group sets (dimension id: <group set id>): Organisation unit group identifiers (omit to get all items).
• Category option group sets (dimension id: <group set id>): Category option group identifiers (omit to get all items).

It is not necessary to be aware of which objects are used for the various dynamic dimensions when designing analytics
queries. You can get a complete list of dynamic dimensions by visiting this URL in the Web API:

api/dimensions

The base URL to the analytics resource is api/analytics. To request specific dimensions and dimension items you can
use a query string on the following format, where dim-id and dim-item should be substituted with real values:

api/analytics?dimension=dim-id:dim-item;dim-item&dimension=dim-id:dim-item;dim-item


As illustrated above, the dimension identifier is followed by a colon while the dimension items are separated by semi-
colons. As an example, a query for two data elements, two periods and two organisation units can be done with the
following URL:

api/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&dimension=pe:2014Q1;2014Q2&dimension=ou:O6uvpzGd5pu;lc3eMKXaEfw

To query for data broken down by category option combinations instead of data element totals you can include the
category dimension in the query string, for instance like this:

api/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&dimension=co&dimension=pe:201401&dimension=ou:O6uvpzGd5pu;lc3eMKXaEfw

When selecting data elements you can also select all data elements in a group as items by using the DE_GROUP-
<id> syntax:

api/analytics?dimension=dx:DE_GROUP-h9cuJOkOwY2
&dimension=pe:201401&dimension=ou:O6uvpzGd5pu

To query for organisation unit group sets and data elements you can use the following URL - notice how the group set
identifier is used as dimension identifier and the groups as dimension items:

api/analytics?dimension=Bpx0589u8y0:oRVt7g429ZO;MAs88nJc9nL
&dimension=pe:2014&dimension=ou:ImspTQPwCqd

To query for data elements and categories you can use this URL - use the category identifier as dimension identifier
and the category options as dimension items:

api/analytics?dimension=dx:s46m5MS0hxu;fClA2Erf6IO&dimension=pe:2014
&dimension=YNZyaJHiHYq:btOyqprQ9e8;GEqzEKCHoGA&filter=ou:ImspTQPwCqd

To query using relative periods and organisation units associated with the current user you can use a URL like this:

api/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&dimension=pe:LAST_12_MONTHS&dimension=ou:USER_ORGUNIT

When selecting organisation units for a dimension you can select an entire level, optionally constrained by any number
of boundary organisation units, with the LEVEL-<level> syntax. Boundary refers to a top node in a sub-hierarchy,
meaning that all organisation units at the given level below the given boundary organisation unit in the hierarchy will
be included in the response. Boundary organisation units are provided as regular organisation unit dimension items:

api/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:2014&dimension=ou:LEVEL-3

api/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:2014
&dimension=ou:LEVEL-3;LEVEL-4;O6uvpzGd5pu;lc3eMKXaEf

When selecting organisation units you can also select all organisation units in an organisation unit group to be
included as dimension items using the OU_GROUP-<id> syntax. The organisation units in the groups can optionally
be constrained by any number of boundary organisation units. Both the level and the group items can be repeated any
number of times:

api/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:2014
&dimension=ou:OU_GROUP-w0gFTTmsUcF;O6uvpzGd5pu

api/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:2014
&dimension=ou:OU_GROUP-w0gFTTmsUcF;OU_GROUP-EYbopBOJWsW;O6uvpzGd5pu;lc3eMKXaEf

A few things to be aware of when using the analytics resource are listed below.
• Data elements, indicators and data sets are part of a common data dimension, identified as "dx". This means that you
can use any of data elements, indicators and data set identifiers together with the "dx" dimension identifier in a query.
• For the data element group set and organisation unit group set dimensions, all dimension items will be used in the
query if no dimension items are given for the dimension.
• For the period dimension, the dimension items are ISO period identifiers and/or relative periods. Please refer to the
section above called "Date and period format" for the period format and available relative periods.


• For the organisation unit dimension you can specify the items to be the organisation unit or sub-units of the
organisation unit associated with the user currently authenticated for the request using the keys USER_ORGUNIT
or USER_ORGUNIT_CHILDREN as items, respectively. You can also specify organisation unit identifiers directly,
or a combination of both.
• For the organisation unit dimension you can specify the organisation hierarchy level and the boundary unit to use
for the request on the format LEVEL-<level>-<boundary-id>; as an example LEVEL-3-ImspTQPwCqd implies all
organisation units below the given boundary unit at level 3 in the hierarchy.
• For the organisation unit dimension the dimension items are the organisation units and their sub-hierarchy - data
will be aggregated for all organisation units below the given organisation unit in the hierarchy.
• You cannot specify dimension items for the category option combination dimension. Instead the response will
contain the items which are linked to the data values.

1.23.1. Request query parameters


The analytics resource lets you specify a range of query parameters:

Table 1.43. Query parameters

• dimension (required): Dimensions to be retrieved, repeated for each. Options: any dimension.
• filter (optional): Filters to apply to the query, repeated for each. Options: any dimension.
• aggregationType (optional): Aggregation type to use in the aggregation process. Options: SUM | AVERAGE_INT | AVERAGE_INT_DISAGGREGATION | AVERAGE_BOOL | COUNT | STDDEV | VARIANCE.
• measureCriteria (optional): Filters for the data/measures. Options: EQ | GT | GE | LT | LE.
• skipMeta (optional): Exclude the meta-data part of the response (improves performance). Options: false | true.
• skipRounding (optional): Skip rounding of data values, i.e. provide full precision. Options: false | true.
• hierarchyMeta (optional): Include names of organisation unit ancestors and hierarchy paths of organisation units in the meta-data. Options: false | true.
• ignoreLimit (optional): Ignore the limit of max 50 000 records in the response - use with care. Options: false | true.
• tableLayout (optional): Use plain data source or table layout for the response. Options: false | true.
• hideEmptyRows (optional): Hide empty rows in the response, applicable when table layout is true. Options: false | true.
• showHierarchy (optional): Display the full org unit hierarchy path together with the org unit name. Options: false | true.
• displayProperty (optional): Property to display for meta-data. Options: NAME | SHORTNAME.
• columns (optional): Dimensions to use as columns for a table layout query. Options: any dimension (must be a query dimension).
• rows (optional): Dimensions to use as rows for a table layout query. Options: any dimension (must be a query dimension).

The dimension query parameter defines which dimensions should be included in the analytics query. Any number
of dimensions can be specified. The dimension parameter should be repeated for each dimension to include in the
query response. The query response can potentially contain aggregated values for all combinations of the specified
dimension items.

The filter parameter defines which dimensions should be used as filters for the data retrieved in the analytics query.
Any number of filters can be specified. The filter parameter should be repeated for each filter to use in the query. A
filter differs from a dimension in that the filter dimensions will not be part of the query response content, and that the
aggregated values in the response will be collapsed on the filter dimensions. In other words, the data in the response
will be aggregated on the filter dimensions, but the filters will not be included as dimensions in the actual response. As
an example, to query for certain data elements filtered by the periods and organisation units you can use the following
URL:

api/analytics?
dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU&filter=pe:2014Q1;2014Q2&filter=ou:O6uvpzGd5pu;lc3eMKXaEfw

The aggregationType query parameter lets you define which aggregation operator should be used for the query. By
default the aggregation operator defined for data elements included in the query will be used. If your query does not
contain any data elements, but does include data element groups, the aggregation operator of the first data element in
the first group will be used. The order of groups and data elements is undefined. This query parameter allows you to
override the default and specify a specific aggregation operator. As an example you can set the aggregation operator
to "count" with the following URL:

api/analytics?
dimension=dx:fbfJHSPpUQD&dimension=pe:2014Q1&dimension=ou:O6uvpzGd5pu&aggregationType=COUNT

The measureCriteria query parameter lets you filter out ranges of data records to return. You can instruct the system
to return only records where the aggregated data value is equal, greater than, greater or equal, less than or less or equal
to certain values. You can specify any number of criteria on the following format, where criteria and value should
be substituted with real values:

api/analytics?measureCriteria=criteria:value;criteria:value

As an example, the following query will return only records where the data value is greater or equal to 6500 and less
than 33000:

api/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU&dimension=pe:2014
&dimension=ou:O6uvpzGd5pu;lc3eMKXaEfw&measureCriteria=GE:6500;LT:33000

In order to have the analytics resource generate the data in the shape of a ready-made table, you can provide the
tableLayout parameter with true as value. Instead of generating a plain, normalized data source, the analytics resource
will now generate the data in table layout. You can use the columns and rows parameters with dimension identifiers
separated by semi-colons as values to indicate which ones to use as table columns and rows. The columns and rows
dimensions must be present as a data dimension in the query (not a filter). Such a request can look like this:

api/analytics.html?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU&dimension=pe:2014Q1;2014Q2
&dimension=ou:O6uvpzGd5pu&tableLayout=true&columns=dx;ou&rows=pe

1.23.2. Response formats


The analytics response containing aggregate data can be returned in various representation formats. As usual, you can
indicate interest in a specific format by appending a file extension to the URL, through the Accept HTTP header or
through the format query parameter. The default format is JSON. The available formats and content-types are listed
below.
• json (application/json)
• jsonp (application/javascript)
• xml (application/xml)
• csv (application/csv)
• html (text/html)
• html+css
• xls (application/vnd.ms-excel)


As an example, to request an analytics response in XML format you can use the following URL:

api/analytics.xml?dimension=dx:fbfJHSPpUQD
&dimension=pe:2014&dimension=ou:O6uvpzGd5pu;lc3eMKXaEfw

The analytics responses must be retrieved using the HTTP GET method. This allows for direct linking to analytics
responses from Web pages as well as other HTTP-enabled clients. To do functional testing we can use the cURL
library. By executing this command against the demo database you will get an analytics response in JSON format:

curl "apps.dhis2.org/demo/api/analytics.json?
dimension=dx:eTDtyyaSA7f;FbKK4ofIv5R&dimension=pe:2014Q1;2014Q2&filter=ou:ImspTQPwCqd"
-u admin:district

The JSON response will look like this:

{
"headers": [
{
"name": "dx",
"column": "Data",
"meta": true,
"type": "java.lang.String"
},
{
"name": "pe",
"column": "Period",
"meta": true,
"type": "java.lang.String"
},
{
"name": "value",
"column": "Value",
"meta": false,
"type": "java.lang.Double"
}
],
"height": 4,
"metaData": {
"pe": [
"2014Q1",
"2014Q2"
],
"ou": [
"ImspTQPwCqd"
],
"names": {
"2014Q1": "Jan to Mar 2014",
"2014Q2": "Apr to Jun 2014",
"FbKK4ofIv5R": "Measles Coverage <1 y",
"ImspTQPwCqd": "Sierra Leone",
"eTDtyyaSA7f": "Fully Immunized Coverage"
}
},
"rows": [
[
"eTDtyyaSA7f",
"2014Q2",
"81.1"
],
[
"eTDtyyaSA7f",
"2014Q1",
"74.7"
],
[
"FbKK4ofIv5R",
"2014Q2",
"88.9"
],
[
"FbKK4ofIv5R",
"2014Q1",
"84.0"
]
],
"width": 3
}

The response represents a table of dimensional data. The headers array gives an overview of which columns are
included in the table and what the columns contain. The column property shows the column dimension identifier, or if
the column contains measures, the word "Value". The meta property is true if the column contains dimension items or
false if the column contains a measure (aggregated data values). The name property is similar to the column property,
except it displays "value" in case the column contains a measure. The type property indicates the Java class type of
the column values.

The height and width properties indicate how many data columns and rows are contained in the response, respectively.

The metaData periods property contains a unique, ordered array of the periods included in the response. The metaData
ou property contains an array of the identifiers of organisation units included in the response. The metaData names
property contains a mapping between the identifiers used in the data response and the names of the objects they
represent. It can be used by clients to substitute the identifiers within the data response with names in order to give
a more meaningful view of the data table.

The rows array contains the dimensional data table. It contains columns with dimension items (object or period
identifiers) and a column with aggregated data values. The example response above has a data/indicator column, a
period column and a value column. The first column contains indicator identifiers, the second contains ISO period
identifiers and the third contains aggregated data values.

1.23.3. Constraints
There are several constraints on the input you can provide to the analytics resource.
• At least one dimension must be specified in a query.
• Dimensions cannot be specified as dimension and filter simultaneously.
• At least one period must be specified as dimension or filter.
• Indicators, data sets and categories cannot be specified as filters.
• Data element group sets cannot be specified together with data sets.
• Categories can only be specified together with data elements, not indicators or data sets.
• A dimension cannot be specified more than once.
• Fixed dimensions ("dx", "pe", "ou") must have at least one option if included in a query.
• A table cannot contain more than 50 000 cells by default, this can be configured under system settings.

When a query request violates any of these constraints the server will return a response with status code 409 and
content-type "text/plain" together with a textual description of the problem.

1.24. Event analytics


The event analytics API lets you query events captured in DHIS 2. This resource lets you retrieve events based on
a program and optionally a program stage, and lets you retrieve and filter events on any event dimensions. Event
dimensions include data elements, attributes, organisation units and periods. The query analytics resource will simply
return events matching a set of criteria and does not perform any aggregation. The event dimensions are listed in the
table below.


Table 1.44. Event dimensions

• Data elements (dimension id: <id>): Data element identifiers.
• Attributes (dimension id: <id>): Attribute identifiers.
• Periods (dimension id: pe): ISO periods and relative periods, see "date and period format".
• Organisation units (dimension id: ou): Organisation unit identifiers.
• Organisation unit group sets (dimension id: <id>): Organisation unit group set identifiers.

1.24.1. Request query parameters


The analytics event API lets you specify a range of query parameters.

Table 1.45. Query parameters for both event query and aggregate analytics

• program (required): Program identifier. Options: any program identifier.
• stage (optional): Program stage identifier. Options: any program stage identifier.
• startDate (required): Start date for events. Format: yyyy-MM-dd.
• endDate (required): End date for events. Format: yyyy-MM-dd.
• dimension (required): Dimension identifier including data elements, attributes, periods, organisation units and organisation unit group sets. The parameter can be repeated any number of times. Item filters can be applied to a dimension on the format <item-id>:<operator>:<filter>. Filter values are case-insensitive. Operators can be EQ | GT | GE | LT | LE | NE | LIKE | IN.
• filter (optional): Dimension identifier including data elements, attributes, periods, organisation units and organisation unit group sets. The parameter can be repeated any number of times. Item filters can be applied to a dimension on the format <item-id>:<operator>:<filter>. Filter values are case-insensitive.
• hierarchyMeta (optional): Include names of organisation unit ancestors and hierarchy paths of organisation units in the meta-data. Options: false | true.

Table 1.46. Query parameters for event query analytics only

• ouMode (optional): The mode of selecting organisation units. Default is DESCENDANTS, meaning all sub-units in the hierarchy. CHILDREN refers to immediate children in the hierarchy; SELECTED refers to the selected organisation units only. Options: DESCENDANTS, CHILDREN, SELECTED.
• asc (optional): Dimensions to be sorted ascending; can reference event date, org unit name and code and any item identifiers. Options: EVENTDATE | OUNAME | OUCODE | item identifier.
• desc (optional): Dimensions to be sorted descending; can reference event date, org unit name and code and any item identifiers. Options: EVENTDATE | OUNAME | OUCODE | item identifier.
• coordinatesOnly (optional): Whether to only return events which have coordinates. Options: false | true.
• page (optional): The page number. Default page is 1. Options: numeric positive value.
• pageSize (optional): The page size. Default size is 50 items per page. Options: numeric zero or positive value.

Table 1.47. Query parameters for aggregate event analytics only

• value (optional): Value dimension identifier. Can be a data element or an attribute which must be of numeric value type. Options: data element or attribute identifier.
• aggregationType (optional): Aggregation type for the value dimension. Default is AVERAGE. Options: AVERAGE | SUM | COUNT | STDDEV | VARIANCE | MIN | MAX.
• displayProperty (optional): Property to display for meta-data. Options: NAME | SHORTNAME.
• sortOrder (optional): Sort the records on the value column in ascending or descending order. Options: ASC | DESC.
• limit (optional): The maximum number of records to return. Cannot be larger than 10 000. Options: numeric positive value.
• outputType (optional): Specify output type for analytical data which can be events, enrollments or tracked entity instances. The two last options apply to programs with registration only. Options: EVENT | ENROLLMENT | TRACKED_ENTITY_INSTANCE.
• skipMeta (optional): Exclude the meta data part of the response (improves performance). Options: false | true.
• skipRounding (optional): Skip rounding of aggregate data values. Options: false | true.

1.24.2. Event query analytics


The events/query resource lets you query for captured events. This resource does not perform any aggregation, rather
it lets you query and filter for information about events. You can specify any number of dimensions and any number
of filters in a query. Dimension item identifiers can refer to any of data elements, person attributes, person identifiers,
fixed and relative periods and organisation units. Dimensions can optionally have a query operator and a filter. Event
queries should be on the format described below.

api/analytics/events/query/<program-id>?startDate=yyyy-MM-dd&endDate=yyyy-MM-dd
&dimension=ou:<ou-id>;<ou-id>&dimension=<item-id>&dimension=<item-
id>:<operator>:<filter>

For example, to retrieve events from the "Inpatient morbidity and mortality" program between January and October
2014, where the "Gender" and "Age" data elements are included and the "Age" dimension is filtered on "18", you can
use the following query:

api/analytics/events/query/eBAyeGv0exc?startDate=2014-01-01&endDate=2014-10-31
&dimension=ou:O6uvpzGd5pu;fdc6uOvgoji&dimension=oZg33kd9taw&dimension=qrur9Dvnyt5:EQ:18


To retrieve events for the "Birth" program stage of the "Child programme" program between March and December
2014, where the "Weight" data element is included and filtered for values larger than 2000, you can use:

api/analytics/events/query/IpHINAT79UW?
stage=A03MvHHogjR&startDate=2014-03-01&endDate=2014-12-31
&dimension=ou:O6uvpzGd5pu&dimension=UXz7xuGCEhU:GT:2000

Sorting can be applied to the query for the event date of the event and any dimensions. To sort descending on the event
date and ascending on the "Age" data element dimension you can use:

api/analytics/events/query/eBAyeGv0exc?startDate=2014-01-01&endDate=2014-10-31
&dimension=ou:O6uvpzGd5pu&dimension=qrur9Dvnyt5&desc=EVENTDATE&asc=qrur9Dvnyt5

Paging can be applied to the query by specifying the page number and the page size parameters. If page number is
specified but page size is not, a page size of 50 will be used. If page size is specified but page number is not, a page
number of 1 will be used. To get the third page of the response with a page size of 20 you can use a query like this:

api/analytics/events/query/eBAyeGv0exc?startDate=2014-01-01&endDate=2014-10-31
&dimension=ou:O6uvpzGd5pu&dimension=qrur9Dvnyt5&page=3&pageSize=20
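As with the aggregate analytics resource, these queries can be issued directly with curl; a sketch against the demo
server, using the admin:district credentials from earlier examples:

curl "https://apps.dhis2.org/demo/api/analytics/events/query/eBAyeGv0exc.json?startDate=2014-01-01&endDate=2014-10-31&dimension=ou:O6uvpzGd5pu&dimension=qrur9Dvnyt5" -u admin:district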

1.24.2.1. Filtering
Filters can be applied to data elements, person attributes and person identifiers. The filtering is done through the query
parameter value on the following format:

&dimension=<item-id>:<operator>:<filter-value>

As an example, you can filter the "Weight" data element for values greater than 2000 and lower than 4000 like this:

&dimension=UXz7xuGCEhU:GT:2000&dimension=UXz7xuGCEhU:LT:4000

You can filter the "Age" data element for multiple, specific ages using the IN operator like this:

&dimension=qrur9Dvnyt5:IN:18;19;20

You can specify multiple filters for a given item by repeating the operator and filter components:

&dimension=qrur9Dvnyt5:GT:5:LT:15

The available operators are listed below.

Table 1.48. Filter operators

Operator Description
EQ Equal to
GT Greater than
GE Greater than or equal to
LT Less than
LE Less than or equal to
NE Not equal to
LIKE Like (free text match)
IN Equal to one of multiple values separated by ";"

1.24.2.2. Ranges / legend sets


For aggregate queries you can specify a range / legend set for numeric data element and attribute dimensions. The
purpose is to group the numeric values into ranges. As an example, instead of generating data for an "Age" data element
for distinct years, you can group the information into age groups. To achieve this, the data element or attribute must
be associated with the legend set. The format is described below:

?dimension=<item-id>-<legend-set-id>


An example looks like this:

api/analytics/events/aggregate/eBAyeGv0exc.json?
stage=Zj7UnCAulEk&dimension=qrur9Dvnyt5-Yf6UHoPkdS6
&dimension=ou:ImspTQPwCqd&dimension=pe:LAST_12_MONTHS

1.24.2.3. Response formats

The default response representation format is JSON. Requests must use the HTTP GET method. The following
response formats are supported.
• json (application/json)
• jsonp (application/javascript)
• xls (application/vnd.ms-excel)

As an example, to get a response in Excel format you can use a file extension in the request URL like this:

api/analytics/events/query/eBAyeGv0exc.xls?startDate=2014-01-01&endDate=2014-10-31
&dimension=ou:O6uvpzGd5pu&dimension=oZg33kd9taw&dimension=qrur9Dvnyt5

You can set the hierarchyMeta query parameter to true in order to include names of all ancestor organisation units in
the meta-section of the response:

api/analytics/events/query/eBAyeGv0exc?startDate=2014-01-01&endDate=2014-10-31
&dimension=ou:YuQRtpLP10I&dimension=qrur9Dvnyt5:EQ:50&hierarchyMeta=true

The default response JSON format will look similar to this:

{
"headers": [
{
"name": "psi",
"column": "Event",
"type": "java.lang.String",
"hidden": false,
"meta": false
},
{
"name": "ps",
"column": "Program stage",
"type": "java.lang.String",
"hidden": false,
"meta": false
},
{
"name": "eventdate",
"column": "Event date",
"type": "java.lang.String",
"hidden": false,
"meta": false
},
{
"name": "coordinates",
"column": "Coordinates",
"type": "java.lang.String",
"hidden": false,
"meta": false
},
{
"name": "ouname",
"column": "Organisation unit name",
"type": "java.lang.String",
"hidden": false,

80
Web API Event query analytics

"meta": false
},
{
"name": "oucode",
"column": "Organisation unit code",
"type": "java.lang.String",
"hidden": false,
"meta": false
},
{
"name": "ou",
"column": "Organisation unit",
"type": "java.lang.String",
"hidden": false,
"meta": false
},
{
"name": "oZg33kd9taw",
"column": "Gender",
"type": "java.lang.String",
"hidden": false,
"meta": false
},
{
"name": "qrur9Dvnyt5",
"column": "Age",
"type": "java.lang.String",
"hidden": false,
"meta": false
} ],
"metaData": {
"names": {
"qrur9Dvnyt5": "Age",
"eBAyeGv0exc": "Inpatient morbidity and mortality",
"ImspTQPwCqd": "Sierra Leone",
"O6uvpzGd5pu": "Bo",
"YuQRtpLP10I": "Badjia",
"oZg33kd9taw": "Gender"
},
"ouHierarchy": {
"YuQRtpLP10I": "/ImspTQPwCqd/O6uvpzGd5pu"
}
},
"width": 8,
"height": 25,
"rows": [
["yx9IDINf82o", "Zj7UnCAulEk", "2014-08-05", "[5.12, 1.23]", "Ngelehun CHC",
"OU_559", "YuQRtpLP10I", "Female", "50"],
["IPNa7AsCyFt", "Zj7UnCAulEk", "2014-06-12", "[5.22, 1.43]", "Ngelehun CHC",
"OU_559", "YuQRtpLP10I", "Female", "50"],
["ZY9JL9dkhD2", "Zj7UnCAulEk", "2014-06-15", "[5.42, 1.33]", "Ngelehun CHC",
"OU_559", "YuQRtpLP10I", "Female", "50"],
["MYvh4WAUdWt", "Zj7UnCAulEk", "2014-06-16", "[5.32, 1.53]", "Ngelehun CHC",
"OU_559", "YuQRtpLP10I", "Female", "50"]
]
}

The headers section of the response describes the content of the query result. The event unique identifier, the program
stage identifier, the event date, the coordinates, the organisation unit name, the organisation unit code and the
organisation unit identifier appear as the first seven columns in the response and will always be present. Next come the
data elements, person attributes and person identifiers which were specified as dimensions in the request, in this case
the "Gender" and "Age" data element dimensions. The header section contains the identifier of the dimension item in
the "name" property and a readable dimension description in the "column" property.


The metaData section, ou object contains the identifiers of all organisation units present in the response mapped to a
string representing the hierarchy. This hierarchy string lists the identifiers of the ancestors (parents) of the organisation
unit starting from the root. The names object contains the identifiers of all items in the response mapped to their names.

The rows section contains the events produced by the query. Each row represents exactly one event.

1.24.3. Event aggregate analytics


In order to get aggregated numbers of events captured in DHIS 2 you can work with the analytics/events/aggregate
resource. This resource lets you retrieve aggregate data based on a program and optionally a program stage, and lets
you filter on any event dimension. In other words, it does not return the event information itself, rather the aggregate
numbers of events matching the request query. Event dimensions include data elements, person attributes, person
identifiers, periods and organisation units.

Aggregate event queries should be on the format described below.

api/analytics/events/aggregate/<program-id>?startDate=yyyy-MM-dd&endDate=yyyy-MM-dd
&dimension=ou:<ou-id>;<ou-id>&dimension=<item-id>&dimension=<item-
id>:<operator>:<filter>

For example, to retrieve aggregate numbers for events from the "Inpatient morbidity and mortality" program between
January and October 2014, where the "Gender" and "Age" data elements are included, the "Age" dimension item is
filtered on "18" and the "Gender" item is filtered on "Female", you can use the following query:

api/analytics/events/aggregate/eBAyeGv0exc?startDate=2014-01-01&endDate=2014-10-31
&dimension=ou:O6uvpzGd5pu&dimension=oZg33kd9taw:EQ:Female&dimension=qrur9Dvnyt5:GT:50

To retrieve data for fixed and relative periods instead of start and end date, in this case May 2014 and last 12 months,
and the organisation unit associated with the current user, you can use the following query:

api/analytics/events/aggregate/eBAyeGv0exc?dimension=pe:201405;LAST_12_MONTHS
&dimension=ou:USER_ORGUNIT;fdc6uOvgo7ji&dimension=oZg33kd9taw

In order to specify "Female" as a filter for "Gender" for the data response, meaning "Gender" will not be part of the
response but will filter the aggregate numbers in it, you can use the following syntax:

api/analytics/events/aggregate/eBAyeGv0exc?dimension=pe:2014;
&dimension=ou:O6uvpzGd5pu&filter=oZg33kd9taw:EQ:Female

To specify the "Bo" organisation unit and the period "2014" as filters, and the "Mode of discharge" and "Gender" as
dimensions, where "Gender" is filtered on the "Male" item, you can use a query like this:

api/analytics/events/aggregate/eBAyeGv0exc?filter=pe:2014&filter=ou:O6uvpzGd5pu
&dimension=fWIAEtYVEGk&dimension=oZg33kd9taw:EQ:Male

To create a "Top 3 report" for "Mode of discharge" you can use the limit and sortOrder query parameters similar to this:

api/analytics/events/aggregate/eBAyeGv0exc?filter=pe:2014&filter=ou:O6uvpzGd5pu
&dimension=fWIAEtYVEGk&limit=3&sortOrder=DESC

To specify a value dimension with a corresponding aggregation type you can use the value and aggregationType query
parameters. Specifying a value dimension will make the analytics engine return aggregate values for the values of that
dimension in the response as opposed to counts of events.

api/analytics/events/aggregate/eBAyeGv0exc.json?
stage=Zj7UnCAulEk&dimension=ou:ImspTQPwCqd
&dimension=pe:LAST_12_MONTHS&dimension=fWIAEtYVEGk&value=qrur9Dvnyt5&aggregationType=AVERAGE

1.24.3.1. Response formats


The default response representation format is JSON. Requests must use the HTTP GET method. The response
will look similar to this:

{
"headers": [
{
"name": "oZg33kd9taw",
"column": "Gender",
"type": "java.lang.String",
"meta": false
},
{
"name": "qrur9Dvnyt5",
"column": "Age",
"type": "java.lang.String",
"meta": false
},
{
"name": "pe",
"column": "Period",
"type": "java.lang.String",
"meta": false
},
{
"name": "ou",
"column": "Organisation unit",
"type": "java.lang.String",
"meta": false
},
{
"name": "value",
"column": "Value",
"type": "java.lang.String",
"meta": false
}
],
"metaData": {
"names": {
"eBAyeGv0exc": "Inpatient morbidity and mortality"
}
},
"width": 5,
"height": 39,
"rows": [
[
"Female",
"95",
"201405",
"O6uvpzGd5pu",
"2"
],
[
"Female",
"63",
"201405",
"O6uvpzGd5pu",
"2"
],
[
"Female",
"67",
"201405",
"O6uvpzGd5pu",
"1"
],
[
"Female",
"71",
"201405",
"O6uvpzGd5pu",
"1"
],
[
"Female",
"75",
"201405",
"O6uvpzGd5pu",
"14"
],
[
"Female",
"73",
"201405",
"O6uvpzGd5pu",
"5"
]
]
}

Note that the maximum number of rows that can be returned in a single response is 10 000. If the query produces more than this limit,
a 409 Conflict status code will be returned.

1.25. Geo features


The geoFeatures resource lets you retrieve geospatial information from DHIS 2. Geo features are stored together
with organisation units, and the syntax for retrieving features is identical to the syntax used for the organisation unit
dimension for the analytics resource. It is recommended to read up on the analytics api resource before continuing
reading this section. You must use the GET request type, and only JSON response format is supported.

As an example, to retrieve geo features for all organisation units at level 3 in the organisation unit hierarchy you can
use a GET request with the following URL:

api/geoFeatures.json?ou=ou:LEVEL-3

To retrieve geo features for organisation units at a given level within the boundary of a specific organisation unit (e.g. one at level 2) you
can use this URL:

api/geoFeatures.json?ou=ou:LEVEL-4;O6uvpzGd5pu

The semantics of the response properties are described in the following table.

Table 1.49. Geo features response

Property Description
id Organisation unit / geo feature identifier
na Organisation unit / geo feature name
hcd Has coordinates down, indicating whether one or more children organisation units exist with
coordinates (below in the hierarchy)
hcu Has coordinates up, indicating whether the parent organisation unit has coordinates (above in the
hierarchy)
le Level of this organisation unit / geo feature.
pg Parent graph, the graph of parent organisation unit identifiers up to the root in the hierarchy
pi Parent identifier, the identifier of the parent of this organisation unit
pn Parent name, the name of the parent of this organisation unit
ty Geo feature type, 1 = point and 2 = polygon or multi-polygon
co Coordinates of this geo feature
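
For illustration, a single element of the geoFeatures response could look like the following. This is a hypothetical sketch based on the properties above; the identifiers and names are taken from earlier examples in this chapter, while the coordinates are made up:

[ {
"id": "YuQRtpLP10I",
"na": "Badjia",
"hcd": true,
"hcu": true,
"le": 3,
"pg": "/ImspTQPwCqd/O6uvpzGd5pu",
"pi": "O6uvpzGd5pu",
"pn": "Bo",
"ty": 2,
"co": "[[[-11.4427,8.0824],[-11.4759,8.1007],[-11.4427,8.1007]]]"
} ]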

1.25.1. GeoJSON
Support for GeoJSON output was added in 2.17. To export GeoJSON, you can simply add .geojson as an extension to
the endpoint /api/organisationUnits, or you can use the Accept header application/json+geojson.

Two parameters are supported: level (defaults to 1) and parent (defaults to the root organisation units). Both can be specified
multiple times. Some examples follow.

Get all features at level 2 and 4:

api/organisationUnits.geojson?level=2&level=4

Get all features at level 3 with a boundary organisation unit:

api/organisationUnits.geojson?parent=fdc6uOvgoji&level=3

1.26. Generating resource, analytics and data mart tables


DHIS 2 features a set of generated database tables which are used as the basis for various system functionality. Generation of these tables
can be executed immediately or scheduled to run at regular intervals through the user interface. It can also
be triggered through the Web API as explained in this section. This task is typically one for a system administrator
and not for consuming clients.

The resource tables are used internally by the DHIS 2 application for various analysis functions. These tables are also
valuable for users writing advanced SQL reports. They can be generated with a POST or PUT request to the following
URL:

api/resourceTables

The analytics tables are optimized for data aggregation and are currently used in DHIS 2 for the pivot table module. The
analytics tables can be generated with a POST or PUT request to:

api/resourceTables/analytics

Table 1.50. Analytics tables optional query parameters

Query parameter Options Description


skipResourceTables false | true Skip generation of resource tables
skipAggregate false | true Skip generation of aggregate data and completeness data
skipEvents false | true Skip generation of event data
lastYears integer Number of last years of data to include

The data mart consists of tables containing pre-calculated, aggregated data which are used by the DHIS 2 analysis modules and can
be used directly by SQL reports. The data mart tables can be generated with a POST or PUT request to:

api/resourceTables/dataMart

These requests will return immediately and initiate a server-side process.
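
As an example, to initiate generation of the analytics tables while skipping event data and restricting the data to the last two years, a request could look like the following. This is a sketch; the host and credentials are placeholders:

curl -X POST "localhost/api/resourceTables/analytics?skipEvents=true&lastYears=2" -u admin:district -v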


1.27. Maintenance
To perform maintenance you can interact with the maintenance resource. You should use POST or PUT as method
for requests. The following requests are available.

Period pruning will remove periods which are not linked to any data values:

api/maintenance/periodPruning

Zero data value removal will delete zero data values linked to data elements where zero data is defined as not significant:

api/maintenance/zeroDataValueRemoval

Drop SQL views will drop all SQL views in the database. Note that it will not delete the DHIS 2 SQL view entities themselves, only the corresponding database views.

api/maintenance/dropSqlViews

Create SQL views will recreate all SQL views in the database.

api/maintenance/createSqlViews

Category option combo update will remove obsolete and generate missing category option combos for all category
combinations:

api/maintenance/categoryOptionComboUpdate

Cache clearing will clear the application Hibernate cache and the analytics partition caches:

api/maintenance/cache
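
As an example, period pruning could be triggered with a POST request like the following, where the host and credentials are placeholders:

curl -X POST "localhost/api/maintenance/periodPruning" -u admin:district -v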

1.28. System resource


The system resource provides you with convenient information and functions. The system resource can be found at
/api/system.

1.28.1. Generate identifiers


To generate valid, random DHIS 2 identifiers you can do a GET request to this resource:

http://<server-url>/api/system/id?n=3

The n query parameter is optional and indicates how many identifiers you want returned with the response. The
default is to return one identifier. The response will contain a JSON object with an array named codes, similar to this:

{
"codes": [
"Y0moqFplrX4",
"WI0VHXuWQuV",
"BRJNBBpu4ki"
]
}

The DHIS 2 UID format has these requirements:


• 11 characters long.
• Alphanumeric characters only, i.e. alphabetic or numeric characters (A-Za-z0-9).
• Start with an alphabetic character (A-Za-z).
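
As an illustrative sketch, these rules correspond to the regular expression ^[A-Za-z][A-Za-z0-9]{10}$, which can be used to validate an identifier from the command line:

echo "Y0moqFplrX4" | grep -E '^[A-Za-z][A-Za-z0-9]{10}$'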

1.28.2. View system information


To get information about the current system you can do a GET request to this URL:

http://yourdomain.com/api/system/info

JSON and JSONP response formats are supported. The system info response currently includes the properties below.
Note that if the user who is requesting this resource does not have full authority in the system then only the first seven
properties will be included, as this information is security sensitive.

{
"contextPath": "http://yourdomain.com",
"userAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.62 Safari/537.36",
"version": "2.13-SNAPSHOT",
"revision": "11852",
"buildTime": "2013-09-01T21:36:21.000+0000",
"serverDate": "2013-09-02T12:35:54.311+0000",
"environmentVariable": "DHIS2_HOME",
"javaVersion": "1.7.0_06",
"javaVendor": "Oracle Corporation",
"javaIoTmpDir": "/tmp",
"javaOpts": "-Xms600m -Xmx1500m -XX:PermSize=400m -XX:MaxPermSize=500m",
"osName": "Linux",
"osArchitecture": "amd64",
"osVersion": "3.2.0-52-generic",
"externalDirectory": "/home/dhis/config/dhis2",
"databaseInfo": {
"type": "PostgreSQL",
"name": "dhis2",
"user": "dhis"
},
"memoryInfo": "Mem Total in JVM: 848 Free in JVM: 581 Max Limit: 1333",
"cpuCores": 8
}

To get information about the system context only (contextPath and userAgent) you can do a GET request to the URL
below. JSON and JSONP response formats are supported:

http://yourdomain.com/api/system/context
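
A minimal request for the context information could look like this, where the host and credentials are placeholders:

curl "http://yourdomain.com/api/system/context" -u admin:district -v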

1.28.3. Check if username and password combination is correct


To check if some user credentials (a username and password combination) are correct you can make a GET request to
the following resource using basic authentication:

http://<server-url>/api/system/ping

You can detect the outcome of the authentication by inspecting the HTTP status code of the response. The
meanings of the possible status codes are listed below. Note that this applies to Web API requests in general.

Table 1.51. HTTP Status codes

HTTP status code Description Outcome


200 OK Authentication was successful
302 Found No credentials were supplied with the request - no authentication took place
401 Unauthorized The username and password combination was incorrect - authentication failed
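
As a sketch, you can inspect the status code directly with curl, which will print 200 when the supplied credentials are valid. The host and credentials are placeholders:

curl -o /dev/null -s -w "%{http_code}\n" -u admin:district "localhost/api/system/ping"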

1.29. Users
This section covers the user resource methods.


1.29.1. User query


The users resource offers additional query parameters beyond the standard parameters (e.g. paging). To query for users
at the users resource you can use the following parameters.

Table 1.52. User query parameters

Parameter Type Description


query Text Query value for first name, surname, username and email, case-insensitive.
phoneNumber Text Query for phone number.
canManage false | true Filter on whether the current user can manage the returned users through the
managed user group relationships.
authSubset false | true Filter on whether the returned users have a subset of the authorities of the current
user.
lastLogin Date Filter on users who have logged in later than the given date.
inactiveMonths Number Filter on users who have not logged in for the given number of months.
inactiveSince Date Filter on users who have not logged in later than the given date.
selfRegistered false | true Filter on users who have self-registered their user account.
invitationStatus none | all | expired Filter on user invitations, including all or expired invitations.
ou Identifier Filter on users who are associated with the organisation unit with the given identifier.
page Number The page number.
pageSize Number The page size.

A query for a maximum of 10 users with "konan" as first name or surname (case-insensitive) who have a subset of authorities
compared to the current user:

/api/users?query=konan&authSubset=true&pageSize=10

1.29.2. User account invitations


The Web API supports inviting people to create user accounts through the invite resource. To create an invitation you
should POST a user in XML or JSON format to the invite resource. A specific username can be forced by defining
the username in the posted entity. By omitting the username, the person will be able to specify it herself. The system
will send out an invitation through email. This requires that email settings have been properly configured. The invite
resource is useful in order to securely allow people to create accounts without anyone else knowing the password and without
transferring the password in plain text. The payload to use for the invite is the same as for creating users. An example
payload in JSON looks like this:

{
"firstName": "John",
"surname": "Doe",
"email": "[email protected]",
"userCredentials": {
"username": "johndoe",
"userRoles": [ {
"id": "Euq3XfEIEbx"
} ]
},
"organisationUnits": [ {
"id": "ImspTQPwCqd"
} ],
"userGroups": [ {
"id": "vAvEltyXGbD"
} ]
}

The user invite entity can be posted like this:

curl -d @invite.json "localhost/api/users/invite" -H "Content-Type:application/json" -u admin:district -v

To send out invites for multiple users at the same time you must use a slightly different format. For JSON:

{
"users": [ {
"firstName": "John",
"surname": "Doe",
"email": "[email protected]",
"userCredentials": {
"username": "johndoe",
"userRoles": [ {
"id": "Euq3XfEIEbx"
} ]
},
"organisationUnits": [ {
"id": "ImspTQPwCqd"
} ]
}, {
"firstName": "Tom",
"surname": "Johnson",
"email": "[email protected]",
"userCredentials": {
"userRoles": [ {
"id": "Euq3XfEIEbx"
} ]
},
"organisationUnits": [ {
"id": "ImspTQPwCqd"
} ]
}
]
}

To create multiple invites you can post the payload to the api/users/invites resource like this:

curl -d @invites.json "localhost/api/users/invites" -H "Content-Type:application/json" -u admin:district -v

There are certain requirements for user account invitations to be sent out:
• Email SMTP server must be configured properly on the server.
• The user to be invited must have specified a valid email.
• The user to be invited must not be granted user roles with critical authorities (see below).
• If username is specified it must not be already taken by another existing user.
If any of these requirements are not met, the invite resource will return a 409 Conflict status code together with
a descriptive message.

The critical authorities which cannot be granted with invites include:


• ALL
• Scheduling administration
• Set system settings
• Add, update, delete and list user roles
• Add, update, delete and view SQL views


1.29.3. User replication


To replicate a user you can use the replica resource. Replicating a user can be useful when debugging or reproducing
issues reported by a particular user. You need to provide a new username and password for the replicated user which
you will use to authenticate later. Note that you need the ALL authority to perform this action. To replicate a user you
can post a JSON payload looking like below:

{
"username": "replica",
"password": "Replica.1234"
}

This payload can be posted to the replica resource, where you provide the identifier of the user to replicate in the URL:

/api/users/<uid>/replica

An example of replicating a user using curl looks like this:

curl -d @replica.json "localhost/api/users/N3PZBUlN8vq/replica" -H "Content-Type:application/json" -u admin:district -v

1.30. Current user information and associations


In order to get information about the currently authenticated user and its associations to other resources you can work
with the me resource (you can also refer to it by its old name, currentUser). The current user related resources give
you information which is useful when building clients, for instance for data entry and user management. The following
describes these resources and their purpose.

Provides basic information about the user that you are currently logged in as, including username, user credentials,
assigned organisation units:

/api/me

Gives information about currently unread messages and interpretations:

/api/me/dashboard

Lists all messages and interpretations in the inbox (including replies):

/api/me/inbox

Gives the full profile information for the current user. This endpoint supports both GET to retrieve the profile and POST to
update it (the exact same format is used):

/api/me/user-account

Returns the set of authorities granted to the current user:

/api/me/authorization

Returns true or false, indicating whether the current user has been granted the given <auth> authorization:

/api/me/authorization/<auth>

Lists all organisation units directly assigned to the user:

/api/me/organisationUnits

Gives all the data sets assigned to the user's organisation units, and their direct children. This endpoint contains all
required information to build a form based on one of the data sets. If you want all descendants of your assigned
organisation units, you can use the query parameter includeDescendants=true, as shown in the example below:

/api/me/dataSets
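
For example, to retrieve the data sets including all descendants of your assigned organisation units, a request could look like the following, where the host and credentials are placeholders:

curl "localhost/api/me/dataSets?includeDescendants=true" -u admin:district -v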


Gives all the programs assigned to the user's organisation units, and their direct children. This endpoint contains all
required information to build a form based on one of the programs. If you want all descendants of your assigned
organisation units, you can use the query parameter includeDescendants=true :

/api/me/programs

Gives the data approval levels which are relevant to the current user:

/api/me/dataApprovalLevels

1.31. System settings


You can manipulate system settings by interacting with the systemSettings resource. A system setting is a simple key-
value pair, where both the key and the value are plain text strings. To save or update a system setting you can make
a POST request to the following URL:

/api/systemSettings/my-key?value=my-val

Alternatively, you can submit the setting value as the request body, where content type is set to "text/plain". As an
example, you can use curl like this:

curl "apps.dhis2.org/demo/api/systemSettings/my-key" -d "My long value" -H "Content-


Type: text/plain" -u admin:district -v

To set system settings in bulk you can send a JSON object with a property-value pair for each system setting key-value
pair using a POST request:

{
"keyApplicationNotification": "Welcome",
"keyApplicationIntro": "DHIS 2",
"keyApplicationFooter": "Read more at dhis2.org"
}
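
Assuming the JSON object above is saved as a file settings.json, it could be posted to the systemSettings resource like this. This is a sketch; the host and credentials are placeholders:

curl -d @settings.json "localhost/api/systemSettings" -X POST -H "Content-Type: application/json" -u admin:district -v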

You should replace my-key with your real key and my-val with your real value. To retrieve the value for a given key
in plain text you can make a GET request to the following URL:

/api/systemSettings/my-key

Alternatively, you can specify the key as a query parameter:

/api/systemSettings?key=my-key

You can retrieve specific system settings as JSON by repeating the key query parameter:

curl "apps.dhis2.org/demo/api/systemSettings?
key=keyApplicationNotification&key=keyApplicationIntro" -H "Content-Type: application/
json" -u admin:district -v

You can retrieve all system settings with a GET request:

/api/systemSettings

To delete a system setting, you can make a DELETE request to the URL similar to the one used above for retrieval.

1.32. User settings


You can manipulate user settings by interacting with the userSettings resource. A user setting is a simple key-value pair,
where both the key and the value are plain text strings. The user setting will be linked to the user who is authenticated
for the Web API request. To save or update a setting for the currently authenticated user you can make a POST request
to the following URL:

/api/userSettings/my-key?value=my-val


You can specify the user for which to save the setting explicitly with this syntax:

/api/userSettings/my-key?user=user-id&value=my-val

Alternatively, you can submit the setting value as the request body, where content type is set to "text/plain". As an
example, you can use curl like this:

curl "apps.dhis2.org/demo/api/userSettings/my-key" -d "My long value" -H "Content-


Type: text/plain" -u admin:district -v

You should replace my-key with your real key and my-val with your real value. To retrieve the value for a given key
in plain text you can make a GET request to the following URL:

/api/userSettings/my-key

To delete a user setting, you can make a DELETE request to the URL similar to the one used above for retrieval.

1.33. Configuration
To access configuration you can interact with the configuration resource. You can get XML and JSON responses
through the Accept header or by using the .json or .xml extensions. You can GET the configuration from:

/api/configuration

You can send GET requests to the following sub-resources:

/api/configuration/systemId

/api/configuration/feedbackRecipients

/api/configuration/offlineOrganisationUnitLevel

/api/configuration/infrastructuralDataElements

/api/configuration/infrastructuralPeriodType

/api/configuration/selfRegistrationRole

/api/configuration/selfRegistrationOrgUnit

1.34. Translations
In order to retrieve key-value pairs for translated strings you can use the i18n resource. The endpoint is located at api/
i18n and the request payload is a simple array of the keys to translate:

[
"access_denied",
"uploading_data_notification"
]

The request must be of type POST and use application/json as content-type. An example using curl, assuming the
request data is saved as a file keys.json:

curl -d @keys.json "apps.dhis2.org/demo/api/i18n" -X POST -H "Content-Type: application/json" -u admin:district -v

The result will look like this:

{
"access_denied":"Access denied",
"uploading_data_notification":"Uploading locally stored data to the server"
}


1.35. SVG conversion


The Web API provides a resource which can be used to convert SVG content into more widely used formats such
as PNG and PDF. Ideally this conversion should happen on the client side, but not all client side technologies are
capable of performing this task. Currently PNG and PDF output formats are supported. The SVG content itself should
passed with a svg query parameter, and an optional query parameter filename can be used to specify the filename of the
response attachment file. Note that the file extension should be omitted. For PNG you can send a POST request to the
following URL with Content-type application/x-www-form-urlencoded, identical to a regular HTML form submission.

api/svg.png

For PDF you can send a POST request to the following URL with Content-type application/x-www-form-urlencoded.

api/svg.pdf

Table 1.53. Query parameters

Query parameter Required Description


svg Yes The SVG content
filename No The file name for the returned attachment without file extension
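
As a sketch, a PNG conversion request could be sent with curl using URL-encoded form parameters. The SVG content here is a trivial placeholder, and the host and credentials are placeholders as well:

curl -X POST "localhost/api/svg.png" --data-urlencode "svg=<svg xmlns='http://www.w3.org/2000/svg'><rect width='10' height='10'/></svg>" --data-urlencode "filename=chart" -u admin:district -o chart.png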

1.36. Tracked entity management


Tracked entities have full CRUD (create, read, update, delete) support in the Web API. A tracked entity has only two
required properties: name and description.

{
"name": "Name of tracked entity",
"description": "Description of tracked entity"
}

This payload can be sent to the trackedEntities resource; both POST and PUT are supported. For deleting a tracked
entity you must use the DELETE method at the /api/trackedEntities/UID resource.
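
Assuming the payload above is saved as a file trackedEntity.json, a create request could look like this, where the host and credentials are placeholders:

curl -d @trackedEntity.json "localhost/api/trackedEntities" -X POST -H "Content-Type: application/json" -u admin:district -v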

1.37. Tracked entity instance management


Tracked entity instances have full CRUD (create, read, update, delete) support in the Web-API. Together with the API
for enrollment most operations needed for working with tracked entity instances and programs are supported.

1.37.1. Creating a new tracked entity instance


For creating a new person in the system, you will be working with the trackedEntityInstances resource. A template
payload can be seen below:

{
"trackedEntity": "tracked-entity-id",
"orgUnit": "org-unit-id",
"attributes": [ {
"attribute": "attribute-id",
"value": "attribute-value"
} ]
}

To get the identifiers of relationship types and attributes you can have a look at the respective relationshipTypes and
trackedEntityAttributes resources. To create a tracked entity instance you must use the HTTP POST method. You can post the
payload to the following URL:

/api/trackedEntityInstances


For example, let us create a new instance of a person tracked entity and specify its first name and last name attributes:

{
"trackedEntity": "cyl5vuJ5ETQ",
"orgUnit": "DiszpKrYNg8",
"attributes": [
{
"attribute": "dv3nChNSIxy",
"value": "Joe"
},
{
"attribute": "hwlRTFIFSUq",
"value": "Smith"
}
]
}

To push this to the server you can use the cURL command like this:

curl -d @tei.json "apps.dhis2.org/demo/api/trackedEntityInstances" -X POST -H "Content-Type: application/json" -u admin:district -v

1.37.2. Updating a tracked entity instance


For updating a tracked entity instance, the payload is equal to that of the previous section. The difference is that you
must use the HTTP PUT method for the request when sending the payload. You will also need to append the tracked entity instance
identifier to the trackedEntityInstances resource in the URL like this, where <tracked-entity-instance-id> should
be replaced by the identifier of the tracked entity instance:

/api/trackedEntityInstances/<tracked-entity-instance-id>

1.37.3. Deleting a tracked entity instance


To delete a tracked entity instance you can make a request to the URL identifying the tracked entity instance with the
HTTP DELETE method. The URL is equal to the one above used for update.

1.37.4. Enrolling a tracked entity instance into a program


For enrolling persons into a program, you will need to first get the identifier of the person from the
trackedEntityInstances resource. Then, you will need to get the program identifier from the programs resource. A
template payload can be seen below:

{
"trackedEntityInstance": "ZRyCnJ1qUXS",
"program": "S8uo8AlvYMz",
"dateOfEnrollment": "2013-09-17",
"dateOfIncident": "2013-09-17"
}

This payload should be used in a POST request to the enrollments resource identified by the following URL:

/api/enrollments
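
Assuming the payload above is saved as a file enrollment.json, the enrollment could be created like this, where the host and credentials are placeholders:

curl -d @enrollment.json "localhost/api/enrollments" -X POST -H "Content-Type: application/json" -u admin:district -v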

For cancelling or completing an enrollment, you can make a PUT request to the enrollments resource, including the
identifier and the action you want to perform. For cancelling an enrollment for a tracked entity instance:

/api/enrollments/<enrollment-id>/cancelled

For completing an enrollment for a tracked entity instance you can make a PUT request to the following URL:

/api/enrollments/<enrollment-id>/completed


For deleting an enrollment, you can make a DELETE request to the following URL:

/api/enrollments/<enrollment-id>

1.37.5. Update strategies


Two update strategies are supported for tracked entity instance, enrollment and event creation requests. This is useful when you
have generated an identifier on the client side and are not sure if it was created or not on the server.

Table 1.54. Available tracker strategies

Parameter Description
CREATE Create only, this is the default behavior.
CREATE_AND_UPDATE Try to match on identifier; if it exists then update, if not then create.

To change the strategy, use the strategy query parameter:

POST /api/trackedEntityInstances?strategy=CREATE_AND_UPDATE

1.38. Tracked entity instance query


To query for tracked entity instances you can interact with the /api/trackedEntityInstances resource. There are two
types of queries: one where a query query parameter and optionally attribute parameters are defined, and one where
attribute and filter parameters are defined.

1.38.1. Request syntax


Table 1.55. Tracked entity instances query parameters

Query Description
parameter
query Query string. Attribute query parameter can be used to define which attributes to include in the
response. If no attributes but a program is defined, the attributes from the program will be used.
If no program is defined, all attributes will be used. There are two formats. The first is a plain
query string. The second is on the format <operator>:<query>. Operators can be EQ | LIKE. EQ
implies exact matches on words, LIKE implies partial matches on words. The query will be split
on space, where each word will form a logical AND query.
attribute Attributes to be included in the response. Can also be used as a filter for the query. Param
can be repeated any number of times. Filters can be applied to a dimension on the format
<attribute-id>:<operator>:<filter>[:<operator>:<filter>]. Filter values are case-insensitive and
can be repeated together with operator any number of times. Operators can be EQ | GT | GE |
LT | LE | NE | LIKE | IN. Filters can be omitted in order to simply include the attribute in the
response without any constraints.
filter Attributes to use as a filter for the query. Param can be repeated any number of times. Filters can
be applied to a dimension on the format <attribute-id>:<operator>:<filter>[:<operator>:<filter>].
Filter values are case-insensitive and can be repeated together with operator any number of times.
Operators can be EQ | GT | GE | LT | LE | NE | LIKE | IN.
ou Organisation unit identifiers, separated by ";".
ouMode The mode of selecting organisation units, can be SELECTED | CHILDREN | DESCENDANTS
| ACCESSIBLE | ALL. Default is SELECTED, which refers to the selected organisation units
only. See table below for explanations.
program Program identifier. Restricts instances to being enrolled in the given program.
programStatus Status of the instance for the given program. Can be ACTIVE | COMPLETED | CANCELLED.

followUp Follow up status of the instance for the given program. Can be true | false or omitted.
programStartDate Start date of enrollment in the given program for the tracked entity instance.
programEndDate End date of enrollment in the given program for the tracked entity instance.
trackedEntity Tracked entity identifier. Restricts instances to the given tracked entity type.
eventStatus Status of any event associated with the given program and the tracked entity instance. Can be
COMPLETED | VISITED | FUTURE_VISIT | LATE_VISIT | SKIPPED.
eventStartDate Start date of event associated with the given program and event status.
eventEndDate End date of event associated with the given program and event status.
skipMeta Indicates whether meta data for the response should be included.
page The page number. Default page is 1.
pageSize The page size. Default size is 50 rows per page.
skipPaging Indicates whether paging should be ignored and all rows should be returned.

The available organisation unit selection modes are explained in the following table.

Table 1.56. Organisation unit selection modes

Mode Description
SELECTED Organisation units defined in the request.
CHILDREN Immediate children, i.e. only the first level below, of the organisation units defined in the
request.
DESCENDANTS All children, i.e. at all levels below, e.g. including children of children, of the organisation
units defined in the request.
ACCESSIBLE All descendants of the data view organisation units associated with the current user. Will
fall back to data capture organisation units associated with the current user if the former
is not defined.
ALL All organisation units in the system. Requires authority.

Note that you can specify attributes with filters for constraining the instances to return, or attributes without filters in
order to include the attribute in the response without any constraints. Attributes will be included in the response, while
filters will only be used as criteria.

Certain rules apply to which attributes are defined when no attributes are specified in the request:
• If not specifying a program, the attributes defined to be displayed in lists with no program will be included in the
response.
• If specifying a program, the attributes linked to the program will be included in the response.

You can specify queries with words separated by space - in that situation the system will query for each word
independently and return records where each word is contained in any attribute. A query item can be specified once
as an attribute and once as a filter if needed. The query is case insensitive. The following rules apply to the query
parameters.
• At least one organisation unit must be specified using the ou parameter (one or many).
• Only one of the program and trackedEntity parameters can be specified (zero or one).
• If programStatus is specified then program must also be specified.
• If followUp is specified then program must also be specified.
• If programStartDate or programEndDate is specified then program must also be specified.
• If eventStatus is specified then eventStartDate and eventEndDate must also be specified.
• A query cannot be specified together with filters.
• Attribute items can only be specified once.
• Filter items can only be specified once.

A query for all instances associated with a specific organisation unit can look like this:

api/trackedEntityInstances.json?ou=DiszpKrYNg8

A query on all attributes for a specific value and organisation unit, using an exact word match:

api/trackedEntityInstances.json?query=scott&ou=DiszpKrYNg8

A query on all attributes for a specific value, using a partial word match:

api/trackedEntityInstances.json?query=LIKE:scott&ou=DiszpKrYNg8

You can query on multiple words separated by the URL escape character for space, %20; this will use a logical AND
query for each word:

api/trackedEntityInstances.json?query=isabel%20may&ou=DiszpKrYNg8

A query where the attributes to include in the response are specified:

api/trackedEntityInstances.json?
query=isabel&attribute=dv3nChNSIxy&attribute=AMpUYgxuCaE&ou=DiszpKrYNg8

To query for instances using one attribute with a filter and one attribute without a filter, with one organisation unit
using the descendants organisation unit query mode:

api/trackedEntityInstances.json?
attribute=zHXD5Ve1Efw:EQ:A&attribute=AMpUYgxuCaE&ou=DiszpKrYNg8;yMCshbaVExv

A query for instances where one attribute is included in the response and one attribute is used as a filter:

api/trackedEntityInstances.json?
attribute=zHXD5Ve1Efw:EQ:A&filter=AMpUYgxuCaE:LIKE:Road&ou=DiszpKrYNg8

A query where multiple operators and filters are specified for a filter item:

api/trackedEntityInstances.json?
ou=DiszpKrYNg8&program=ur1Edk5Oe2n&filter=lw1SqmMlnfh:GT:150:LT:190

To query on an attribute using multiple values in an IN filter:

api/trackedEntityInstances.json?
ou=DiszpKrYNg8&attribute=dv3nChNSIxy:IN:Scott;Jimmy;Santiago

To constrain the response to instances which are part of a specific program you can include a program query parameter:

api/trackedEntityInstances.json?
filter=zHXD5Ve1Efw:EQ:A&ou=O6uvpzGd5pu&ouMode=DESCENDANTS&program=ur1Edk5Oe2n

To specify program enrollment dates as part of the query:

api/trackedEntityInstances.json?filter=zHXD5Ve1Efw:EQ:A
&ou=O6uvpzGd5pu&program=ur1Edk5Oe2n
&programStartDate=2013-01-01&programEndDate=2013-09-01

To constrain the response to instances of a specific tracked entity you can include a tracked entity query parameter:

api/trackedEntityInstances.json?attribute=zHXD5Ve1Efw:EQ:A
&ou=O6uvpzGd5pu&ouMode=DESCENDANTS&trackedEntity=cyl5vuJ5ETQ

By default the instances are returned in pages of size 50, to change this you can use the page and pageSize query
parameters:

api/trackedEntityInstances.json?attribute=zHXD5Ve1Efw:EQ:A
&ou=O6uvpzGd5pu&ouMode=DESCENDANTS&page=2&pageSize=3


To query for instances which have events of a given status within a given time span:

api/trackedEntityInstances.json?ou=O6uvpzGd5pu
&program=ur1Edk5Oe2n&eventStatus=LATE_VISIT
&eventStartDate=2014-01-01&eventEndDate=2014-09-01

You can use a range of operators for the filtering:

Table 1.57. Filter operators

Operator Description
EQ Equal to
GT Greater than
GE Greater than or equal to
LT Less than
LE Less than or equal to
NE Not equal to
LIKE Like (free text match)
IN Equal to one of multiple values separated by ";"

1.38.2. Response format


This resource supports JSON, JSONP, XML, CSV and XLS resource representations.
• json (application/json)
• jsonp (application/javascript)
• xml (application/xml)
• csv (application/csv)
• xls (application/vnd.ms-excel)

The JSON response is in a tabular format and can look like the following. The headers section describes the
content of each column. The instance, created, last updated, org unit and tracked entity columns are always present.
The following columns correspond to attributes specified in the query. The rows section contains one row per instance.

{
"headers": [{
"name": "instance",
"column": "Instance",
"type": "java.lang.String"
}, {
"name": "created",
"column": "Created",
"type": "java.lang.String"
}, {
"name": "lastupdated",
"column": "Last updated",
"type": "java.lang.String"
}, {
"name": "ou",
"column": "Org unit",
"type": "java.lang.String"
}, {
"name": "te",
"column": "Tracked entity",
"type": "java.lang.String"
}, {
"name": "zHXD5Ve1Efw",
"column": "Date of birth type",
"type": "java.lang.String"
}, {
"name": "AMpUYgxuCaE",
"column": "Address",
"type": "java.lang.String"
}],
"metaData": {
"names": {
"cyl5vuJ5ETQ": "Person"
}
},
"width": 7,
"height": 7,
"rows": [
["yNCtJ6vhRJu", "2013-09-08 21:40:28.0", "2014-01-09 19:39:32.19",
"DiszpKrYNg8", "cyl5vuJ5ETQ", "A", "21 Kenyatta Road"],
["fSofnQR6lAU", "2013-09-08 21:40:28.0", "2014-01-09 19:40:19.62",
"DiszpKrYNg8", "cyl5vuJ5ETQ", "A", "56 Upper Road"],
["X5wZwS5lgm2", "2013-09-08 21:40:28.0", "2014-01-09 19:40:31.11",
"DiszpKrYNg8", "cyl5vuJ5ETQ", "A", "56 Main Road"],
["pCbogmlIXga", "2013-09-08 21:40:28.0", "2014-01-09 19:40:45.02",
"DiszpKrYNg8", "cyl5vuJ5ETQ", "A", "12 Lower Main Road"],
["WnUXrY4XBMM", "2013-09-08 21:40:28.0", "2014-01-09 19:41:06.97",
"DiszpKrYNg8", "cyl5vuJ5ETQ", "A", "13 Main Road"],
["xLNXbDs9uDF", "2013-09-08 21:40:28.0", "2014-01-09 19:42:25.66",
"DiszpKrYNg8", "cyl5vuJ5ETQ", "A", "14 Mombasa Road"],
["foc5zag6gbE", "2013-09-08 21:40:28.0", "2014-01-09 19:42:36.93",
"DiszpKrYNg8", "cyl5vuJ5ETQ", "A", "15 Upper Hill"]
]
}

1.39. Email
The Web API features a resource for sending emails. For emails to be sent it is required that the SMTP configuration
has been properly set up and that a system notification email address for the DHIS 2 instance has been defined. You
can set SMTP settings from the email settings screen and system notification email address from the general settings
screen in DHIS 2.

1.39.1. System notification


The notification resource lets you send system email notifications with a given subject and text in JSON or XML. The
email will be sent to the notification email address as defined in the DHIS2 general system settings:

{
"subject": "Integrity check summary",
"text": "All checks ran successfully"
}

You can send a system email notification by posting to the notification resource like this:

curl -d @email.json "localhost/api/email/notification" -X POST -H "Content-Type:application/json" -u admin:district -v

1.39.2. Test message


To test whether the SMTP setup is correct, you can send a test email to yourself by interacting with the test resource.
To send test emails it is required that your DHIS 2 user account has a valid email address associated with it. You can
send a test email like this:


curl "localhost/api/email/test" -X POST -H "Content-Type:application/json" -u


admin:district -v

1.40. Sharing
The sharing solution allows you to share most objects in the system with specific user groups and to define whether
objects should be public or private. To get and set sharing for objects you can interact with the sharing resource. To
request the sharing status for an object use a GET request to:

api/sharing?type=dataElement&id=fbfJHSPpUQD

You can define the sharing status for an object using the same URL with a POST request, where the payload in JSON
format looks like this:

{
"meta": {
"allowPublicAccess": true,
"allowExternalAccess": false
},
"object": {
"id": "fbfJHSPpUQD",
"name": "ANC 1st visit",
"publicAccess": "rw------",
"externalAccess": false,
"user": {},
"userGroupAccesses": [
{
"id": "hj0nnsVsPLU",
"access": "rw------"
},
{
"id": "qMjBflJMOfB",
"access": "r-------"
}
]
}
}

In this example, the payload defines the object to have read-write public access, no external access (without login),
read-write access to one user group and read-only access to another user group. You can submit this to the sharing
resource using curl:

curl -d @sharing.json "localhost/api/sharing?type=dataElement&id=fbfJHSPpUQD" -H "Content-Type:application/json" -u admin:district -v

1.41. Scheduling
To schedule tasks to run at fixed intervals you can interact with the scheduling resource. To configure tasks you can
do a POST request to the following resource:

/api/scheduling

The payload in JSON format is described below.

{
"resourceTableStrategy": "allDaily",
"analyticsStrategy": "allDaily",
"dataMartStrategy": "allDaily",
"monitoringStrategy": "allDaily",
"dataSynchStrategy": "enabled"
}

An example using curl:

curl "localhost/dhis/api/scheduling" -d @scheduling.json -X POST -u admin:district -H


"Content-Type:application/json" -v

The table below lists available task strategies.

Table 1.58. Task strategies

Task Strategies
Resource table task allDaily | allEvery15Min
Analytics task allDaily | last3YearsDaily
Data mart task allDaily
Monitoring allDaily
Data synch task enabled

1.42. Schema Resource


A new resource was included in DHIS 2.15 which can be used to introspect all available DXF2 classes. This resource
can be found at /api/schemas, and for a specific resource you can have a look at /api/schemas/TYPE.

Example 1: Get all available schemas in XML:

GET /api/schemas.xml

Example 2: Get all available schemas in JSON:

GET /api/schemas.json

Example 3: Get JSON schema for a specific class:

GET /api/schemas/dataElement.json

1.43. UI Customization
To customize the UI of the DHIS 2 application you can insert custom Javascript and CSS styles through the files
resource. The Javascript and CSS content inserted through this resource will be loaded by the DHIS 2 web application.
This can be particularly useful in certain situations:
• Overriding the CSS styles of the DHIS 2 application, such as the login page or main page.
• Defining Javascript functions which are common to several custom data entry forms and HTML-based reports.
• Including CSS styles which are used in custom data entry forms and HTML-based reports.

To insert Javascript from a file called script.js you can interact with the files/script resource with a POST-request:

curl --data-binary @script.js "localhost/api/files/script" -H "Content-Type:application/javascript" -u admin:district -v

Note that we use the --data-binary option to preserve formatting of the file content. You can fetch the Javascript content
with a GET-request:

localhost/api/files/script

To insert CSS from a file called style.css you can interact with the files/style resource with a POST-request:

curl --data-binary @style.css "localhost/api/files/style" -H "Content-Type:text/css" -u admin:district -v


You can fetch the CSS content with a GET-request:

localhost/api/files/style

1.44. FRED API


DHIS 2 from version 2.11 implements support for the current draft of the FRED API version 1.0. The project defines
itself as "an open standard for sharing and updating health facility data". The full specification, including representation
format and basic usage, can be found at http://facilityregistry.org/.

Since version 1.0 is not finalized there are parts of the current specification that have not been implemented, as we found
them not to be in a stable enough state. Most notably we do not currently support sorting (we do however sort on name
by default) and filtering of facilities.

The entry point for the implementation can be found at http://<server-url>/api-fred and the current version is located
at http://<server-url>/api-fred/v1.

This section will give some simple examples of using the API.

Get all facilities:

curl -u username:password -X GET http://<server-url>/api-fred/v1/facilities.json

Get a specific facility based on either identifier or UUID:

curl -u username:password -X GET http://<server-url>/api-fred/v1/facilities/<id>.json


curl -u username:password -X GET http://<server-url>/api-fred/v1/facilities/<uuid>.json

Create a new facility:

curl -u username:password -X POST -d @new_facility.json -H "Content-Type: application/json" http://<server-url>/api-fred/v1/facilities.json

Update a facility:

curl -u username:password -X POST -d @updated_facility.json -H "Content-Type: application/json" http://<server-url>/api-fred/v1/facilities/<id>.json

curl -u username:password -X POST -d @updated_facility.json -H "Content-Type: application/json" http://<server-url>/api-fred/v1/facilities/<uuid>.json


Chapter 2. Apps in DHIS2


A packaged app is an Open Web App that has all of its resources (HTML, CSS, JavaScript, app manifest, and so on)
contained in a zip file. It can be uploaded to a DHIS2 installation directly through the user interface at runtime. A
packaged app is a ZIP file with an app manifest in its root directory. The manifest must be named manifest.webapp.
A thorough description of apps can be obtained here.

2.1. Purpose of Packaged Apps


The purpose of packaged apps is to extend the web interface of DHIS2, without the need to modify the source code of
DHIS2 itself. A system deployment will often have custom and unique requirements. The apps provide a convenient
extension point to the user interface. Through apps, you can complement and customize the DHIS 2 core functionality
with custom solutions in a loosely coupled and clean manner.

Apps do not have permissions to interact directly with DHIS2 Java API. Instead, apps are expected to use functionality
and interact with the DHIS2 services and data by utilizing the DHIS2 Web API.

2.2. Creating Apps


DHIS2 apps are constructed with HTML, JavaScript and CSS files, similar to any other web application. Apps also
need a special file called manifest.webapp which describes the contents of the app. This file should be in the format
specified by the W3C Manifest for Web Applications. A basic example of the manifest.webapp is shown below:

{
"version": "0.1",
"name": "My App",
"description": "My App is a Packaged App",
"launch_path": "/index.html",
"icons": {
"16": "/img/icons/mortar-16.png",
"48": "/img/icons/mortar-48.png",
"128": "/img/icons/mortar-128.png"
},
"developer": {
"name": "Me",
"url": "http://me.com"
},
"default_locale": "en",
"activities": {
"dhis": {
"href": "*"
}
}
}

The manifest.webapp file must be located at the root of the project. Among the properties, the 48 icon property is
used for the icon that is displayed in the list of apps installed on a DHIS 2 instance. The activities property is
a dhis-specific extension meant to differentiate between a standard Open Web App and an app that can be installed
in DHIS 2. The * value for href is converted to the appropriate URL when the app is uploaded and installed in DHIS
2. This value can then be used by the application's JavaScript and HTML files to make calls to the DHIS 2 Web API
and identify the correct location of the DHIS 2 server on which the app has been installed. To clarify, the activities part
will look similar to this after the app has been installed:

"activities": {
"dhis": {
"href": "http://apps.dhis2.org/demo"
}
}

To read the JSON structure into Javascript, you can use a regular AJAX request and parse the JSON into an object.
Most Javascript libraries provide some support, for instance with jQuery it can be done like this:

$.getJSON( "manifest.webapp", function( json ) {
    var apiBaseUrl = json.activities.dhis.href + "/api";
} );

The app can contain HTML, Javascript, CSS, images and other files which may be required to support it. The file
structure could look something like this:

/
/manifest.webapp #manifest file (mandatory)
/css/ #css stylesheets (optional)
/img/ #images (optional)
/js/ #javascripts (optional)

Note that it is only the manifest.webapp file which must be placed in the root. It is up to the developer to organize
CSS, images and Javascript files inside the app as needed.

All the files in the project should be compressed into a standard zip archive. Note that the manifest.webapp file must
be located on the root of the zip archive (do not include a parent directory in the archive). The zip archive can then
be installed into DHIS2 as you will see in the next section.

2.3. Configuring DHIS2 for Apps Installation


The App Manager is found under Services → Apps. If your logged-in user has permissions to view and edit settings
you will see the Settings link in the left menu.

The following settings can be configured:

1. App Installation Folder: The folder on the file system where apps are unpacked. By default this is under the expanded
DHIS folder, suffixed by /apps. If you would like to install your apps in another location, say the www folder of Apache 2, you
can specify the absolute path to that directory on the server, making your apps unpack at that location.

2. App Base URL: The URL through which the apps can be found on the Web. By default this is the same as your
DHIS 2 URL suffixed by /apps. If you are installing apps through a different web server you need to provide the
full URL for that web server.

2.4. Installing Apps into DHIS 2


Apps can be installed by uploading a zip file into the App Manager. In Services → Apps, click on the App Store menu item.


The app can be uploaded by pressing the Browse button; after selecting the zip package, the file is uploaded
automatically and installed in DHIS 2. You can also browse through apps in the DHIS2 app store and download apps
from there. The DHIS2 app store allows the community to search for, review, comment on, request features for and
rate apps.

2.5. Launching Apps


After installation, your apps will be integrated with the menu system and can be accessed under services and from
the module overview page. They can also be accessed from the home page of the apps module. Click on an app in the
list in order to launch it.

2.6. Web-API for Apps


From version 2.14 there is also additional support for apps through the Web API. The /api/apps endpoint can be used
for installing, deleting and listing apps. The app key is derived from the name of the ZIP archive, excluding the file
extension.

You can read the keys for apps by listing all apps from the apps resource and look for the key property. To list all
installed apps in JSON:

curl -X GET -u user:pass -H "Accept: application/json" http://server.com/api/apps

You can also simply point your web browser to the resource URL:

http://server.com/api/apps

To install an app, the following command can be issued:

curl -X POST -u user:pass -F file=@app.zip http://server.com/api/apps

To delete an app, you can issue the following command:

curl -X DELETE -u user:pass http://server.com/api/apps/<app-key>

To force a reload of currently installed apps, you can issue the following command. This is useful if you added a file
manually directly to the file system, instead of uploading through the DHIS 2 user interface.


curl -X PUT -u user:pass http://server.com/api/apps

To let DHIS 2 serve apps from the Web API make sure to set the "App base URL" to point to the apps resource, i.e.:

http://server.com/api/apps

To set the apps configuration you can make a POST request to the config resource with a JSON payload:

{
"appFolderPath": "/home/dhis/config/apps",
"appBaseUrl": "http://server.com/api/apps"
}

curl -X POST -u user:pass -d @config.json http://server.com/api/apps/config

To restore the default app settings you can make a DELETE request to the config resource:

curl -X DELETE -u user:pass http://server.com/api/apps/config

Note that by default apps will be served through the apps Web API resource, and the file system folder will be
DHIS2_HOME/apps. These defaults should be fine for most situations.

2.7. Adding the DHIS 2 menu to your app


In order to maintain a uniform appearance within DHIS 2 it is possible to add your app's icon to the top menu of the
DHIS 2 user interface, which is where your app's icon will be placed.

The first step in adding the menu is to include the required style sheets and scripts. All JavaScript files are
found in /dhis-web-commons/javascripts/dhis2/ while the CSS files are found at /dhis-web-commons/
font-awesome/css/font-awesome.min.css and /dhis-web-commons/css/menu.css.

The following list provides a description of each file:

Scripts:
• jquery.min.js / jqLite / angular.element : One of the mentioned libraries needs to be present. DHIS2
employs a stripped-down version of jqLite that is present in Angular for the menu. This makes it compatible with
jqLite and jQuery.


• dhis2.translate.js : Translate script that translates menu text to your current dhis language setting
• dhis2.menu.js : Menu logic that deals with all the ordering, searching of menu items etc.
• dhis2/dhis2.menu.ui.js : Menu ui code that has all the menu user interface related code for scrolling, shortcuts,
HTML markup etc.
Stylesheets:
• font-awesome.min.css : Used for various icons in the menu.
• menu.css : The CSS used for the menu.

For an app that will run using the same URL structure as the normal DHIS2 apps, only the JavaScript files and style
sheets are required. If your app is running using a different URL structure than the default one, you will need to specify
a base URL before including the menu scripts. Including the scripts and style sheets would look something like the
following:

<!-- DHIS2 Settings initialization for a baseUrl that is used for the menu -->
<script>
window.dhis2 = window.dhis2 || {};
dhis2.settings = dhis2.settings || {};
dhis2.settings.baseUrl = 'dhis';
</script>

<!-- Menu scripts -->


<script type="text/javascript" src="./dhis-web-commons/javascripts/dhis2/dhis2.translate.js"></script>
<script type="text/javascript" src="./dhis-web-commons/javascripts/dhis2/dhis2.menu.js"></script>
<script type="text/javascript" src="./dhis-web-commons/javascripts/dhis2/dhis2.menu.ui.js"></script>

<!-- Stylesheets related to the menu -->


<link type="text/css" rel="stylesheet" href="./dhis-web-commons/font-awesome/css/font-awesome.min.css"/>
<link type="text/css" rel="stylesheet" media="screen" href="./dhis-web-commons/css/menu.css">

To clarify, the first part of the snippet initializes some variables. If you do not use any other DHIS2 libraries these will
not be set and therefore have to be set by you first. After that, the third line specifies the base URL where your
DHIS 2 instance is running on your web server. For example, dhis in this case means the server is running at http://
localhost:8080/dhis/. Note that you only have to specify the part after the web address, so if your instance is
running at http://www.example.com/myInstance/ you would only specify myInstance.

<!-- Example setting for myInstance -->


<script>
window.dhis2 = window.dhis2 || {};
dhis2.settings = dhis2.settings || {};
dhis2.settings.baseUrl = 'myInstance';
</script>

The above will only include the scripts necessary to show the menu. To actually make it show up there are
two possibilities. The first one is using a basic <div> element with an id attribute.

<div id="dhisDropDownMenu"></div>

An alternative is available when your application uses Angular: we have included a directive to show the menu. This would be used as follows:

<div d2-menu></div>


The element type in this case does not really matter, as long as you include the d2-menu directive. To be able to use the menu directive you also have to include it in your Angular app. The Angular module containing the directive is called d2Menu.

'use strict';

var appMenu = angular.module('appMenu',


['ngRoute',
'ngCookies',
'd2Menu']);

The minimum amount of code to show the menu is shown below. You could use this as a starting reference.

<!DOCTYPE html>
<html> <!-- ng-app="appMenu" -->
<head>
<title>Dhis2 Menu</title>

<!-- Stylesheets related to the menu -->


<link type="text/css" rel="stylesheet" href="./dhis-web-commons/font-awesome/
css/font-awesome.min.css"/>
<link type="text/css" rel="stylesheet" media="screen" href="./dhis-web-
commons/css/menu.css">
</head>

<body style="background-color: black;">

<div id="dhisDropDownMenu"></div>

<!-- DHIS2 Settings initialization for a baseUrl that is used for the menu -->
<script>
window.dhis2 = window.dhis2 || {};
dhis2.settings = dhis2.settings || {};
dhis2.settings.baseUrl = 'dhis';
</script>

<!-- Menu scripts -->


<script type="text/javascript" src="./dhis-web-commons/javascripts/jQuery/
jquery.min.js"></script>
<script type="text/javascript" src="./dhis-web-commons/javascripts/dhis2/
dhis2.translate.js"></script>
<script type="text/javascript" src="./dhis-web-commons/javascripts/dhis2/
dhis2.menu.js"></script>
<script type="text/javascript" src="./dhis-web-commons/javascripts/dhis2/
dhis2.menu.ui.js"></script>

</body>
</html>


Chapter 3. Setting up report functionality

3.1. Data sources for reporting

3.1.1. Types of data and aggregation


In the bigger picture of HIS terminology, all data in DHIS are usually called aggregate data, as they are aggregates (e.g. monthly summaries) of medical records or some kind of service registers reported from the health facilities. Aggregation inside DHIS however, which is the topic here, is concerned with how the raw data captured in DHIS (through data entry or import) are further aggregated over time (e.g. from monthly to quarterly values) or up the organisational hierarchy (e.g. from facility to district values).

3.1.1.1. Terminology
• Raw data refers to data that is registered into the DHIS 2 either through data entry or data import, and has not been
manipulated by the DHIS aggregation process. All these data are stored in the table (or Java object if you prefer)
called DataValue.
• Aggregated data refers to data that has been aggregated by the DHIS 2, meaning it is no longer raw data, but some
kind of aggregate of the raw data.
• Indicator values can also be understood as aggregated data, but these are special in the way that they are calculated
based on user defined formulas (factor * numerator/denominator). Indicator values are therefore processed data and
not raw data, and are located in the aggregatedindicatorvalue table/object. Indicators are calculated at any level of
the organisational hierarchy and these calculations are then based on the aggregated data values available at each
level. A level attribute in the aggregateddatavalue table refers to the organisational level of the orgunit the value
has been calculated for.
• Period and Period type are used to specify the time dimension of the raw or aggregated values, and data can be
aggregated from one period type to another, e.g. from monthly to quarterly, or daily to monthly. Each data value
has one period and that period has one period type. E.g. data values for the periods Jan, Feb, and Mar 2009, all
of the monthly period type can be aggregated together to an aggregated data value with the period Q1 2009 and
period type Quarterly.

3.1.1.2. Basic rules of aggregation

3.1.1.2.1. What is added together

Data (raw) can be registered at any organisational level, e.g. at a national hospital at level 2, a health facility at level 5, or at a bigger PHC at level 4. This varies from country to country, but DHIS is flexible in allowing data entry or data import to take place at any level. This means that orgunits that themselves have children can register data, sometimes the same data elements as their children units. The basic rule of aggregation in DHIS 2 is that all raw data is aggregated together, meaning data registered at a facility on level 5 is added to the data registered for a PHC at level 4.

It is up to the user/system administrator/designer to make sure that no duplication of data entry is taking place and that
e.g. data entered at level 4 are not about the same services/visits that are reported by orgunit children at level 5. NOTE
that in some cases you want to have duplication of data in the system, but in a controlled manner. E.g. when you have
two different sources of data for population estimates, both level 5 catchment population data and another population
data source for level 4 based on census data (because sum of level 5 catchments is not always the same as level 4
census data). Then you can specify using advanced aggregation settings (see further down) that the system should e.g.
not add level 5 population data to the level 4 population data, and that level 3,2,1 population data aggregates are only
based on level 4 data and does not include level 5 data.

3.1.1.2.2. How data gets added together

How data is aggregated depends on the dimension of aggregation (see further down).


Along the orgunit level dimension data is always summed up, simply added together. Note that raw data is never
percentages, and therefore can be summed together. Indicator values that can be percentages are treated differently
(re-calculated at each level, never summed up).

Along the time dimension there are several possibilities; the two most common ways to aggregate are sum and average.
The user can specify for each data element which method to use by setting the aggregation operator (see further down).
Monthly service data are normally summed together over time, e.g. the number of vaccines given in a year is the sum
of the vaccines given for each month of that year. For population, equipment, staff and other kind of what is often
called semi-permanent data the average method is often the one to use, as, e.g. 'number of nurses' working at a facility
in a year would not be the sum of the two numbers reported in the six-monthly staffing report, but rather the average
of the two numbers. More details further down under 'aggregation operators'.

3.1.1.3. Dimensions of aggregation


3.1.1.3.1. Organisational units and levels
Organisational units are used to represent the 'where' dimension associated with data values. In DHIS 2, organisational
units are arranged in a hierarchy, which typically corresponds to the hierarchical nature of the organisation or country.
Organisational unit levels correspond to the distinct levels within the hierarchy. For instance, a country may be
organized into provinces, then districts, then facilities, and then sub-centers. This organisational hierarchy would have
five levels. Within each level, a number of organisational units would exist. During the aggregation process, data is
aggregated from the lower organisational unit levels to higher levels. Depending on the aggregation operator, data may
be 'summed' or 'averaged' within a given organisational unit level, to derive the aggregate total for all the organisational
units that are contained within a higher level organisational unit level. For instance, if there are ten districts contained
in a province and the aggregation operator for a given data element has been defined as 'SUM', the aggregate total for
the province would be calculated as the sum of the values of the individual ten districts contained in that province.

3.1.1.3.2. Period
Periods are used to represent the 'when' dimension associated with data values. Data can easily be aggregated from
weeks to months, from months to quarters, and from quarters to years. DHIS 2 uses known rules of how these different
intervals are contained within other intervals (for instance Quarter 1 2010 is known to contain January 2010, February 2010 and March 2010) in order to aggregate data from smaller time intervals, e.g. weeks, into longer time intervals, e.g. months.

3.1.1.3.3. Data Elements and Categories


The data element dimension specifies 'what' is being recorded by a particular data value. Data element categories are actually degenerate dimensions of the data element dimension, and are used to disaggregate the data element dimension into finer categories. Data element categories, such as 'Age' and 'Gender', are used to record a particular
data element, typically for different population groups. These categories can then be used to calculate the overall total
for the category and the total of all categories.

3.1.1.4. Aggregation operators, methods for aggregation


3.1.1.4.1. Sum
The 'sum' operator simply calculates the sum of all data values that are contained within a particular aggregation matrix. For instance, if data is recorded on a monthly basis at the district level and is aggregated to provincial quarterly totals, all data contained in all districts for a given province and all months in the given quarter will be added together to obtain the aggregate total.

3.1.1.4.2. Average
When the average aggregation operator is selected, the unweighted average of all data values within a given aggregation matrix is calculated.

It is important to understand how DHIS 2 treats null values in the context of the average operator. It is fairly common for some organisation units not to submit data for certain data elements. In the context of the average operator, the average is calculated from the data values that are actually present (therefore NOT NULL) within a given aggregation matrix. If there are 12 districts within a given province, but only 10 of these have submitted data, the average aggregate will be based on the ten values that are actually present in the database, and will not take into account the missing values.

3.1.1.5. Advanced aggregation settings (aggregation levels)


3.1.1.5.1. Aggregation levels
The normal rule of the system is to aggregate all raw data together when moving up the organisational hierarchy, and
the system assumes that data entry is not being duplicated by entering the same services provided to the same clients at
both facility level and also entering an 'aggregated' (sum of all facilities) number at a higher level. This is to more easily
facilitate aggregation when the same services are provided but to different clients/catchment populations at facilities
on level 5 and a PHC (the parent of the same facilities) at level 4. In this way a facility at level 5 and a PHC at level
4 can share the same data elements and simply add together their numbers to provide the total of services provided
in the geographical area.

Sometimes such an aggregation is not desired, simply because it would mean duplicating data about the same
population. This is the case when you have two different sources of data for two different orgunit levels. E.g. catchment
population for facilities can come from a different source than district populations, and therefore the sum of the facility catchment populations does not match the district population provided by e.g. census data. If this is the case we would
actually want duplicated data in the system so that each level can have as accurate numbers as possible, but then we
do NOT want to aggregate these data sources together.

In the Data Element section you can edit data elements and for each of them specify how aggregation is done for each
level. In the case described above we need to tell the system NOT to include facility data on population in any of the
aggregations above that level, as the level above, in this case the districts have registered their population directly as
raw data. The district population data should then be used at all levels above and including the district level, while
facility level should use its own data.

3.1.1.5.2. How to edit data element aggregation


This is controlled through something called aggregation levels and at the end of the edit data element screen there is a
tick-box called Aggregation Levels. If you tick that one you will see a list of aggregation levels, available and selected.
Default is to have no aggregation levels defined, then all raw data in the hierarchy will be added together. To specify
the rule described above, and given a hierarchy of Country, Province, District, Facility: select Facility and District as
your aggregation levels. Basically you select where you have data. Selecting Facility means that Facilities will use
data from facilities (given since this is the lowest level). Selecting District means that the District level raw data will
be used when aggregating data for District level (hence no aggregation will take place at that level), and the facility
data will not be part of the aggregated District values. When aggregating data at Province level the District level raw
data will be used since this is the highest available aggregation level selected. Also for Country level aggregates the
District raw data will be used. Just to repeat, if we had not specified that District level was an aggregation level, then
the facility data and district data would have been added together and caused duplicate (double) population data for
districts and all levels above.

3.1.2. Data mart


The purpose of the data mart is to provide pre-processed data to external data analysis and reporting tools. The
data mart consists of two tables, aggregateddatavalues and aggregatedindicatorvalues in the DHIS 2 database. The
values in the data mart are aggregated versions of the raw data found in the datavalue table as well as calculated
indicator values. Aggregation can take place over time (e.g. from monthly data to aggregated quarterly values), or
along the organisation unit hierarchy levels (e.g. from PHU data to aggregated district totals). The data mart can store
all kinds of such aggregated values. The data mart is as such just a processed 'copy' of the data values; it can be emptied and regenerated at any time, and the tables are read-only. The metadata in the two data mart tables are referenced by internal identifiers, such as dataelementid and organisationunitid, which refer to tables like dataelement and organisationunit; see 'How to make use of the data mart in external tools' for more on this.
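
As a minimal sketch, resolving these identifiers to readable names could look like the query below (the join columns follow the table names mentioned above; verify them against your actual DHIS 2 schema):

-- Turn internal identifiers in the data mart into readable names
SELECT de.name AS dataelement,
       ou.name AS orgunit,
       adv.periodid,
       adv.value
FROM aggregateddatavalue adv
JOIN dataelement de ON adv.dataelementid = de.dataelementid
JOIN organisationunit ou ON adv.organisationunitid = ou.organisationunitid;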

3.1.2.1. The data mart export process


The DHIS2 Data Mart handles the aggregation of data across multiple dimensions (organisation unit hierarchy, orgunit
group hierarchy, time, etc) and can be controlled through this interface. Select the organisation unit levels which should


be aggregated, the start date and end date, and press "Start export". The data mart process will be executed in the
background, and a full report of the export process will be updated regularly so that you can determine the state of
the process. See the section on "Scheduling" in the data administration module for information on how this process
can be triggered to run automatically.

3.1.2.1.1. Data element categories in the data mart

Each data value for a data element has a reference to a category option combo, which is a combination of the
disaggregations for the data value, e.g. (male,<5y) or (In PHU, <1y). These disaggregations are exported as they are
to the data mart, and no aggregation is done on this dimension. See the data elements section for more on data element
categories and the resource tables section for more information on how to do aggregation on these categories.

3.1.2.1.2. Adding new data to an existing data mart

When you add new data to an existing data mart, the new values will be appended to the existing ones, so that the data mart grows for each new process if new selections (such as new periods) have been made. If any of the selected values are already in the data mart, the old values will be replaced by the newly generated ones.

3.1.3. Resource tables


Resource tables provide additional information about the dimensions of the data in a format that is well suited for
external tools to combine with the data mart tables. By joining the data mart with these resource tables one can easily
aggregate along the data element category dimension or data element/indicator/organisation unit groups dimensions.
E.g. by tagging all the data values with the category option male or female and providing this in a separate column 'gender', one can get subtotals of male and female based on data values that are collected for category option combinations like (male, <5) and (male, >5). See the Pivot Tables section for more examples of how these can be used.

orgunitstructure is another important table in the database that helps to provide the hierarchy of orgunits together with the data. By joining the orgunitstructure table with the data mart tables you can get rows of data values with the full hierarchy, e.g. on the form: OU1, OU2, OU3, OU4, DataElement, Period, Value (Sierra Leone, Bo, Badija, Ngelehun CHC, BCG <1, Jan-10, 32). This format makes it much easier for e.g. pivot tables or other OLAP tools to aggregate data up the hierarchy.
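
A sketch of such a join is shown below (the level columns idlevel1 ... idlevel4 are an assumption based on common DHIS 2 schemas; check the orgunitstructure table in your own database for the exact column names):

-- Attach the full orgunit hierarchy to each aggregated value
SELECT ous.idlevel1, ous.idlevel2, ous.idlevel3, ous.idlevel4,
       adv.dataelementid, adv.periodid, adv.value
FROM aggregateddatavalue adv
JOIN orgunitstructure ous ON adv.organisationunitid = ous.organisationunitid;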

3.1.4. Report tables


Report tables are defined, cross-tabulated reports which can be used as the basis of further reports, such as Excel Pivot
Tables or simply downloaded as an Excel sheet. Report tables are intended to provide a specific view of data which
is required, such as "Monthly National ANC Indicators". This report table might provide all ANC indicators for a
country, aggregated by month for the entire country. This data could of course be retrieved from the main datamart,
but report tables generally perform faster and present well defined views of data to users.

Important
It is therefore important to keep in mind that when the aggregation strategy of the system is set to "Batch", the
data for each report table must also be present in the data mart.

3.2. How to create report tables


To create a new report table, go to the Report tables section of the Reports module (Reports -> Report Table). Above
the list of standard reports, use the "Add report table" or "Add Dataelement Dimension Table" buttons. A regular report
table can be used to hold data on data elements, indicators or dataset completeness, while Dataelement dimension tables
are used to include data element categories in report tables. Creating the two types of table is done in the same way, the only exception being the choice of data.

To create a report table, you start by making some general choices for the table, the most important of which is the
crosstab dimension. Then, you choose which data elements, indicators, datasets or data element dimensions you want
to include. Finally you select which organisation units and time periods to use in the report table. Each of these steps is described in detail below.

3.2.1. General options

Cross tab dimensions

You can cross-tab one or more of the following dimensions: data element/indicator, orgunit, and period, which means
that columns will be created based on the values of the dimensions chosen, e.g. if indicators is selected you will get
column names in the table reflecting the names of the selected indicators.

For example, if you cross-tab on indicators and periods, the column headers will say "<indicator title> <period>". The
organisation units will be listed as rows. See screenshot for clarification:

If you cross-tab on indicators and organisation units, the column headers of the table will say "<indicator title>
<organisation unit>". Now the periods will be listed as rows. See screenshot for clarification:


Note that the options made here regarding crosstab dimensions may have consequences for what options are available
when using the report table as a data source later, for example for standard reports.

Sort order

Affects the rightmost column in the table, allowing you to sort it from low to high or high to low.

Top limit

Top limit allows you to set a maximum number of rows to include in the report table.

Include regression

This adds additional columns with regression values that can be included in the report design, e.g. in line charts.

3.2.2. Selecting data

Indicators/Data elements

Here you select the data elements/indicators that you want to include in the report. Use the group filter to more easily
find what you are looking for and double click on the items you want to include, or use the buttons to add/remove
elements. You can have both data elements and indicators in the same report.

Data sets

Here you select the data sets that you want to include in the report. Including a data set will give you data on the
data completeness of the given set, not data on its data elements. Double click on the items you want to include, or
use the buttons.

3.2.3. Selecting report parameters

There are two ways to select both what organisation units to include in a report, and what time periods should be
included: relative, or fixed. Fixed organisation units and/or periods means that you select the units/periods to include
in the report table when you create the report table. Using relative periods, you can select the time and/or units as


parameters when the report table is populated, for example when running a standard report or creating a chart. A combination is also possible, for example to add some organisation units to the report permanently while letting the users choose additional ones. Report parameters are discussed below. In general, using fixed organisation units and/or time periods is an unnecessary restriction.

Fixed Organisation Units

To add fixed organisation units, click "Toggle fixed organisation units". A panel will appear where you can choose
orgunits to always include in the report. If you leave it blank, the users select orgunits when running the report through
the use of report parameters. Use the drop down menu to filter organisation units by level, double click or use the
buttons to add/remove.

Fixed Periods

To add fixed periods, click "Toggle fixed periods". A panel will appear where you can choose periods to
always include in the report. If you leave it blank, the users select periods when running the report through the use
of report parameters. Use the drop down menu to choose period type (week, month, etc), the Prev and Next button to
choose year, and double click or use the buttons to add/remove.

Relative periods

Instead of using fixed/static periods like 'Jan-2010' or 'Q1-2010', more generic periods can be used to create reusable
report tables, e.g. for monthly reports the period 'Reporting month' will simply pick the current reporting month selected
by the user when running the report. Note that all relative periods are relative to a "reporting month". The reporting month is either selected by the user, or the current month is used. Here is a description of the possible relative periods:
• Reporting month:

Use this for monthly reports. The month selected in the reporting month parameter will be used in the report.
• Months/Quarters this year:

This will provide one value per month or quarter in the year. This is well suited for standard monthly or quarterly
reports where all months/quarters need to be listed. Periods that still have no data will be empty, but will always
keep the same column name.
• This year:

This is the cumulative so far in the year, aggregating the periods from the beginning of the year up to and including
the selected reporting month.
• Months/Quarters last year:

This will provide one value per month or quarter last year, relative to the reporting month. This is well suited for
standard monthly or quarterly reports where all months/quarters need to be listed. Periods that still have no data will
be empty, but will always keep the same column name.
• Last year:

This is the cumulative last year, relative to the reporting month, aggregating all the periods from last year.

Example - relative periods

Let's say we have chosen three indicators: A, B and C, and we have also chosen to use the relative periods 'Reporting
month' and 'This year' when we created the report table. If the reporting month (selected automatically or by the user)
is for example May 2010, the report table will calculate the values for the three selected indicators for May 2010 (=
the 'Reporting month') and the accumulated values for the three selected indicators so far in 2010 (= so far 'This year').

Thus, we will end up with six values for each of the organisation units: "Indicator A May 2010", "Indicator B May 2010"
"Indicator C May 2010", "Indicator A so far in 2010", "Indicator B so far in 2010" and "Indicator C so far in 2010".

Report parameters


Report parameters make the reports more generic and reusable over time and for different organisation units. These
parameters will pop up when generating the report table or running a report based on the report table. The users will
select what they want to see in the report. There are four possible report parameters, and you can select none, all, or
any combination.
• Reporting month:

This decides which month will be used when the system is choosing the relative periods. If the box is not checked,
the user will not be asked for the reporting month when the report is generated - the current month will then be used.
• Grand parent organisation unit:

Select the grand parent of all the orgunit children and grand children you want listed in the report. E.g. a selected region will trigger the use of the region itself, all its districts, and all their sub-districts.
• Parent organisation unit:

Select the parent of all the orgunit children you want listed in the report. E.g. a selected district will trigger the use
of the district itself and all its children/sub-districts.
• Organisation unit:

This triggers the use of this orgunit in the report. No children are listed.

Example - report parameters

Continuing with the example on relative periods just above, let's say that in addition to 'Reporting month', we have
chosen 'Parent organisation unit' as a report parameter when we created the report table. When we're running the report
table, we will be asked to select an organisation unit. Now, let's say we choose "Region R" as the organisation unit.
"Region R" has the children "District X" and "District Y".

When the report is run, the system will aggregate data for both "District X" and "District Y". The data will be aggregated
from the lowest level where they have been collected. The values for the districts will be aggregated further to give
an aggregated value for "Region R".

Thus, the report table will generate the six values presented in the previous example, for "District X", "District Y"
and "Region R".

3.2.4. Data element dimension tables


These tables enable the use of data element categories in report tables. There are two differences from regular report
tables. The first is that it is not possible to select crosstab dimensions, as the columns will always be the disaggregations
from the category combinations. The other is the actual choice of data. Only one category combination can be added
per report, and only data elements from the same category combo can be selected.

Subtotals and the total will also be included in the table, e.g. a gender (male, female) + EPI age (<1, >1) category combo would give the following columns: male+<1, male+>1, female+<1, female+>1, male, female, <1, >1, total.

Selecting data


Use the drop down menu to choose category combinations. The data elements using this category combination will be
listed. Double click to add to the report, or use the buttons.

3.2.5. Report table - best practices


To make the report tables reusable over time and across orgunits they can have parameters. Four types of parameters are allowed: orgunit, parent orgunit (for listing of orgunits in one area), grand parent orgunit and reporting month. As
a side note it can be mentioned that we are looking into expanding this to include reporting quarter and year, or to
make that period parameter more generic with regard to period type somehow. The ability to use period as a parameter
makes the report table reusable over time and as such fits nicely with report needs such as monthly, quarterly or annual
reports. When a report is run by the user in DHIS 2, the user must specify the values for the report tables that are linked
to the report. First the report table is re-generated (deleted and re-created with updated data), and then the report is run
(in the background, in Jasper report engine).

Report tables can consist of values related to data elements, indicators or data completeness, which is related to
completeness of reporting across orgunits for a given month. Completeness reports will be covered in a separate section.

There are three dimensions in a report table that identify the data: indicators or data elements, orgunits and periods. For
each of these dimensions the user can select which metadata values to include in the report. The user must select one
or more data elements or indicators to appear in the report. The orgunit selection can be substituted with a parameter,
either one specific orgunit or an orgunit parent (making itself and all its children appear in the report). If one or more
orgunits are selected and no orgunit parameter is used, then the report is static with regard to which orgunits to include,
which in most cases is an unnecessary restriction to a report.

Using relative periods

The period selection is more advanced, as it can, in addition to specific periods like Jan-09, Q1-08 and 2007, also contain what are called relative periods. As reports are usually run routinely over time, a specific period like Jan-09 is not very useful in a report. Instead, if you want to design a monthly report, you should use the relative period called Reporting
Month. Then you must also include Reporting Month as one of your report parameters to let the system know what
exactly is the Reporting Month on the time of report generation. There are many other relative periods available, and
they all relate to the report parameter Reporting Month. E.g. the relative period called So far this year refers to the
accumulative value for the year incl. the Reporting Month. If you want a trend report with multiple periods instead of one aggregated period, you can select e.g. 'Months this year', which would give you values for each month so far
in the year. You can do a similar report with quarters. The idea is to support as many generic report types as possible
using relative periods, so if you have other report needs, please suggest new relative periods on the mailing list, and
they might be added to the report table options.

Cross-tabbing dimensions

Cross tabbing is a very powerful functionality in report design, because the typical DHIS 2 data table, with references to period, data element/indicator and orgunit, makes more advanced report design very difficult: you cannot put e.g. specific indicators, periods or orgunits on specific columns. E.g. by cross-tabbing on the indicator dimension in an
indicator report table you will get the indicator names on the column headers in your report, in addition to a column
referencing orgunit, and another column referencing period. With such a table design you could drag and drop indicator
names to specific columns or chart positions in the iReport software. Similarly you can cross tab on orgunits or periods
to make their names specifically available to report design. E.g. by cross-tabbing on periods and selecting the two
relative periods 'Reporting month' and 'This year', you can design reports with both the last month and the accumulative
annual value for given month as they will be available as column headers in your report table. It is also possible
to combine two dimensions in cross-tabbing, e.g. period and indicator, which makes it possible to e.g. look at three
selected indicators for two specific relative periods. This would e.g. make it possible to make a table or chart based
report with BCG, DPT3 and Measles coverage, both for the last month and the accumulative coverage so far in the year.

All in all, by combining the functionality of cross tabbing, relative periods and report table parameters you should have
a tool to support most report scenarios. If not, we would be very happy to receive suggestions to further improvements to
report tables. As already mentioned, we have started to look at more fine-grained parameters for the period dimension as
the 'Reporting month' does not cover enough, or at least is not intuitive enough, when it comes to e.g. quarterly reports.


3.3. Report table outcome


When the report table is run, the system will calculate values for specified indicators/data elements/data sets, orgunits
and periods. The data will be presented in DHIS 2 in a table layout. The column headers will correspond to the cross-
tab dimension you have selected. An example report table showing ANC coverage for a district in The Gambia is shown below. Here the indicators and the periods are cross-tabbed, as can be seen from the column headers.

Above the table there are six buttons: five download buttons and one Back button. Clicking the Back button will simply take you back to the previous screen. The functions of the five download buttons are presented below the screenshot:

The five download buttons

• Download as Excel:

Downloads a generated Excel file you can open in Excel.

• Download as CSV:

Downloads a generated .csv file. CSV stands for Comma Separated Values; it is a text file with the file ending .csv. Each line in the file corresponds to a row in the table, while the columns are separated with semicolons (;). The file can be opened in a text editor as well as in a spreadsheet program (such as Excel).

• Download as PDF:

Downloads a generated PDF file. The data will be presented in a similar layout as the generated table you are already
viewing in DHIS 2.

• Download as Report:

Downloads a "styled" PDF file. In addition to present the data in a table layout, this file also presents a chart, showing
the aggregated data from all the chosen periods and the parent organisation unit chosen for the report table. The
report is generated using the Jasper report engine.

• Download as JRXML:

Downloads the design file for the generated Report described in the previous bullet. The design file (with the file
ending .jrxml) can be opened in the Jasper iReport Designer software. If you plan to design standard reports, this
is the starting point.

3.4. Standard reports

3.4.1. What is a standard report?


A standard report is a manually designed report that presents data in a manually specified layout. Standard reports can
be based either on report tables or SQL queries. Both approaches are described in the following sections. The main
advantage of using report tables is that of simplicity - no special development skills are required. In cases where you
have special requirements or need to utilize additional parts of the DHIS database, beyond the data mart, you might want
to use a SQL based standard report. In any case you will be able to utilize report parameters in order to create dynamic
reports. The following guide will use the report table approach, while the SQL approach is covered towards the end.
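
As a minimal illustration of the SQL approach, the query behind a SQL based standard report could look like the sketch below (the table and column names follow the data mart description earlier in this chapter and should be checked against your own database):

-- One row per orgunit with the total of the aggregated values
SELECT ou.name AS organisationunit,
       SUM(adv.value) AS total
FROM aggregateddatavalue adv
JOIN organisationunit ou ON adv.organisationunitid = ou.organisationunitid
GROUP BY ou.name
ORDER BY ou.name;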


3.4.2. Designing Standard reports in iReport


Jasper iReport Designer is a tool for creating reports that can be used as Standard Reports in DHIS 2. The tool allows
for the creation of standard report templates that can easily be exported from DHIS 2 with up to date data. The process
of creating reports involves four major steps:
1. A report table must be created in DHIS 2 with the indicators/data elements/datasets to be used in the report.
2. You have to run the report table and download the design file (Click the "Download as JRXML" button).
3. Open the downloaded .jrxml file using the free software Jasper iReport Designer to edit the layout of the report.
4. The edited report can then be uploaded to DHIS 2 to be used as a standard report.

If you want to preview your report during the design in iReport, you actually have to upload your file to DHIS 2 to
see how it looks.

These four steps will be described in detail in the coming sections. In general, when you are making standard reports
you should have a clear idea of how it should look before you even make the report table, as how the report table
is designed has implications for how the report can be formatted in iReport. For example, what crosstab dimensions
are selected in the report table has consequences for what crosstabs are available for the standard report, and it has
consequences for what types of charts you can make.

3.4.2.1. Download and open the design file

Note: If you have not created a report table yet, you have to do so. See section "How to create report tables" to do so.

Locate your desired report table and run it by clicking the green circle with a white arrow inside. When the report is
shown, click the "Download as JRXML" button to download the design file. Then open that file in the Jasper iReport
Designer software.

3.4.2.2. Editing the report

You are now ready to edit the layout of the report. The main iReport window consists of a "Report Inspector" to the left,
the report document in the middle, a "Palette" area on the upper right hand side and a "Properties" area on the lower
right hand side. The "Report Inspector" is used for selecting and examining the various properties of the report, and
when selecting an item in the inspector, the "Properties" panel changes to display properties relating to the selection.
The "Palette" is used for adding various elements, e.g. text boxes, images and charts to the document.


Note: If you cannot see the Palette or Properties sidebar, you can enable them from the menu item called "Window"
on the menu bar.

The iReport document is divided into seven main bands, separated by layout lines (the blue lines). These lines are used to decide how big each of the areas should be on the report.

The areas all have different purposes:


• Title - area for the title of the report
• Page header - area for the page header
• Column header - area for column headers (for the table)
• Detail 1 - area where the actual report data will be placed
• Column footer - area to make footer of the table
• Page footer - area for the page footer
• Summary - elements in this area will be placed at the end of the report

By default you will see that only the Title, Column Header and the Detail 1 bands have data. For most reports this is
OK. The Title band is suitable for a title and e.g. a chart. Data fields entered into the Detail 1 area will be iterated over
to create a table. For example, if a field called "dataelementname" is placed in the Detail 1 band, all data elements in
the report table will be listed here. We'll come back to data fields management just a little below.

The unused bands in the report are shrunk to add more space for your report data. You can however increase/decrease
the band height as you like. There are two ways to do that. The first way is simply to drag the blue band-line as shown
below.

The other way to adjust the band height is to select a band in the "Report Inspector", and then adjust the "Band height"
value in the "Detail 1 - properties" area in the lower right corner.

As the fields are already present on the report, you probably don't want to do anything other than fix the layout and drag
fields around. You can also resize the fields by dragging the side, top or bottom lines. If you want to change the text
in the column headers, you simply double click the field and change the text.

To add a field to the table, we simply drag it to the Detail 1 band from the "Report Inspector". The column header
will be added automatically.

By double clicking the box, the text can be edited. The format of the text, such as size, font and alignment, can be
adjusted with the tools above the document.

NOTE: Fields starting with "$F" represent values that are retrieved from the database every time the report is run.
The values here will vary, so do not change these fields unless you want a static value here!

3.4.2.3. Text

There are two types of text in iReport: «Text labels» and «Text fields» (data fields). They work in different ways,
and should be used for different purposes. The main point is that text fields are just placeholders that will be filled


with the correct text from the report table when the report is run, while text labels will stay the way they are when
the report is run.

3.4.2.3.1. Static text

Static text elements are plain text labels that can be edited normally. There are two ways to edit text labels:
• By double clicking in the text box
• By using the Static text properties in the Properties panel

3.4.2.3.2. Text fields

Text fields are formulas that will be filled from the report table when the report is run. Unlike static text, these cannot
be edited in a normal way. However, they can be manipulated in various ways to ensure that the desired output will
be produced. There are three ways to edit the text fields:

• By right clicking on the text box and selecting Edit expression


• By double clicking the text field (not recommended, as this will not bring up the expression editor)
• By using the Text field properties in the Properties panel

Text fields can represent either numbers or text, so that they can be used both for showing for example names of district
or for numeric values. It is therefore important that the Expression class, seen in the Text field properties, matches the Text
field expression. For the default text fields in the .jrxml file downloaded from DHIS 2 this is not a problem, but it is
important when making new text fields. The two most important Expression classes are java.lang.Double for numbers
and java.lang.String for text.

3.4.2.3.2.1. Example

For example, let us say you have a quarterly report where you would like to add a new column with the yearly total.
You therefore add a new Static text element to the column header band, and a Text field to the details band. By default,
new Text fields are set to java.lang.String (text). However, the yearly total column will be filled with numbers. We
therefore have to change the Expression class for the new text field to java.lang.Double:

When we edit the text field expression, we see the Expression editor window with all the available columns from the
report table. We can see here that each of these is marked with its type - text or number. What we need to
make sure of is therefore that the expression class we choose for the text field matches the actual expression.


3.4.2.4. Filtering the table rows

In the default table exported from DHIS 2, there are some rows that it might be better to leave out of the table, and
some that it would be preferable to have at the end. For example, when making a table based on a report table with the
«parent organisation unit» parameter, the default table might have a row with the national level somewhere in between
all the regions. In iReport, this can be changed so that the «parent organisation unit» appears at the bottom of the table.
This involves two steps that will be explained below. Note that this will not work where there is only one organisation unit, and it is therefore most useful when using the «parent organisation unit» or «grand parent organisation unit»
parameters in the report table.

3.4.2.4.1. Hiding the «parameter organisation unit» from the table

We exclude the "parameter organisation unit" from the table by using a property in the Details band called "Print when
expression". To set a Print when expression, start by selecting the Detail band in the Report inspector, then edit the
Print when expression in the properties panel.


The Expression editor window should now appear. What we must do is to create an expression that checks if the row
being generated is the row with the organisation unit given as a parameter. The report table contains a column that we
can use for this called organisation_unit_is_parent. To exclude the row with the parameter organisation unit, double
click on organisation_unit_is_parent in the list to copy it to the expression area, then add .equals("No") at the end
so that the code is:

$F{organisation_unit_is_parent}.equals("No")

This tells the report engine to only print table rows where the organisation unit is not the parent organisation unit.

3.4.2.4.2. Putting the "param organisation unit" at the bottom of the table

Instead of removing the "param organisation unit" from the table entirely, it is also possible to put it at the bottom (or
top) of the table. This is done by using the sort functionality explained in the next section, and choosing to sort first by
"organisation_unit_is_parent". Other sorting options can be added in addition to this, for example to make a list where
the param organisation unit is at the bottom of the table, with the other organisation units listed alphabetically above it.

3.4.2.4.3. Hiding other rows

Using the expression editor it is also possible to exclude other rows from the table, in addition to the parent organisation
unit as was explained above. In Ghana, for example, all regions have a «fake district» which is the name of the region
in square brackets. This can also be excluded from the table using the Print when expression that was introduced above.
To to this, follow the instructions above to bring up the Expression editor window. Then, we use Java expressions to
test whether or not the row should be hidden.


3.4.2.4.3.1. Example - removing rows with organisation units starting with [


($F{organisationunitname}.charAt( 0 ) != '[')

This makes the report skip any rows where the first character of the organisation unit name is [.

It is also possible to combine several of these expressions. To do this we put each expression in parentheses with the two characters && in between. For example, to make a table that leaves out both organisation units whose name starts with [ and the parent organisation unit, we can use the following expression:

($F{organisationunitname}.charAt(0) != '[') && $F{organisation_unit_is_parent}.equals("No")

3.4.2.5. Sorting

Often you will be making reports where the first column contains organisation unit names. However, it can be a problem that the list of organisation units is not sorted alphabetically. This can be fixed in iReport through a few simple steps.

In the report inspector, right click on the name of the report (by default this is dpt) and select Edit query.

A Report query window will appear. Click on the Sort options button.


A Sorting window as shown below will appear. Here, we can add our sorting options. Click the Add field button. Another
small window will show up, with a drop down menu where you can choose Sort by organisationunitname to have the
table sorted alphabetically by name.

Click OK - Close - OK to close the three windows. The table should now be sorted.


3.4.2.6. Changing indicator/data element names


By default, the reports from DHIS 2 use the short names for indicators and data elements in reports and charts. These are not always very meaningful for third parties, but with some work they can be given custom names
through iReport. This is useful for example if you are making a report with indicators as rows and periods as column,
or for charts with indicators.

To change the names of an indicator or data element, we have to edit its «expression» or formula, for example by right
clicking the text box and choosing Edit expression to bring up the Expression editor.

Next, we have to insert some Java code. In the following example, we will be replacing the shortname of three indicators
with their proper names. The code searches for the shortname, and then replaces it with a proper name.

($F{indicatorname}.equals("Bed Util All")) ? "Bed Utilisation - All Wards" :
($F{indicatorname}.equals("Bed Util Mat")) ? "Bed Utilisation - Maternity" :
($F{indicatorname}.equals("Bed Util Ped")) ? "Bed Utilisation - Paediatric" :
$F{indicatorname}

From this, we can see a pattern that is reusable for more general cases.
• For each indicator or data element we want to rename, we need one line
• Each line is separated from the next by a colon (:)
• We finish the expression with a «regular» line that returns the name unchanged

Each line has the same format: the first string (e.g. "Bed Util All") is the shortname to match, while the second (e.g. "Bed Utilisation - All Wards") is what we want to insert instead.

The same expressions can be used for example when having indicator names along the category axis of a chart.

3.4.2.7. Adding horizontal totals


By using the expression editor, it is possible to add a column to the table with totals for each row. In the following
example, we will make a table with three months as columns as well as a column with the totals for the three months.

We start by dragging a text label into the table header and changing its text to "Total", and dragging a text field into
the details row.

As was discussed in the section on "Text field", we have to change the properties of the new text field so that it can
display numbers. To do this, change the "Expression class" in the properties panel to "java.lang.Double".

Right click the text field and choose "Edit Expression". This will bring up the "Expressions editor". As the expression,
we want to sum up all the columns. In this case we have three value expressions we want to sum up: "September",
"October 2010", "November 2010". The name of these fields will vary depending on the crosstab dimension you have
chosen in the report table. In our case, the expression we make is "$f{September}+$f{October 2010}+$f{November
2010}":

Each row of the table now has a totals column to the right.
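
Note that straight addition of java.lang.Double fields will fail if one of the period fields is null (i.e. there is no data for that month). A defensive variant of the expression, sketched here with the same hypothetical field names, substitutes zero for missing values:

($F{September} == null ? 0d : $F{September})
 + ($F{October 2010} == null ? 0d : $F{October 2010})
 + ($F{November 2010} == null ? 0d : $F{November 2010})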

3.4.2.8. Groups of tables


There are cases when it can be useful to have several tables in one report. This can be done using Report groups. Using this functionality, one can for example create a report with one table for each indicator, or one table for each organisation unit. In the following, we will go through the steps needed to make a report with three indicators, each represented in
one table. It is important that the report table does not crosstab on indicators when we want to make groups of tables
based on indicators.

In our example, the .jrxml file downloaded from DHIS 2 will by default have one column for organisation unit and one for indicators (assuming we have chosen periods as the only crosstab dimension). We start by removing the indicator column, since it is not needed in our case, and realign the other fields to fit the report.


Next, we create our Report group. Go to the report inspector, right click on the report name (dpt is the default) and
choose Add Report Group.

A window will appear, with a report group wizard. Select a name for the group, in this case we choose «Indicator».
In the drop down menu, we can select what columns in the report table we want the groups to be based on. So, if we
wanted one table for each organisation unit, we would choose organisation unit name as the report object to group
according to. However, since we are grouping by indicators in this example, we choose indicatorname. Then click next.

The next step is to select whether or not we want a separate Group header and Group footer band for each report group.
In this case, we choose to include both. Click Finish, and the group bands should appear in the report.

If you upload and run the report, it will now create one table for each indicator. However, it will not look very good
as there will be no header row over each table - only one header at the top of each page. Also, there is no indication
as to which table is showing which indicator. In the following, we will fix this.


Instead of having the title row in the Column header band, we can move it to the Group header. This will make the
heading show up above each individual table. Furthermore, we can add a heading to each table with the name of the
indicator.

Move the column headers from the Column header band to the Indicator group header band.

Next, add a text field to the Indicator group header band, and edit its expression to display the indicator name.

The report should now have three tables, one for each indicator. Each table will have a heading with the name of the
indicator, and also a table header row.

3.4.2.8.1. Sorting and grouping

When using grouping, some precautions must be taken with regards to sorting. Notably, when adding sorting
parameters, whatever parameter is used as basis for the grouping must come first. Thus if you are grouping the report
by indicator, and want to sort the organisation units alphabetically, you have to choose to sort first by indicator, then by
organisation unit name as shown below. For instructions on how to add sorting, see the sorting section above.


3.4.2.9. Charts

By default, a 3D bar chart is included in the .jrxml file that is downloaded from DHIS 2. This is set up so that only
data from the «parameter organisation unit» (often the parent or grand parent) is used. Usually, this is a good solution.
Since it is the default, we will start by looking at bar charts, before looking at line charts.

3.4.2.9.1. Bar charts

Bar charts are the default chart type in DHIS 2. In this section, we will look at how to make a bar chart like the one
above, comparing the value of one indicator in several districts. To edit the default chart in iReport, right click on it
and choose Chart data.


A window will appear. By default, the Filter expression is filled in so that only data for the parent organisation unit
will be displayed. If for some reason you do not want this, simply delete the text in the text box. In this case we do
NOT want the filter, as we are making a chart showing a comparison across districts. To continue, click the details tab.
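
For reference, the default filter expression that restricts the chart to the parameter organisation unit has the following form (deleting it removes the restriction):

$F{organisation_unit_is_parent}.equals("Yes")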

Under details, you see the list of series for the chart. By default, one series is created per crosstab column. In this case,
we are looking at data for one indicator for the whole of 2010, for a number of districts. The indicator is along the
crosstab dimension.


To make changes to a series, select it and click Modify. Another window will appear with four areas that can be edited. The first three are required, but it is sufficient to add an empty quote («») in one of the first two.

The first box is a text field where the name of the series can be inserted or edited. This is the field that will be used
to fill the text in the legend box (shown below).


However, if you want to have the name of each bar along the x-axis of the chart instead of using the legend, this can
be done by adding whatever text you want to present in the Category expression field, or by inserting an expression
to have it filled automatically when the report is run. In this case, we want to have one bar for each organisation unit.
We therefore edit the category expression by clicking on the button to the right.

As the expression, we chose organisationunitname, as shown below.
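The resulting category expression is then simply a reference to that report table field:

$F{organisationunitname}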

When we are finished editing the series, we click OK, then Close to close the Chart Details window.


If you add a good description in the Category expression area, you can leave out the legend box. This is done in the
Chart properties panel of iReport, where you can also edit many other details of the chart.

We can also add a title to the chart, for example the name of the indicator. This is also done in the Chart properties
panel, under Title expression.

The Expression editor window will appear, where you can enter the title. Note that the title must be in quotes, as
shown below.
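For example, a fixed title can be entered as a quoted string (the indicator name here is illustrative):

"ANC 1 Coverage"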


The chart is now ready.

3.4.2.9.2. Line charts


Line charts can be useful in many circumstances. However, to make line charts the report data (report table) must be
suited for it. Thus, if you want to make a line chart, it is important that the report table does not have periods in the
crosstab dimension. Examples where this is useful are a report for a single organisation unit with one
or more indicators, or a report with one indicator and one or more organisation units.

Below, we will go through the steps needed to make a report with a line chart showing the development of three
indicators over one year, for one organisation unit. We start by making a report table with one organisation unit, the
three indicators, and the months of the year, keeping periods out of the crosstab dimension.

When we open the resulting .jrxml-file in iReport, the default bar chart is included. Since we want to make a line chart,
we delete this chart and drag a new chart element into the report from the Palette panel.


As soon as we drag the Chart element into the report, a window will appear. We choose the Line chart, as shown below.

A chart wizard will appear. Click next in the first step, then Finish in the next - we will add the data later.


Next, adjust the size and position of the chart in your report. Then, we will add one data series for each of our
three indicators. Right-click on the chart and choose Chart data. If you are making a chart with one indicator and
several organisation units, you probably want to make a filter expression so that only data from the parameter/parent
organisation unit is used in the chart. To do this, add this line to the Filter expression area:

$F{organisation_unit_is_parent}.equals("Yes")


In our example, we only have one organisation unit, so this is not necessary. Next, click the details tab to see a list of
the series in the chart. For now, this list is empty, but we will add one series for each of our three indicators. To add
a series, click the Add button.


In the window that appears, enter the name of the first of the indicators in the Series expression window. Remember
to put the name in quotes. In the category expression (along the x-axis) we want the months, so we use the button next
to the field to open the Expression editor and add periodname.


In the value expression, we add the actual data values for our first indicator. Use the Expression editor again to do this.
When we are finished, the window should contain expressions like the ones sketched below, with names according to the indicator.
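As a sketch, the three expressions for the first series could look like this (the series name and the value field name are illustrative; the value field must match the indicator's column in your report table):

Series expression: "ANC 1 Coverage"
Category expression: $F{periodname}
Value expression: $F{anc1coverage}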

You can then click OK to close the window. Follow the same steps to add a series for the other indicators.


Close the window, and the data for the line chart should be ready. However, some additional adjustments might be
needed - most of these can be found in the Line chart properties panel. For example, when making a month by month
chart as we have in this example, there is often not enough space for the month names along the category axis. This can be
fixed by rotating the labels, for example by -40 degrees, using the Category Axis Tick Label Rotation property.

Many other options are available to give the chart the desired look.


3.4.2.10. Adding the Report to DHIS 2


We can now switch to DHIS 2 and import our report. Go to the Report Module in DHIS 2, and select "Standard Report".
In the "Standard Report" screen, click "Add new", or edit an existing one.

In the following screen, there are several actions we need to take. First, enter a name for the new "Standard Report".
Second, for design, click "Choose File" and find the .jrxml-file you have edited in iReport. Then we select the report
table that we used as a basis for the report in iReport. Click add, and it should move to the "Selected report tables"
area. Finally, click save.

The report is now available as a "Standard Report" in DHIS 2:

3.4.2.11. Some final guidelines


• Use the same version of iReport as the version of JasperReports used by DHIS 2. See the About page in DHIS 2 for the
Jasper version in use.
• Use report tables with cross tab dimensions as your data source for your report designs. This will make it a lot easier
to design reports where you need to put specific indicators, periods, or orgunits on columns.
• Learn from others; there are many DHIS 2 report designs for Jasper on Launchpad, see http://bazaar.launchpad.net/~dhis2-devs-core/dhis2/trunk/files/head:/resources/

3.4.3. Designing SQL based standard reports


A standard report might be based on SQL queries. This is useful when you need to access multiple tables in the DHIS
database and do custom selects and joins.

- This step is optional, but handy when you need to debug your reports and when you have direct access to the database
you want to use. Click on the "report datasources" button, "New", "Database JDBC connection" and click "next". In
this window you can give your connection a name and select the JDBC driver. PostgreSQL and MySQL should come


included in your iReport. Then enter the JDBC connection URL, username and password. The last three refer to your
database and can be retrieved from your DHIS configuration file (hibernate.properties). Click "save". You have now
connected iReport to your database.

- Go to standard reports and click "add new", then "get report template". Open this template in iReport. This template
contains a series of report parameters which can be used to create dynamic SQL statements. These parameters will be
substituted based on the report parameters which we will later select and include in the standard report. The parameters
are:
• periods - string of comma-separated identifiers of the relative periods
• period_name - name of the reporting period
• organisationunits - identifier of the selected organisation units
• organisationunit_name - name of the reporting organisation unit
• organisationunit_level - level of the reporting organisation unit
• organisationunit_level_column - name of the corresponding column in the _orgunitstructure resource table

These parameters can be included in SQL statements using the $P!{periods} syntax, where "periods" represents the
parameter.
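Note that the $P!{...} parameters are substituted literally into the SQL text before the query is executed. For example, if the selected relative period resolves to the period identifier 123456 (an illustrative value), the clause

where periodid=$P!{periods}

is executed as

where periodid=123456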

- To create a SQL query in iReport, click on the "report query" button. Write or paste your query into the textarea.
An example SQL query using parameters which will create a report displaying raw data values at the fourth level in
the org unit hierarchy is:

select district.name as district, chiefdom.name as chiefdom, ou.name as facility,
bcg.value as bcg, yellowfever.value as yellowfever, measles.value as measles
from organisationunit ou
left outer join _orgunitstructure ous
on (ou.organisationunitid=ous.organisationunitid)
left outer join organisationunit district
on (ous.idlevel2=district.organisationunitid)
left outer join organisationunit chiefdom
on (ous.idlevel3=chiefdom.organisationunitid)
left outer join (
select sourceid, sum(cast(value as double precision)) as value
from datavalue
where dataelementid=359706
and periodid=$P!{periods}
group by sourceid) as bcg on bcg.sourceid=ou.organisationunitid
left outer join (
select sourceid, sum(cast(value as double precision)) as value
from datavalue
where dataelementid=35
and periodid=$P!{periods}
group by sourceid) as yellowfever on yellowfever.sourceid=ou.organisationunitid
left outer join (
select sourceid, sum(cast(value as double precision)) as value
from datavalue
-- illustrative identifier; substitute the id of your measles data element
where dataelementid=359707
and periodid=$P!{periods}
group by sourceid) as measles on measles.sourceid=ou.organisationunitid
where ous.level=4
and ous.$P!{organisationunit_level_column}=$P!{organisationunits}
order by district.name, chiefdom.name, ou.name;

Notice how the parameters are used in the query, along with SQL joins of resource tables in the DHIS database.

- Finally, back in the add new report screen, we click on "Use JDBC data source". This enables you to select any
relative period and report parameters for your report. Relative periods are relative to today's date. Report parameters
will cause a prompt during report creation and makes it possible to dynamically select organisation units and periods
to use for your report during runtime. For the example above, we must select "reporting month" under relative periods
and both "reporting month" and "organisation unit" under report parameters. Click save. This will redirect you to the
list of reports, where you can click the green "create" icon next to your report to render it.

3.4.4. Designing HTML based standard reports


A standard report can be designed using purely HTML and JavaScript. This requires a little bit of development
experience in the mentioned subjects. The benefit of HTML based standard reports is that it allows for maximum


flexibility. Using HTML you can design exactly the report you want, positioning tables, logos and values on the page
according to your design needs. You can write and save your standard report design in a regular text file. To upload
your HTML based standard report to DHIS 2 do the following:
• Navigate to standard reports and click "Add new".
• Give the report a name.
• Select "HTML report" as type.
• If you want to you can download a report template by clicking on "Get HTML report template".
• Select desired relative periods - these will be available in JavaScript in your report.
• Select report parameters - these will be available in JavaScript in your report.

The report template, which you can download after selecting report type, is a useful starting point for developing
HTML based standard reports. It gives you the basic structure and suggests how you can use JavaScript and CSS in
the report. JavaScript and CSS can easily be included using standard script and style tags.
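For example, a minimal skeleton could look like this (the style and logic are illustrative):

<script type="text/javascript">
// report logic goes here
</script>
<style type="text/css">
.title { font-weight: bold; }
</style>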

If you selected relative periods when creating the standard report you can access these in JavaScript like this:

var periods = dhis2.report.periods; // An array with period identifiers
var period = periods[0];

If you selected the organisation unit report parameter when creating the standard report you can access the selected
organisation unit in JavaScript like this:

var orgUnit = dhis2.report.organisationUnit; // An object
var id = orgUnit.id;
var name = orgUnit.name;
var code = orgUnit.code;

When designing these reports you can utilize the analytics Web API resource in order to retrieve aggregated data in
JavaScript. Have a look in the Web API chapter in this guide for a closer description. As a complete, minimal example
you can retrieve analytics data after the report has been loaded and use that data to set the inner text of an HTML
element like this:

<script type="text/javascript">
$( document ).ready( function() {
  $.get( "../api/analytics?dimension=dx:FnYCr2EAzWS;eTDtyyaSA7f&dimension=pe:THIS_YEAR&filter=ou:ImspTQPwCqd",
    function( json ) {
      $( "#bcg" ).html( json.rows[0][2] );
      $( "#fic" ).html( json.rows[1][2] );
  } );
} );
</script>

<div>BCG coverage: <span id="bcg"></span></div>
<div>FIC coverage: <span id="fic"></span></div>

A few other tips: To include graphics you can convert an image to SVG and embed that SVG content directly in the
report - DHIS 2 is based on HTML 5 where SVG tags are valid markup. To include charts and maps in your report you
can use the charts and maps resources in the Web API. You can use the full capability of the Web API from JavaScript
in your report - it may be useful to read through the Web API chapter to get an overview of all available resources.
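As a sketch, a chart rendered by the Web API can be embedded with a plain image tag (the chart identifier is illustrative and must refer to a chart in your own instance):

<img src="../api/charts/abc123def45/data"/>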


Chapter 4. Infrastructure

4.1. Release process


Checklist for release.
• Tag trunk source code with new release. First temporarily add a dependency to dhis-web in the root pom.xml:

<module>dhis-web</module>

Use the Maven versions plugin with:

mvn versions:set

This will prompt you to enter the version. Remove the dhis-web dependency. Update the application cache manifests
in the various apps to the new version. Commit the changes to trunk.
• Push a release branch to Launchpad, e.g. with:

bzr push https://code.launchpad.net/~dhis2-devs-core/dhis2/2.18


• Tag source code with SNAPSHOT release.
• Enable email notifications for release branch.
• Create a Jenkins build job for the release WAR file.
• Create automatic copy job from Jenkins to dhis2.org.
• Create automatic update of apps.dhis2.org/demo and apps.dhis2.org/dev systems.
• Update the database and WAR file on apps.dhis2.org/demo and apps.dhis2.org/dev instances.
• Create a new DHIS 2 Live package on dhis2.org and place it in download/live directory. Only the WAR file must
be updated. An uncompressed Live package is located on the demo server at:

/home/dhis/dhis-live-package

Replace the uncompressed WAR file with the new release. Make a compressed Live archive and move to /download/
live directory.
• Create Javadoc with:

mvn javadoc:aggregate

The documentation will be placed in the target folder. Zip it, upload it to dhis2.org, unzip it and place it in the download directory.
• Upload sample database to dhis2.org and place it in download/resources directory.
• Update download page at www.dhis2.org/downloads with links to new Live package, WAR file, source code branch
page and sample data including version.
• Write and send release email.


Appendix A. R and DHIS 2 Integration

A.1. Introduction
R is a freely available, open source statistical computing environment. R refers both to the computer programming
language and to the software which can be used to create and run R scripts. There are numerous sources on the
web which describe the extensive set of features of R.

R is a natural extension to DHIS2, as it provides powerful statistical routines, data manipulation functions, and
visualization tools. This chapter will describe how to setup R and DHIS2 on the same server, and will provide a simple
example of how to retrieve data from the DHIS2 database into an R data frame and perform some basic calculations.

A.2. Using ODBC to retrieve data from DHIS2 into R


In this example, we will use a system-wide ODBC connector which will be used to retrieve data from the DHIS2
database. There are some disadvantages with this approach, as ODBC is slower than other methods and it does raise
some security concerns by providing a system-wide connector to all users. However, it is a convenient method to
provide a connection to multiple users. The R package RODBC will be used in this case. An alternative
would be the RPostgreSQL package, which can interface directly through the PostgreSQL driver, as described
in Section A.4, “Mapping with R and PostgreSQL”.

Assuming you have already installed R using the procedure in the previous section, invoke the following command to
add the required libraries for this example.

apt-get install r-cran-rodbc r-cran-lattice odbc-postgresql

Next, we need to configure the ODBC connection. Create and edit a file called odbc.ini, using the following template
as a guide and adjusting it to suit your local situation.

[dhis2]
Description = DHIS2 Database
Driver = /usr/lib/odbc/psqlodbcw.so
Trace = No
TraceFile = /tmp/sql.log
Database = dhis2
Servername = 127.0.0.1
UserName = postgres
Password = SomethingSecure
Port = 5432
Protocol = 9.0
ReadOnly = Yes
RowVersioning = No
ShowSystemTables = No
ShowOidColumn = No
FakeOidIndex = No
ConnSettings =
Debug = 0

Finally, we need to install the ODBC connection with odbcinst -i -d -f odbc.ini

From the R prompt, execute the following commands to connect to the DHIS2 database.

> library(RODBC)
> channel<-odbcConnect("dhis2")#Note that the name must match the ODBC connector name
> sqlTest<-c("SELECT dataelementid, name FROM dataelement LIMIT 10;")

> sqlQuery(channel,sqlTest)
name
1 OPD First Attendances Under 5
2 OPD First Attendances Over 5
3 Deaths Anaemia Under 5 Years
4 Deaths Clinical Case of Malaria Under 5 Years
5 Inpatient discharges under 5
6 Inpatient Under 5 Admissions
7 Number ITNs
8 OPD 1st Attendance Clinical Case of Malaria Under 5
9 IP Discharge Clinical Case of Malaria Under 5 Years
10 Deaths of malaria case provided with anti-malarial treatment 1 to 5 Years
>

It seems R is able to retrieve data from the DHIS2 database.

As an illustrative example, let's say we have been asked to calculate the relative percentage of OPD male and female
under 5 attendances for the last twelve months. First, let's create an SQL query which will provide us the basic
information we require.

OPD<-sqlQuery(channel,"SELECT p.startdate, de.name as de,
sum(dv.value::double precision)
FROM datavalue dv
INNER JOIN period p on dv.periodid = p.periodid
INNER JOIN dataelement de on dv.dataelementid = de.dataelementid
WHERE p.startdate >= '2011-01-01'
and p.enddate <= '2011-12-31'
and de.name ~*('Attendance OPD')
GROUP BY p.startdate, de.name;")

We have stored the result of the SQL query in an R data frame called "OPD". Let's take a look at what the data looks like.

> head(OPD)
startdate de sum
1 2011-12-01 Attendance OPD <12 months female 42557
2 2011-02-01 Attendance OPD <12 months female 127485
3 2011-01-01 Attendance OPD 12-59 months male 200734
4 2011-04-01 Attendance OPD 12-59 months male 222649
5 2011-06-01 Attendance OPD 12-59 months male 168896
6 2011-03-01 Attendance OPD 12-59 months female 268141
> unique(OPD$de)
[1] Attendance OPD <12 months female Attendance OPD 12-59 months male
[3] Attendance OPD 12-59 months female Attendance OPD >5 years male
[5] Attendance OPD <12 months male Attendance OPD >5 years female
6 Levels: Attendance OPD 12-59 months female ... Attendance OPD >5 years male
>

We can see that we need to aggregate the two age groups (< 12 months and 12-59 months) into a single variable,
based on the gender. Let's reshape the data into a crosstabulated table to make this easier to visualize and calculate
the summaries.

> library(reshape) #provides the cast() function
> OPD.ct<-cast(OPD,startdate ~ de)
> colnames(OPD.ct)
[1] "startdate" "Attendance OPD 12-59 months female"
[3] "Attendance OPD 12-59 months male" "Attendance OPD <12 months female"
[5] "Attendance OPD <12 months male" "Attendance OPD >5 years female"
[7] "Attendance OPD >5 years male"

We have reshaped the data so that the data elements are individual columns. It looks like we need to aggregate the
second and fourth columns together to get the under 5 female attendance, and then the third and fifth columns to get
the male under 5 attendance. After this, let's subset the data into a new data frame just to get the required information
and display the results.


> OPD.ct$OPDUnder5Female<-OPD.ct[,2]+OPD.ct[,4]#Females
> OPD.ct$OPDUnder5Male<-OPD.ct[,3]+OPD.ct[,5]#males
> OPD.ct.summary<-OPD.ct[,c(1,8,9)]#new summary data frame
>OPD.ct.summary$FemalePercent<-
OPD.ct.summary$OPDUnder5Female/
(OPD.ct.summary$OPDUnder5Female + OPD.ct.summary$OPDUnder5Male)*100#Females
>OPD.ct.summary$MalePercent<-
OPD.ct.summary$OPDUnder5Male/
(OPD.ct.summary$OPDUnder5Female + OPD.ct.summary$OPDUnder5Male)*100#Males

Of course, this could be accomplished much more elegantly, but for the purpose of the illustration, this code is rather
verbose. Finally, let's display the required information.

> OPD.ct.summary[,c(1,4,5)]
startdate FemalePercent MalePercent
1 2011-01-01 51.13360 48.86640
2 2011-02-01 51.49154 48.50846
3 2011-03-01 51.55651 48.44349
4 2011-04-01 51.19867 48.80133
5 2011-05-01 51.29902 48.70098
6 2011-06-01 51.66519 48.33481
7 2011-07-01 51.68762 48.31238
8 2011-08-01 51.49467 48.50533
9 2011-09-01 51.20394 48.79606
10 2011-10-01 51.34465 48.65535
11 2011-11-01 51.42526 48.57474
12 2011-12-01 50.68933 49.31067

We can see that the male and female attendances are very similar for each month of the year, with seemingly higher
male attendance relative to female attendance in the month of December.

In this example, we showed how to retrieve data from the DHIS2 database and manipulate it with some simple R
commands. The basic pattern for using DHIS2 and R together will be the retrieval of data from the DHIS2 database
with an SQL query into an R data frame, followed by whatever routines (statistical analysis, plotting, etc.) may
be required.

A.3. Using R with MyDatamart


MyDatamart provides a useful interface to the DHIS2 database by making a local copy of the database available on a
user's desktop. This means that the user does not need direct access to the database and the data can be worked with
offline on the user's local machine. In this example, we have used the demo database. Data was downloaded at the
district level for Jan 2011-Dec 2011. Consult the MyDatamart section in this manual for more detailed information.

First, lets load some required R packages. If you do not have these packages already installed in your version of R,
you will need to do so before proceeding with the example.

library("DBI")
library("RSQLite")
library("lattice")
library("latticeExtra")

Next, we are going to connect to the local copy of the MyDatamart database. In this case, it was located at C:\dhis2\sl.dmart.

dbPath<-"C:\\dhis2\\sl.dmart"
drv<-dbDriver("SQLite")
db<-dbConnect(drv,dbPath)

Let's suppose we have been asked to compare ANC 1, 2, 3 coverage rates for each district for 2011. We can define an
SQL query to retrieve data from the MyDatamart database into an R data frame as follows.


#An SQL query which will retrieve all indicators
#at OU2 level
sql<-"SELECT * FROM pivotsource_indicator_ou2_m
WHERE year = '2011'"
#Execute the query into a new result set
rs<-dbSendQuery(db,sql)
#Put the entire result set into a new data frame
Inds<-fetch(rs,n=-1)
#Clean up a bit
dbClearResult(rs)
dbDisconnect(db)

We used one of the pre-existing Pivot Source queries in the database to get all of the indicator values. Of course, we
could have retrieved only the ANC indicators, but we did not know exactly how the data was structured, or how the
columns were named, so let's take a closer look.

#Get the names of the columns
colnames(Inds)
#output not shown for brevity
levels(as.factor(Inds$indshort))

We see from the colnames command that there is a column called "indshort" which looks like it contains some
indicator names. We can see the names using the second command. After we have determined which ones we need
(ANC 1, 2, and 3), let's further subset the data so that we only have these.

#Subset the data for ANC
ANC<-Inds[grep("ANC (1|2|3) Coverage",as.factor(Inds$indshort)),]

We just used R's grep function to retrieve all the rows and columns of the Inds data frame which matched the regular
expression "ANC (1|2|3) Coverage" and put this into a new data frame called "ANC".

By looking at the data with the str(ANC) command, we will notice that the time periods are not ordered correctly, so
let's fix this before we try to create a plot of the data.

#Let's reorder the months
MonthOrder<-c('Jan','Feb','Mar','Apr',
'May','Jun','Jul','Aug','Sep','Oct','Nov','Dec')
ANC$month<-factor(ANC$month,levels=MonthOrder)

Next, we need to actually calculate the indicator value from the numerator, factor and denominator.

#Calculate the indicator value
ANC$value<-ANC$numxfactor/ANC$denominatorvalue

Finally, let's create a simple trellis plot which compares ANC 1, 2, 3 for each district by month and save it to our local
working directory in a file called "District_ANC.png".

png(filename="District_ANC.png",width=1024,height=768)
plot.new()
xyplot(value ~ month | ou2, data=ANC, type="a", main="District ANC Comparison Sierra
Leone 2011",
groups=indshort,xlab="Month",ylab="ANC Coverage",
scales = list(x = list(rot=90)),
key = simpleKey(levels(factor(ANC$indshort)),
points=FALSE,lines=TRUE,corner=c(1,1)))
mtext(date(), side=1, line=3, outer=F, adj=0, cex=0.7)
dev.off()

The results of which are displayed below.


A.4. Mapping with R and PostgreSQL


A somewhat more extended example will use the RPostgreSQL library and several other libraries to produce a map
from the coordinates stored in the database. We will define a few helper functions to provide a layer of abstraction,
which will make the R code more reusable.

#load some dependent libraries
library(maps)
library(maptools)
library(RColorBrewer)
library(classInt)
library(RPostgreSQL)

#Define some helper functions
#Returns a dataframe from the connection for a valid statement
dfFromSQL<-function (con,sql){
rs<-dbSendQuery(con,sql)
result<-fetch(rs,n=-1)
return(result)
}
#Returns a list of latitudes and longitudes from the orgunit table
dhisGetFacilityCoordinates<- function(con,levelLimit=4) {
sqlCoords<-paste("SELECT ou.organisationunitid, ou.name,
substring(ou.coordinates from E'(?=,?)-[0-9]+\\.[0-9]+')::double precision as latitude,
substring(ou.coordinates from E'[0-9\\.]+')::double precision as longitude
FROM organisationunit ou where ou.organisationunitid
in (SELECT DISTINCT idlevel",levelLimit, " from _orgunitstructure)
and ou.featuretype = 'Point'
;",sep="")
result<-dfFromSQL(con,sqlCoords)
return(result)
}


#Gets a dataframe of IndicatorValues,
# provided the name of the indicator,
# startdate, periodtype and level
dhisGetAggregatedIndicatorValues<-function(con,
indicatorName,
startdate,
periodtype="Yearly",
level=4)
{
sql<-paste("SELECT organisationunitid,dv.value FROM aggregatedindicatorvalue dv
where dv.indicatorid =
(SELECT indicatorid from indicator where name = \'",indicatorName,"\') and dv.level
=", level,"and
dv.periodid =
(SELECT periodid from period where
startdate = \'",startdate,"\'
and periodtypeid =
(SELECT periodtypeid from periodtype
where name = \'",periodtype,"\'));",sep="")
result<-dfFromSQL(con,sql)
return(result)
}

#Main function which handles the plotting.
#con is the database connection
#IndicatorName is the name of the Indicator
#StartDate is the startdate
#baselayer is the baselayer
plotIndicator<-function(con,
IndicatorName,
StartDate,
periodtype="Yearly",
level=4,baselayer)
{
#First, get the desired indicator data
myDF<-dhisGetAggregatedIndicatorValues(con,
IndicatorName,StartDate,periodtype,level)
#Next, get the coordinates
coords<-dhisGetFacilityCoordinates(con,level)
#Merge the indicators with the coordinates data frame
myDF<-merge(myDF,coords)
#We need to cast the new data frame to a spatial data
#frame in order to utilize plot
myDF<-SpatialPointsDataFrame(myDF[,
c("longitude","latitude")],myDF)
#Define some color scales
IndColors<-c("firebrick4","firebrick1","gold"
,"darkolivegreen1","darkgreen")
#Define the class breaks. In this case, we are going
#to use 6 quantiles
class<-classIntervals(myDF$value,n=6,style="quantile"
,pal=IndColors)
#Define a vector for the color codes to be used for the
#coloring of points by class
colCode<-findColours(class,IndColors)
#Go ahead and make the plot
myPlot<-plot.new()
#First, plot the base layer
plot(baselayer)
#Next, add the points data frame
points(myDF,col=colCode,pch=19)
#Add the indicator name to the title of the map
title(main=IndicatorName,sub=StartDate)


#Finally, return the plot from the function
return(myPlot)
}

Up until this point, we have defined a few functions to help us make a map. We need to get the coordinates stored in
the database and merge these with the indicator which we plan to map. We then retrieve the data from the aggregated
indicator table, create a special type of data frame (SpatialPointsDataFrame), apply some styling to this, and then create
the plot.

#Now we define the actual thing to do
#Let's get a connection to the database
con <- dbConnect(PostgreSQL(), user= "dhis", password="SomethingSecure",
dbname="dhis")
#Define the name of the indicator to plot
MyIndicatorName<-"Total OPD Attendance"
MyPeriodType<-"Yearly"
#This should match the level where coordinates are stored
MyLevel<-4
#Given the startdate and period type, it is enough
#to determine the period
MyStartDate<-"2010-01-01"
#Get some Zambia district data from GADM
#This is going to be used as the background layer
#Use a separate connection object so the database connection is not overwritten
gadmCon <- url("http://www.filefactory.com/file/c2a3898/n/ZMB_adm2_RData")
print(load(gadmCon))#saved as gadm object
#Make the map
plotIndicator(con,MyIndicatorName,MyStartDate,MyPeriodType,MyLevel,gadm)

The results of the plotIndicator function are shown below.


In this example, we showed how to use the RPostgreSQL library and other helper libraries (maptools, classInt)
to create a simple map from the DHIS2 data mart.

A.5. Using R, DHIS2 and the Google Visualization API


Google's Visualization API provides a very rich set of tools for the visualization of multi-dimensional data. In this
example, we will show how to create a simple motion chart with the Google Visualization API using the
"googleVis" R package. Full information on the package can be found here. The basic principle, as with the other
examples, is to get some data from the DHIS2 database, bring it into R, perform some minor alterations on the
data to make it easier to work with, and then create the chart. In this case, we will compare ANC 1, 2, 3 data over time
and see how they are related with a motion chart.

#Load some libraries
library(RPostgreSQL)
library(googleVis)
library(reshape)
#A small helper function to get a data frame from some SQL
dfFromSQL<-function (con,sql){

rs<-dbSendQuery(con,sql)
result<-fetch(rs,n=-1)
return(result)
}

#Get a database connection
user<-"postgres"
password<-"postgres"
host<-"127.0.0.1"
port<-"5432"
dbname<-"dhis2_demo"
con <- dbConnect(PostgreSQL(), user= user,
password=password,host=host, port=port,dbname=dbname)
#Let's retrieve some ANC data from the demo database
sql<-"SELECT ou.shortname as province,
i.shortname as indicator,
extract(year from p.startdate) as year,
a.value
FROM aggregatedindicatorvalue a
INNER JOIN organisationunit ou on a.organisationunitid = ou.organisationunitid
INNER JOIN indicator i on a.indicatorid = i.indicatorid
INNER JOIN period p on a.periodid = p.periodid
WHERE a.indicatorid IN
(SELECT DISTINCT indicatorid from indicator where shortname ~*('ANC [123] Coverage'))
AND a.organisationunitid IN
(SELECT DISTINCT idlevel2 from _orgunitstructure where idlevel2 is not null)
AND a.periodtypeid = (SELECT DISTINCT periodtypeid from periodtype where name =
'Yearly')"
#Store this in a data frame
anc<-dfFromSQL(con,sql)
#Change some columns to factors so that the reshape will work more easily
anc$province<-as.factor(anc$province)
anc$indicator<-as.factor(anc$indicator)
#We need the time variable as numeric
anc$year<-as.numeric(as.character(anc$year))
#Need to cast the table into a slightly different format
anc<-cast(anc,province + year ~ indicator)
#Now, create the motion chart and plot it
M<-gvisMotionChart(anc,idvar="province",timevar="year")
plot(M)

The resulting graph is displayed below.


Using packages like brew or rApache, these types of graphs could be easily integrated into external web sites. A fully
functional version of the chart shown above can be accessed here.

A.6. Using PL/R with DHIS2


The procedural language for R is an extension to the core of PostgreSQL which allows data to be passed from the
database to R, where calculations in R can be performed. The data can then be passed back to the database for further
processing. In this example, we will create a function to calculate some summary statistics which do not exist by
default in SQL, using R. We will then create an SQL View in DHIS2 to display the results. The advantage of utilizing
R in this context is that we do not need to write any significant amount of code to return these summary statistics, but
can simply utilize the built-in functions of R to do the work for us.

First, you will need to install PL/R, which is described in detail here. Following the example from the PL/R site, we
will create some custom aggregate functions as detailed here. We will create two functions to return the median and
the skewness of a range of values.

CREATE OR REPLACE FUNCTION r_median(_float8) returns float as '
median(arg1)
' language 'plr';

CREATE AGGREGATE median (
sfunc = plr_array_accum,
basetype = float8,
stype = _float8,
finalfunc = r_median
);

CREATE OR REPLACE FUNCTION r_skewness(_float8) returns float as '
require(e1071)
skewness(arg1)
' language 'plr';

CREATE AGGREGATE skewness (
sfunc = plr_array_accum,
basetype = float8,
stype = _float8,
finalfunc = r_skewness
);

Next, we will define an SQL query which will use the two new aggregate functions (median and
skewness), calculated using R. In this case, we will just get a single indicator from the data mart at the
district level and calculate the summary values based on the name of the district which the values belong to. This query
is very specific, but could be easily adapted to your own database.

SELECT ou.shortname,avg(dv.value),
median(dv.value),skewness(dv.value) FROM aggregatedindicatorvalue dv
INNER JOIN period p on p.periodid = dv.periodid
INNER JOIN organisationunit ou on
dv.organisationunitid = ou.organisationunitid
WHERE dv.indicatorid = 112670
AND dv.level = 3
AND dv.periodtypeid = 3
AND p.startdate >='2009-01-01'
GROUP BY ou.shortname;

We can then save this query in the form of an SQL View in DHIS2. A clipped version of the results is shown below.

In this simple example, we have shown how to use PL/R with the DHIS2 database and web interface to display some
summary statistics using R to perform the calculations.

A.7. Using the DHIS2 Web API with R


DHIS2 has a powerful Web API which can be used to integrate applications together. In this section, we will illustrate
a few trivial examples of the use of the Web API, and how we can retrieve data and metadata for use in R. The Web
API uses basic HTTP authentication (as described in the Web API section of this document). Using two R packages
"RCurl" and "XML", we will be able to work with the output of the API in R. In the first example, we will get some
metadata from the database.

#We are going to need these two libraries
require(RCurl)
require(XML)
#This is a URL endpoint for a report table which we can
#get from the Web API.
url<-"https://apps.dhis2.org/dev/api/reportTables/KJFbpIymTAo/data.csv"
#Let's get the response; we do not need the headers


#This site has some issues with its SSL certificate
#so let's not verify it.
response<-getURL(url,userpwd="admin:district"
,httpauth = 1L, header=FALSE,ssl.verifypeer = FALSE)
#Unquote the data
data<-noquote(response)
#here is the data.
mydata<-read.table(textConnection(data),sep=",",header=T)
head(mydata)

Here, we have shown how to get some aggregate data from the DHIS2 demo database using the DHIS2's Web API.

In the next code example, we will retrieve some metadata, namely a list of data elements and their unique identifiers.

#Get the list of data elements. Turn off paging and links
#This site has some issues with its SSL certificate
#so let's not verify it.
url<-"https://apps.dhis2.org/dev/api/dataElements.xml?
paging=false&links=false"
response<-getURL(url,userpwd="admin:district",
httpauth = 1L, header=FALSE,ssl.verifypeer = FALSE)
#We need to parse the result
bri<-xmlParse(response)
#And get the root
r<-xmlRoot(bri)
#Parse out what we need explicitly, in this case from the first node
#Just get the names and ids as separate arrays
de_names<-xmlSApply(r[['dataElements']],xmlGetAttr,"name")
de_id<-xmlSApply(r[['dataElements']],xmlGetAttr,"id")
#Lets bind them together
#but we need to be careful for missing attribute values
foo<-cbind(de_names,de_id)
#Recast this as a data frame
data_elements<-as.data.frame(foo,
stringsAsFactors=FALSE,row.names=1:nrow(foo))
head(data_elements)

Note that the values which we are interested in are stored as XML attributes and were parsed into two separate matrices
and then combined together into a single data frame.
