Performance Tuning Cognos

This document provides tips for improving the performance of Cognos reports by optimizing queries, calculations, filtering, indexing, and other aspects of report design and configuration. Some key recommendations include running queries against the database instead of locally whenever possible, moving calculations to the data model, using filters to minimize retrieved data, and leveraging database functions over Cognos functions. Proper use of tables and conditional formatting is also advised to control report output and sizing.

Uploaded by kajapanisrikanth
Copyright: © Attribution Non-Commercial (BY-NC)

Performance Tuning – Cognos Reports

1. For your queries, try to set the Processing property to Database Only. It runs faster
than Limited Local.

2. Try to use fixed column widths where possible. We have found that dynamic sizing
sometimes runs queries twice: once to find the largest data content, then again to size the
columns based on that content.

3. Try to move as many calculations as possible into the model so your query doesn't have
to do them for every row of data pulled.

4. Try to use as many filters as possible up front to minimize the data you must pull in
and process.

5. Simple CASE statements run faster than searched CASE statements.
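As a sketch of the difference (not from the original document), here are the two CASE forms side by side, run against SQLite through Python's sqlite3 module; the orders table and its status codes are illustrative assumptions, with SQLite standing in for whatever database Cognos is querying:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (status TEXT)")
con.executemany("INSERT INTO orders VALUES (?)", [("N",), ("S",), ("X",)])

# Simple CASE: one expression compared against a list of literal values.
simple = con.execute("""
    SELECT CASE status WHEN 'N' THEN 'New'
                       WHEN 'S' THEN 'Shipped'
                       ELSE 'Other' END
    FROM orders ORDER BY status
""").fetchall()

# Searched CASE: each branch is an arbitrary boolean condition.
searched = con.execute("""
    SELECT CASE WHEN status = 'N' THEN 'New'
                WHEN status = 'S' THEN 'Shipped'
                ELSE 'Other' END
    FROM orders ORDER BY status
""").fetchall()

assert simple == searched  # same rows either way
```

Both forms return the same rows; the simple form evaluates the shared test expression once per row rather than one condition per branch, which is why it tends to be cheaper.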

6. Place calculations strategically. Rather than repeating a calculation inside a CASE
statement in multiple places, compute it once in a separate field and then refer to that field
wherever it is needed. This minimizes processing time.

7. Create logical and useful index fields and logical table structures to make searching and
pulling the data as efficient as possible.

8. When sorting a field, sort it either at the tabular model level OR at the query level, but
NOT both.

9. Where possible, use database functions for calculations instead of their equivalent
Cognos functions.

10. When using facts directly in the report, do not create shortcuts; reference the facts
directly under the Query.

11. Check the usage of numeric query items in the model. If they will not be summarized
in reports, set their Usage property to Attribute. (By default, all numeric data items have a
Usage of Fact, so they are sorted and summarized in reports.) Summarizing facts
unnecessarily takes time.

Under Cognos Configuration there are parameters you can set for a medium configuration
of your ReportNet services, for example:
Default Font: Arial

Further, performance tuning of ReportNet settings varies from environment to environment;
that is, RAM and processor, database-side indexes and clusters, and also the OS and Java
versions, and so on.
1. Local Processing. In all queries, turn local processing off whenever possible - i.e. set up for
Database Processing only. This forces work to be done in the database and, if we've defined
reporting tables properly, will always be a viable option.

2. Auto Summarisation. In all queries which put one record out in the report per
database record, always turn the Auto Summarise option for the query to No (the default is Yes).
This can make a huge difference to query performance and complexity when picking up lots of
fields, as otherwise the SQL consists of a mass of SUM() and MAX() and then a GROUP BY over
all dimensions/attributes - unnecessarily so. Again, in the majority of cases we'll be using one
record per output row, so this is generally going to work well.
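To illustrate the shape of the SQL involved (a sketch, with SQLite via Python's sqlite3 standing in for the reporting database, and a made-up sales table), compare the auto-summarised form, which aggregates every fact and groups by every key, with the plain projection you get when Auto Summarise is off:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE sales (order_id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?,?,?)",
                [(1, "East", 10.0), (2, "East", 20.0), (3, "West", 5.0)])

# Shape of the SQL with auto-summarise left on: aggregate every fact and
# GROUP BY, even though order_id already identifies each row uniquely.
summarised = con.execute("""
    SELECT order_id, MAX(region), SUM(amount)
    FROM sales GROUP BY order_id ORDER BY order_id
""").fetchall()

# With auto-summarise set to No: a plain projection, one row per record.
plain = con.execute(
    "SELECT order_id, region, amount FROM sales ORDER BY order_id"
).fetchall()

assert summarised == plain  # identical rows; the second query skips the aggregation work
```

When the report genuinely outputs one row per database record, the aggregate wrapping buys nothing and only adds sort/group work.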

3. Setting Sizes - especially heights. Don't try to control sizes when you don't need to -
particularly heights. It can be good practice to control the width of various columns sometimes
to create a level of symmetry, and in particular to control widths when headings have multiple
words, or in tables that need to mutually line up - but in general web based apps should be
allowed to do what they need to do to fit the output window (or paper) as best they can.
Controlling heights is particularly messy and should rarely be necessary in multirow reports.
There WILL always be a better way using the powerful alignment, padding and margin settings.

4. Using our traffic light images. Be careful around images - our standard traffic light and
arrow images are the right size to line up with our report text standards, BUT you should make
sure that they are padded 0px above and 0px below, and that the text items on the row are set to
middle vertical alignment. This allows the image to dictate the height of the row and
everything to line up properly across a row. You should also ensure that the default padding for
the whole row is set to either 1px above and 2px below or vice versa - then if the image is not
displayed, the height is still OK.

5. Controlling when objects appear or are hidden. Quite often it is necessary to control
when an object appears and when it doesn't. If this is a simple yes or no, always use the
rendering control rather than conditional blocks/formatting - much simpler and less hassle
within Report Studio. If the display should be on 'no', create a second variable. Conditional
blocks are generally more relevant when trying to remove a column altogether, so that when not
displayed there is no width (typically drill-through hyperlinks).

6. Report Expressions vs Query Expressions - report expressions are quite limited in terms
of functionality and construction - generally better if constructing something to display to do it
in the query and use the query variable (in our older Branch InSight applications all images
were typically defined as report expressions - easier now to put the full value into an item in
the query subject in Framework Manager).

7. Tables are your friend. You really need to be comfortable with tables within tables within
tables as a way of laying out reports which align multiple elements with the developer really in
control. They can go anywhere - in list columns, in headings, on the background (and list
controls should generally always sit within a table cell so you can add items above and below
the list). There is no overhead with tables and they are a great way of keeping control of output
if used well. Additionally, tables can be used in single cut and paste operations to replicate
their content - table cells etc can't. And remember, when working within table cells, always set
the vertical and horizontal alignment of the cell to control what happens if the objects within the
cell don't fill it - again, take control of the output. You can set cell attributes for a wide range of
cells in a table by control-clicking on opposite corners (a great step forward from ReportNet!).

8. Tables can be your enemy. However, if you lose control of tables things can get into a real
mess! In particular, remember that the default width for tables is 100% of their parent or
container object. This is generally desirable 'within' other tables and list columns and the like -
however at the outer levels it is almost certainly something to delete and leave the width setting
to blank. The table (and/or list) will then size correctly to be as wide or narrow as it needs to be
rather than stretching unnecessarily.

9. Set formatting at the highest level. If you do have to override the default formats in a
report and in list controls etc, do so at the highest possible level (eg table cell rather than every
text item in the cell, and via the List Columns Body Style and List Columns Title Style options
rather than for each individual column or heading) - also includes padding, alignment, border
settings as well as colour, font etc. This simplifies your life as a developer, means that if
new objects are added or columns included they inherit the correct style, and keeps the size
of the HTML passed to browsers down.

10. Keep it simple! Two basic elements here - on the data/query side put the complexity into
the actual data if the complexity really is essential, or by second choice into the model, but
keep it out of the reports; and on the report pages - avoid the use of Javascript or other complex
solutions as they will be difficult to maintain and will almost certainly be a problem during
Cognos upgrades - exception being the centrally provided and supported elements such as the
'Select Branch' prompt. The data/query elements can also be a problem at upgrade time if we
complicate them - they certainly were from ReportNet to Cognos 8 around sorting and the like.

I'm sure there are lots more people could add - this is almost like a current 'top ten' which we
could maintain somewhere, along with a longer and growing list of everything we come up with
as we hit things. Apologies where some of these seem like stating the obvious, but sometimes
obvious is good!

Number 11 is always know what your queries are doing - if the generated SQL is complicated
then there is almost certainly something wrong with the model and/or the query!
1. Table joins should be written first, before any condition in the WHERE clause,
because the condition that filters out the most records should be placed at the end.

2. Table name sequence

Oracle always processes table names from right to left and uses an internal sort/merge
procedure to join those tables. First it scans and sorts the first table (the one specified
last in the FROM clause). Next it scans the second table and merges all of the rows
retrieved from it with those from the first table.

3. A DECODE statement provides a way to avoid scanning the same rows repetitively, or
joining the same table repetitively.
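DECODE is Oracle-specific; its portable equivalent is a CASE expression. The following sketch (Python plus SQLite, with a hypothetical emp table) shows the idea behind the tip: conditional aggregation counts several groups in a single scan, instead of issuing one filtered COUNT query, and so one table scan, per group:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (dept TEXT)")
con.executemany("INSERT INTO emp VALUES (?)",
                [("SALES",), ("SALES",), ("HR",)])

# One pass over the table: each CASE branch (DECODE in Oracle) turns a
# matching row into a 1, and SUM tallies the branch, so both department
# counts come back from a single scan.
sales, hr = con.execute("""
    SELECT SUM(CASE dept WHEN 'SALES' THEN 1 ELSE 0 END),
           SUM(CASE dept WHEN 'HR'    THEN 1 ELSE 0 END)
    FROM emp
""").fetchone()

assert (sales, hr) == (2, 1)
```

In Oracle the inner expression would read `SUM(DECODE(dept, 'SALES', 1, 0))`; the execution benefit, one scan instead of many, is the same.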

Cognos

1. Reuse the same data source connection to avoid local processing; then the native SQL
contains only one SELECT statement.

2. Metadata caching
Framework Manager stores the metadata that is imported from the data source. However,
depending on governor settings and certain actions you take in the model, this metadata might
not be used when preparing a query. If you enable the Allow enhanced model portability at run
time governor, Framework Manager always queries the data source for information about the
metadata before preparing a query. If you have not enabled this governor, in most cases
Framework Manager accesses the metadata stored in the model instead of querying the
data source.

The main exceptions are:


1. The SQL in a data source query subject has been modified. This includes the use of macros.

2. A calculation or filter has been added to a data source query subject.

-> Stores information on data items and query subjects in the model; in most cases,
information captured by the metadata import is cached with the FM model and reused at run
time.
 Allows for faster preparation and validation of query SQL.
 Reduces metadata callbacks when running reports.
 Enabling the "Enhanced Model Portability" option forces metadata callbacks.

Database layer: import all tables as-is.
What Is Minimized SQL?
When you use minimized SQL, the generated SQL contains only the minimal set of
tables and joins needed to obtain values for the selected query items. If you are modeling a
normalized data source, you may be more concerned about minimized SQL because it will
reduce the number of tables used in some requests and perform better.

In this case, it would be best to create relationships and determinants between the data source
query subjects and then create model query subjects that do not have relationships.

What Is the Coalesce Statement?


A coalesce statement is simply an efficient means of dealing with query items from conformed
dimensions. It is used to accept the first non-null value returned from either query subject. This
statement allows a full list of keys with no repetitions when doing a full outer join.

Why Is There a Full Outer Join?


A full outer join is necessary to ensure that all the data from each fact table is retrieved. An
inner join gives results only if an item in inventory was sold. A right outer join gives all the
sales where the items were in inventory. A left outer join gives all the items in inventory that
had sales. A full outer join is the only way to learn what was in inventory and what was sold.
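A minimal sketch of both ideas together, using Python's sqlite3 with made-up inventory and sales tables (and, since older SQLite builds lack FULL OUTER JOIN, emulating it with a left join plus the unmatched right-side rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE inventory (sku TEXT, on_hand INTEGER);
    CREATE TABLE sales     (sku TEXT, sold    INTEGER);
    INSERT INTO inventory VALUES ('A', 5), ('B', 3);   -- B was never sold
    INSERT INTO sales     VALUES ('A', 2), ('C', 7);   -- C was never stocked
""")

# Emulated full outer join: the left join keeps every inventory row, the
# UNION ALL adds sales rows with no inventory match. COALESCE takes the
# first non-null key from either side, yielding one sku per row with no
# repetitions.
rows = con.execute("""
    SELECT COALESCE(i.sku, s.sku) AS sku, i.on_hand, s.sold
    FROM inventory i LEFT JOIN sales s ON i.sku = s.sku
    UNION ALL
    SELECT s.sku, NULL, s.sold
    FROM sales s WHERE s.sku NOT IN (SELECT sku FROM inventory)
    ORDER BY sku
""").fetchall()

assert rows == [("A", 5, 2), ("B", 3, None), ("C", None, 7)]
```

The result shows all three cases at once: sold and stocked (A), stocked but never sold (B), and sold but never stocked (C); an inner, left, or right join would each lose one of them.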

Determinants are designed to provide control over granularity in a similar, but not
identical, way to dimension information in Cognos ReportNet. A determinant can define the
set of database columns (query items) that uniquely identify a set of data, or it can identify a set
of columns that identify a non-unique set within the data.

Determinants are most closely related to the concept of keys and indexes in the data source and
are imported based on key and index information in the data source. We recommend that you
always review the determinants that are imported. There is no concept of hierarchy in
determinants. The order in which they are specified governs
the order in which they are evaluated.

Use determinants in the following cases:


• Joins exist at multiple levels of granularity for a single query subject. An example is the Time
dimension in the Go Data Warehouse sample model. There are joins to the Time dimension on
the day key and on the month key. Determinants are used for the Time dimension when you
want to prevent double-counting for multiple-fact queries. For example, some facts join to time
on month and some facts join to time on day. Specify determinants for time to clearly capture
the functional dependency between month and day as a minimum to prevent double-counting
for those facts that join at the month key.
• BLOB data types exist in the query subject. Querying BLOBs requires additional key or index
type information. If this information is not present in the data source, you can add it using
determinants.
• Override the determinants imported from the data source that conflict with relationships
created for reporting. For example, there are determinants on two query subjects for multiple
columns, but the relationship between the query subjects uses only a subset of these columns.
Modify the determinant information of the query subject if it is not appropriate to use the
additional columns in the relationship.
• A join is specified that uses fewer keys than a unique determinant that is specified for a query
subject. If your join is built on fewer columns than what is stored in Framework Manager
within the determinants, there will be a conflict. Resolve this conflict by modifying the
relationship to fully agree with the determinant or by modifying the determinant to support the
relationship. ------------

Determinants – The Answer to a Framework Manager Mystery
By Ralph Baker | Published: February 1, 2010

Determinants can play a crucial role in the overall performance and consistency of your
Framework Manager model but remain one of the most confusing aspects of the
application to most developers. This article will attempt to end the confusion.

Determinants are used so that a table of one grain (level of detail) behaves as if it were
actually stored at another grain. They are primarily used for dimension tables
where fact tables join to dimension tables at more than one level in the dimension. (There
are other cases where you could use them, but they are less common and fairly specific
situations.)

The Situation

Let’s use the example of a date dimension table with day-level grain. If all the fact tables
join at the day level, the most detailed level, then you do not need determinants. But as
many of us know from experience, this is not always the case. Fact tables are often
aggregated or stored at different levels of granularity for a number of reasons.

The Problem

The trouble arises when you wish to join to the dimension table at a level that is not the
lowest level. Consider a monthly forecast fact table which is at the month level of detail (1
row per month). A join to the month_id (e.g. 2009-12) would return 28 to 31 records
(depending on the month) from the date dimension, and throw off the calculations.
Determinants solve this problem.

The SQL
Often when modeling, it’s useful to think about the SQL code you would like to generate.
Without determinants, the incorrect SQL code would look something like this.

SELECT
F.FORECAST_VALUE,
D.MONTH_ID,
D.MONTH_NAME
FROM SALES_FORECAST F INNER JOIN DATE_DIM D ON
F.MONTH_ID = D.MONTH_ID

This code will retrieve up to 31 records for each of the sales forecast records. Applying
mathematical functions, for example SUM and COUNT, would produce an incorrect result.
What you would like to generate is something along the following lines, which creates a
single row per month AND THEN joins it to the fact table.

SELECT
F.FORECAST_VALUE,
D1.MONTH_ID,
D1.MONTH_NAME
FROM SALES_FORECAST F INNER JOIN
( SELECT DISTINCT
D.MONTH_ID,
D.MONTH_NAME
FROM DATE_DIM D ) AS D1
ON F.MONTH_ID = D1.MONTH_ID

As shown above, the trick is to understand which columns in the dimension table are
related to the month_id, and therefore are unique along with the key value. This is exactly
what determinants do for you.
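Here is a runnable sketch of the fan-out problem and the fix, using Python's sqlite3 with cut-down versions of the article's tables (column names follow the article; SQLite stands in for the warehouse):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE date_dim (day_id TEXT, month_id TEXT, month_name TEXT);
    INSERT INTO date_dim VALUES
        ('2009-12-01', '2009-12', 'December'),
        ('2009-12-02', '2009-12', 'December'),
        ('2009-12-03', '2009-12', 'December');
    CREATE TABLE sales_forecast (month_id TEXT, forecast_value REAL);
    INSERT INTO sales_forecast VALUES ('2009-12', 100.0);
""")

# Without determinants: the single monthly fact row fans out to one row
# per day in the dimension, so SUM triples the forecast.
wrong = con.execute("""
    SELECT SUM(f.forecast_value)
    FROM sales_forecast f JOIN date_dim d ON f.month_id = d.month_id
""").fetchone()[0]

# With a month-level determinant: the dimension is collapsed to one row
# per month in a derived table before the join, so the total is correct.
right = con.execute("""
    SELECT SUM(f.forecast_value)
    FROM sales_forecast f JOIN
         (SELECT DISTINCT month_id, month_name FROM date_dim) d1
         ON f.month_id = d1.month_id
""").fetchone()[0]

assert (wrong, right) == (300.0, 100.0)
```

The three-day dimension here stands in for a 28-to-31-day month; the inflation factor just grows with the number of day rows per month.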

Unraveling the Mystery in Framework Manager

Following Cognos best practices, determinants should be specified at the layer in the
model in which the joins are specified.

Here we see a date dimension with 4 levels: Year, Quarter, Month and Day.
This means we can have up to 4 determinants defined in the query subject, depending on
the granularity of the fact tables present in your model. The first three levels - Year,
Quarter and Month - should be set to "group by", as they do not define a unique row within the
table, and Framework Manager needs to be made aware that the values will need to be
grouped to this level. In other words, the SQL needs to GROUP BY a column or columns
in order to uniquely identify a row for that level of detail (such as Month or Year). The Day
level (often called the leaf level) should be set to "uniquely identified", as it does uniquely
identify any row within the dimension table. While there can be several levels of "group
by" determinants, there is typically only one uniquely identified determinant, defined by
the unique key of the table. The "uniquely identified" determinant by definition contains all
the non-key columns as attributes, and is set automatically at table import time if it can be
determined.

The Key section identifies the column or columns which uniquely identify a level. Ideally,
this is one column, but in some cases may actually need to include more than one
column. For example, if your Year and Month values (1-12) are in separate columns. In
short, the key is whatever columns are necessary to uniquely identify that level.

Using our aforementioned table, the setup would look like this (screenshot not reproduced
in this copy).

The Attributes section identifies all the other columns which are distinct at that level. For
example, at the month_id level (e.g. 2009-12), columns such as month name, month
starting date, and number of days in the month are all distinct at that level. And obviously
items from a lower level, such as date or day-of-week, are not included at that level.

Technically, the order of the determinants does not imply levels in the dimension.
However, columns used in a query are matched from the top down which can be very
important to understanding the SQL that will be generated for your report. If your report
uses Year, Quarter and Month, the query will group by the columns making up the Year-
key, Quarter-key and Month-key. But if the report uses just Year and Month (and not the
Quarter) then the group by will omit the Quarter-key.

How Many Levels Are Needed?

Do we need all 4 levels of determinants? Keep in mind that determinants are used to join
to dimensions at levels higher than the leaf level of the dimension. In this case, we’re
joining at the month level (via month_id). Unless there are additional joins at the year or
quarter level, we do not strictly need to specify those determinants. Remember that year
and quarter are uniquely defined by the month_id as well, and so should be included as
attributes related to the month, as shown.
The Result

Following these simple steps, the following SQL will be generated for your report; the
highlighted section is generated by the determinant settings. Notice how it groups by
MONTH_ID and uses the MIN function to guarantee uniqueness at that level. (No, it
doesn't trust you enough to simply do a SELECT DISTINCT.) The second level of GROUP
BY is the normal report aggregation by report row. So the result is that the join is done
correctly, with each monthly fact record joined to one dimension record at the appropriate
level, to produce the correct values in the report.
Why does the filter defined in Framework Manager generate different SQL than in
Report Studio?

Technote (troubleshooting)

Problem (Abstract)
When a filter is defined on a query subject within Framework Manager,
Report Studio generates multiple SELECT statements; but if the
filter is defined within Report Studio, it creates a single
SELECT with an outer join and WHERE clause, as expected.

Resolving the problem


This is expected behaviour. A filter in a query subject is not
equivalent to a filter in a report; this is by design.

1. Filter Applied in Framework Manager:

A filter embedded in a query subject is always applied before the
query subject is joined to the rest of the query subjects in the model.

Whenever Report Query Processing (RQP) sees a query subject with
an embedded filter, it will treat the query subject as a derived table,
i.e. it will generate a separate SELECT in the Cognos SQL.

2. Filter Applied in Report Studio:

A filter specified in the report is applied to the generated query.

The exact point where RQP applies the filter depends on many
factors, such as: filter type (detail vs. summary), query type (single-
fact vs. multi-fact), the usage property of the filtered query item
(identifier/attribute vs. fact), etc.

The query subject that required a separate SELECT in case #1 may or
may not require a separate SELECT here. The behaviour depends on the
"Generate SQL" property of the query subject specified in the FM
model. If the property is set to "As View", a separate SELECT is
generated for the query subject. If the property is set to "Minimize",
a separate SELECT is avoided where possible.
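The practical consequence is easiest to see with an outer join, where filtering before the join (the derived-table behaviour of an FM query subject filter) and filtering after it (a report WHERE clause) return different rows. A sketch with Python's sqlite3 and hypothetical cust and orders tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE cust   (id INTEGER, region TEXT);
    CREATE TABLE orders (cust_id INTEGER, amount REAL);
    INSERT INTO cust   VALUES (1, 'East'), (2, 'West');
    INSERT INTO orders VALUES (1, 10.0);
""")

# Filter embedded in the query subject (FM): applied in a derived table
# BEFORE the outer join, so customers with no qualifying orders still
# appear, with a NULL amount.
before = con.execute("""
    SELECT c.id, o.amount
    FROM cust c LEFT JOIN
         (SELECT * FROM orders WHERE amount > 5) o
         ON c.id = o.cust_id
    ORDER BY c.id
""").fetchall()

# Filter written in the report: applied in the WHERE clause AFTER the
# join, so the NULL-amount rows fail the comparison and are dropped.
after = con.execute("""
    SELECT c.id, o.amount
    FROM cust c LEFT JOIN orders o ON c.id = o.cust_id
    WHERE o.amount > 5
    ORDER BY c.id
""").fetchall()

assert before == [(1, 10.0), (2, None)]
assert after == [(1, 10.0)]
```

This is why the two filter placements are not interchangeable: with outer joins in play, moving the filter changes the result set, not just the generated SQL.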
-----------------------------------
