Mastering QlikView Data Visualization - Sample Chapter
Karl Pover
Professional Expertise Distilled, Packt Publishing
Karl Pover provides QlikView consulting services throughout Mexico. Since 2006,
he has been dedicated to providing QlikView presales, implementation, and training
for more than 50 customers. He is the author of Learning QlikView Data Visualization,
and he has also been a Qlik Luminary since 2014. You can follow Karl on Twitter
(@karlpover) or on LinkedIn (https://mx.linkedin.com/in/karlpover).
He also blogs at http://poverconsulting.com/.
Preface
This may be a horrible way to start a book, but in all honesty my first real-world
QlikView experience was a failure. I was assigned to do a proof-of-concept with
a prospective client's IT department, and they insisted that I share every mouse
click and keystroke on a large projection screen with them. I had taken a QlikView
designer and developer course and was developing a QlikView template in my spare
time, but this hadn't prepared me for the live development of a real application.
I fumbled around the screen as I developed their first data model and charts. They
must have doubted my competence, and I was embarrassed. However, I was
surprised to hear that they were impressed with how little time it had taken me to
convert raw data to interactive data visualization and analysis. I had created the
required indicators and finished their first application within three days.
The goal of the proof-of-concept was to demonstrate the value that QlikView could
provide to the prospective client's company, and it all seemed to have gone well.
After all, I had created an attractive, functional QlikView application that was
filled with the indicators that the IT department had requested. However, I failed
to demonstrate QlikView's value directly to the business users; in the end, the
prospective client never purchased QlikView.
All was not lost because I ultimately learned that, although it is important to
understand all of QlikView's technical features, we can't demonstrate its value merely
by memorizing the reference manual. If we really want to master QlikView, we have
to go beyond the technical functionality and learn what business value QlikView
enables us to deliver. Moreover, we must bring about a data discovery initiative
that changes a company's culture.
This first experience occurred ten years ago and these first failures have given way
to success. I am lucky to have the opportunity to work as a QlikView consultant
and participate in projects that encompass multiple organizations and various
functional areas. All of their difficult challenges and excellent ideas have helped me
to constantly learn from our mutual successes and failures.
During the last ten years that I've implemented QlikView projects, I've found
that many businesses share many of the same advanced data analysis goals. For
example, most sales departments in every company dream about having an easy
way to visualize and predict customer churn. We will go over these common, but
complicated, business requirements that you can apply to your own company.
As a QlikView master, you have to be just as comfortable discussing the most
appropriate performance indicator with a business user, as you are with scripting
out a data model that calculates it. For this reason, at one end, we will explain the
business reasons for a particular visualization or analysis and, at the other end, we
will explain the data model that is necessary to create it.
We will then develop different types of data visualization and analysis that look to
push the boundaries of what is possible in QlikView. We will not focus on QlikView
syntax or function definitions. Instead, we will see how to apply advanced functions
and set analysis to real business problems. Our focus on the business problem will also
lead us to look beyond QlikView and see what other tools we can integrate with it.
Practice leads to mastery, so I've included sample data models and exercises
throughout this book. If they apply to your business, I recommend that you copy and
paste these exercises over your own data to see what feedback you get from your
business users. This extra step of adjusting the exercise's code to make it work with
a different dataset will confirm your understanding of the concept and cement it in
your memory.
Ultimately, I hope that, by sharing my experience, I will help you succeed where I
first failed. In doing so, when you finally fail, it will be because you are attempting
to do something beyond what I have done. Then, when you finally overcome your
failure and succeed, I can learn from you, the master.
Sales Perspective
The success of all businesses is at some point determined by how well they can sell their
products and/or services. The large amount of time and money that companies spend
on software that facilitates the sales process is testament to its importance. Enterprise
Resource Planning (ERP), Customer Relationship Management (CRM), and Point
of Sale (POS) software not only ease the sales process, but also gather a large amount
of sales-related data. Therefore, it is not uncommon that a company's first QlikView
application is designed to explore and discover sales data.
Before we begin to create data visualization and analysis for our sales perspective,
let's review the data model that supports it. In the process, we will resolve data
quality issues that can either distract users' attention away from a visualization's
data or distort how they interpret it. Next, we'll introduce two common sales
department user stories and build solutions to stratify customers and analyze
customer churn. Finally, let's take our first look at QlikView extensions and
overall application design.
In this chapter, let's review the following topics:
•	Reviewing the data model that supports the sales perspective
•	Resolving data quality issues that distract or distort our visualizations
•	Stratifying customers with Pareto analysis
•	Analyzing customer churn with customer purchase frequency
•	Taking a first look at QlikView extensions and overall application design
Let's get started and review the data model that we will use to create our sales
perspective in QlikView.
Exercise 2.1
With the following steps, let's migrate the sales perspective container from the book's
exercise files to where we've installed QDF on our computers and start to explore the
data together:
1. In the Ch. 2 folder of the book's exercise files, copy the container called
1001.Sales_Perspective to the QDF folder that is located on your
computer. By default, the QDF folder will be C:\Qlik\SourceData.
2. In the QDF folder, open the VariableEditor Shortcut in the
0.Administration container.
3. Click Container Map Editor. If the button hangs, then enable the Open
Databases in Read and Write mode option in the Settings tab of the Edit
Script window and try again.
4. In the container map table, go to the empty line after 99.Shared_Folders,
and under the Container Folder Name column, click the black arrow
indicating that it is an input field.
5. Enter the name of the new container that we just copied,
1001.Sales_Perspective, into the input field.
6. Continue along the row and enter the Variable Prefix as Sales and the
Container Comments as Container for Sales Perspective.
7. Click the Update Map and create Containers button that is located in the
top-left of the container map table, and when prompted, click Update
Container Map.
8. Save the QlikView file.
Now that we've finished migrating the container to our local QDF, let's open
Sales_Perspective_Sandbox.qvw in the 1.Application folder of the
1001.Sales_Perspective container and explore the sales data in more detail.
The data model that we are using is a star schema and it includes a set of events
common to many companies. In the fact table at the center of the model, we store the
following events:
•	Sales invoices
•	Sales budget
The sales budget may not come from our ERP. It may exist in
Excel or in the database of specific planning software.
Sales invoices are the principal event of the data model. We don't use the general
journal entries that the sales invoices often generate in an ERP system because they
do not have the level of detail that a sales invoice does. For example, product details
are often not included in the general journal entry.
However, it is important that the total sales amount from our sales invoices matches
the total sales that we have in our financial reports. For that reason, it is important to
consider any sales cancelation or other sales adjustment. In this data model, sales credit
memos properly adjust our total sales amount to match the financial reports that we
will see in Chapter 3, Financial Perspective.
Finally, we cannot analyze or judge our sales performance without comparing it
with something. Basic sales analysis involves comparing current sales with either
historical or planned sales. Therefore, we should aim to have at least two years of
sales data or the sales budget in our data model. In this data model, we have both
historical sales and planned sales data.
Planned sales can be either a sales budget, a sales forecast,
or both.
All of these events are discrete events. In other words, they only exist at a discrete
point in time. The fact table that stores discrete events is called a transactional fact
table. The date dimension in a transactional fact table holds the date when the
event occurred.
Along with the date dimension, we use the 7Ws (who, what, where, when, how
many, why, and how) in the following table to describe an example set of metrics
and dimensions that we expect to find in a sales perspective data model:
Dimensions

7Ws      Fields                              Comments
Who      Customer
Who      Sales Person
What     Item
Where    Billing Address, Shipping Address
When     Date
Why      Promotion Description
How      _OnlineOrderFlag

Metrics

7Ws        Fields         Comments
How many   Net Sales
How many   Quantity
How many   Gross Profit
For more information on data modeling, read The Data Warehouse Toolkit by Ralph
Kimball and Agile Data Warehouse Design by Lawrence Corr.
The way to give users the ability to select missing item values is to replace incorrect
and null item keys in the fact table with a key to a fictitious item. The key to the
fictitious item is defined as negative one (-1). Our first step to replace incorrect and
null item keys is to create a mapping table using the Items table, where we map all
the existing item keys to their own values:
// Map every existing item key to itself; applymap() will later return the
// default value (-1) for any key that is not found in this mapping table
MappingMissingIncorrectItemsKeys:
Mapping
LOAD _KEY_ItemID,
    _KEY_ItemID
FROM
$(vG.QVDPath)\2.Transform\Items.qvd
(qvd);
The second step is to save the original value stored in _KEY_ItemID in another field
and apply this map to the _KEY_ItemID field when we load the Facts table:
Facts:
LOAD [Document ID],
    // Preserve the original key before it is remapped
    _KEY_ItemID as Original_ItemID,
    // Keys found in the map pass through; missing or incorrect keys become -1
    applymap('MappingMissingIncorrectItemsKeys', _KEY_ItemID, -1) as _KEY_ItemID,
    _KEY_Date,
    ...
FROM
$(vG.QVDPath)\2.Transform\Facts.qvd
(qvd);
Our final step is to create a fictitious item called 'Missing' with an item key of
negative one (-1) in the Items table:
Concatenate (Items)
// Add a single fictitious item so that the -1 keys in the fact table resolve
// to a selectable 'Missing' value in every item field
LOAD -1 as _KEY_ItemID,
    'Missing' as [Item ID],
    'Missing' as Item,
    'Missing' as [Item Source],
    'Missing' as [Item Group],
    ...
AutoGenerate (1);
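The mechanism that ties these three steps together is applymap()'s third parameter: item keys that exist in the mapping table pass through unchanged, while any incorrect or null key that is not found in the map is replaced by the default value negative one (-1), which now resolves to the selectable Missing item.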
If you noticed that various months do not appear on the horizontal axis, then you are
correct. As Bamdax 126 is not sold during every month, there is no relation between
Bamdax 126 and the months when the item was not sold. The values are missing,
and these missing values distort the line chart.
In order to completely resolve this issue, we would have to complement the fact table
with the Cartesian product of any or all dimension key sets, and in effect, measure
nil events. However, we should take into account that this may cause a severe
degradation of our QlikView application's performance. Therefore, we should apply
this solution pragmatically to solve specific analytical needs.
In this case, we specifically want to see a more accurate net sales trend for Bamdax
126 that includes the months that we did not sell the item. We do this by adding the
following code to our load script after loading the Facts table. The code creates a
Cartesian product of the Product and Date dimension key sets and adds it to our
Facts table:
Missing_Facts_Tmp:
// One row for the first day of every month in which a sale occurred
Load distinct makedate(Year(_KEY_Date), Month(_KEY_Date)) as _KEY_Date,
    1 as _ActualFlag
Resident Facts;
Left Join (Missing_Facts_Tmp)
// Joining a table with no field in common creates the Cartesian product of
// every item with every month
Load distinct _KEY_ItemID
FROM
$(vG.QVDPath)\2.Transform\Items.qvd
(qvd);

Concatenate (Facts)
Load *
Resident Missing_Facts_Tmp;

DROP Table Missing_Facts_Tmp;
Finally, we untick the Suppress Zero-Values checkbox in the Presentation tab of the
line chart in order to see the correct net sales trend for Bamdax 126. You will notice
that the following line chart shows that Bamdax 126 is purchased almost every two
months. It is difficult to make this observation in the previous chart.
Again, be very careful when creating a Cartesian product in
QlikView. We create a Cartesian product by joining two or
more tables that do not have a field in common. If the tables
are large, then this may cause QlikView to use all the available
RAM and freeze the computer.
These steps to eliminate null and missing values in the data model will help improve
our data analysis and visualization. However, we will most likely not use all the
fields in the data model, so we shouldn't waste time cleaning every field or creating
every missing value until they've proven their business value.
Case
We read by identifying the overall shape of words. If we use text values with all
uppercase letters, then all the words have the same block shape, which makes them
harder to identify and reduces readability. Also, all-uppercase text values tend to be
less aesthetically appealing.
A quick search in Google reveals that some people have
begun to challenge this belief. Hopefully, future scientific
studies will soon allow us to make the best decision and
confirm how to optimize text readability.
An even worse scenario is when a field has some text values in all uppercase and
others in lowercase. This is common when we integrate two data sources, and it is an
unnecessary distraction when we visualize data.
First, we use the capitalize() function when the field is a proper noun, such as
customer name, employee name, or city. The function will return a mixed-case text
value with the first letter of every word being a capital letter. Second, we use the
upper() function to standardize text fields that are abbreviations, such as state or
units of measurement. Finally, we use the lower() function to standardize all other
text fields.
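As a minimal sketch, assuming illustrative field names rather than fields from our actual data model, the three rules look as follows in the load script:

// capitalize() for proper nouns, upper() for abbreviations, lower() for the rest
Customers:
LOAD capitalize([Customer Name]) as [Customer Name], // 'ACME CORP' -> 'Acme Corp'
    upper(State) as State,                           // 'tx' -> 'TX'
    lower(Status) as Status                          // 'ACTIVE' -> 'active'
INLINE [
Customer Name,State,Status
ACME CORP,tx,ACTIVE
];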
This solution is not perfect for some text values, such as a street
address that contains both proper nouns and abbreviations.
For example, Cedar St. NW requires a more nuanced approach.
However, a street address is rarely used for analysis, and any
extra effort to standardize this or any other field should be
weighed against its business value.
Unwanted characters
Text values with strange characters can also be an unnecessary distraction.
Characters, such as a number sign (#), an exclamation mark (!), a vertical bar (|), and
so on, can sometimes find their way into text descriptions where they don't belong.
We can eliminate them with the purgechar() function or the replace() function.
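For instance, assuming a hypothetical Item field that contains stray characters, either function can clean it up in the load script:

purgechar(Item, '#!|')   // removes every #, !, and | character from the value
replace(Item, '|', '-')  // or substitutes one character for another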
Also, extra spaces between words in a dimension value can make our charts look
sloppy. QlikView tends to eliminate leading and trailing spaces, but it doesn't
eliminate extra spaces between words. We can accomplish this using the following
expression, preferably in our load script:
replace(replace(replace(FieldName,' ','<>'),'><',''),'<>',' ')
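To see why this expression works, trace an illustrative value with three spaces between words, 'a   b':

// replace(' ','<>')  turns 'a   b'    into 'a<><><>b'
// replace('><','')   turns 'a<><><>b' into 'a<>b'
// replace('<>',' ')  turns 'a<>b'     into 'a b'

Any run of spaces collapses into a single space, while single spaces survive the round trip untouched.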
Master calendar
Along with formatting field values, we also standardize the use of whole dimension
tables in order to facilitate analysis. Those that we reuse between different data
models are called conformed dimensions. The date dimension is ubiquitous and
serves as a great example for creating our first conformed dimension.
The range of dates that we use in each data model may change, so instead of using
the exact same table for each data model, we create a master calendar reusing the
same script. We call these reusable scripts subroutines, and in QDF we store script
subroutines in the following file path:
C:\Qlik\SourceData\99.Shared_Folders\3.Include\4.Sub
Although QDF has a master calendar subroutine, we will use the master calendar
subroutine that is available from QlikView Components
(http://qlikviewcomponents.org). QlikView Components is a library of script
subroutines and functions that was developed by Rob Wunderlich and Matt Fryer.
We prefer this master calendar subroutine because it automatically creates several
calendar-based set-analysis variables that we can use in our charts.
QDF is not the end but rather the means. It is designed to be
flexible so that we can adapt it to our needs. We can create,
import, and modify any reusable component that best fits our
business requirements.
We then add the following code to create the master calendar and the calendar-based
set-analysis variables:
// Tell QlikView Components to also create calendar-based set-analysis variables
SET Qvc.Calendar.v.CreateSetVariables = 1;
// Build the master calendar from the fact table's date field
call Qvc.CalendarFromField('_KEY_Date');
We finish the load script by running a subroutine that eliminates any temporary
variables that were used to create the master calendar:
CALL Qvc.Cleanup;
After running our load script, we now have the following master calendar:
Most of these columns look familiar. However, the columns that end with Serial
may be new to you. To those of us who have battled with defining date ranges with
set analysis, the Serial columns help make this an easier task.
For example, we can calculate year-to-date (YTD) sales easily with the following
expression:
sum({$<Year={$(=max(Year))}, Month=, _DateSerial={"<=$(=max(_DateSerial))"},
    _ActualFlag={1}>} [Net Sales])
However, instead of repeating this set analysis in every chart, we can use the
calendar-based set-analysis variables to calculate YTD sales. We can improve the
preceding expression using the set-analysis variable called vSetYTDModifier:
sum({$<$(vSetYTDModifier),_ActualFlag={1}>} [Net Sales])
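Such a variable simply stores a set-modifier fragment as text. Judging by the equivalence of the two preceding expressions, we can assume that vSetYTDModifier contains something like the following (the exact definition generated by QlikView Components may differ):

Year={$(=max(Year))}, Month=, _DateSerial={"<=$(=max(_DateSerial))"}

When a chart evaluates $(vSetYTDModifier), this fragment is dropped into the set modifier and its inner dollar-sign expansions are calculated at that moment.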
Customer stratification
Many of the user stories that we take into account when we start to use more advanced
data analysis and visualization techniques are not new. For example, we have probably
already used basic QlikView methods to resolve the following user story.
As a sales representative, I want to see who my most
important customers are so that I can focus my time and
effort on them.
The simplest way to define customer importance is to base it on how much they've
purchased or how much profit they've generated. In its simplest form, we can resolve
this user story with a bar chart that ranks customers by sales or gross profit.
However, given our increasing experience with QlikView, we'll take another look
at this user story and use a more advanced analysis technique called customer
stratification. This method groups customers according to their importance into bins.
The number of bins can vary, but for this exercise we will use four bins: A, B, C, and
D. We use two techniques to stratify customers. The first technique involves using
the Pareto principle, and the second involves using fractiles. We will review the first
technique in this chapter, and then in Chapter 5, Working Capital Perspective, we will
review the second technique.
Pareto analysis
Pareto analysis is based on the principle that most of the effects come from a few
causes. For example, most sales come from a few customers, most complaints come
from a few users, and most gross profit comes from a few products. Another name
for this analysis is the 80-20 rule, which refers to the rule of thumb that, for example,
80% of sales come from 20% of customers. However, it is important to note that the
exact percentages may vary.
We can visualize this phenomenon using the following visualization. Each bar
represents the twelve-month rolling net sales of one customer. The customers are
sorted from greatest to least, and their names appear along a line that represents the
accumulation of their sales. The customers whose names appear below the horizontal
reference line called 80% total sales make up 80% of the company's total twelve-month
rolling net sales. These are the customers to whom we want to dedicate more of our
time to provide great service:
We also confirm that we don't depend on too few customers by including a reference
line that represents 20% of the total number of active customers. While the exact
percentage depends on the business, we usually hope to have 20% or more of our
customers make up 80% of our sales. The preceding chart clearly shows whether
this is true by verifying that the accumulated sales line crosses the 80% total sales
reference line to the right of where the 20% total customers reference line does.
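For reference, the accumulated sales line in such a chart can also be written explicitly with inter-row chart functions. The following expression is only a sketch that assumes the chart is sorted descending by net sales; Exercise 2.2 relies on the chart's built-in accumulation option instead:

// running total of net sales from the first (largest) customer down to this row,
// divided by the total net sales of all customers
rangesum(above(sum({$<_ActualFlag={1}>} [Net Sales]), 0, rowno()))
    / sum({$<_ActualFlag={1}>} total [Net Sales])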
Exercise 2.2
Let's construct this chart in Sales_Perspective_Sandbox.qvw using the following
chart properties. These are only the principal chart properties that are necessary to
create the chart. Adjust the color, number format, font, and text size as you like:
Chart Properties                  Value
Dimensions / Used Dimensions      …
Expressions                       …
Presentation / Reference Lines    …
We avoid overlapping labels on the data points by adding some intelligence to
the expression called Customer, showing the label only when the customer's sales
participation is greater than 5%.
While this is a powerful visualization, we simplify customer stratification for our
sales representatives and assign each customer a particular letter according to how
they are ranked as per the Pareto analysis. Those that are assigned the letter A are
our most important customers, while those that are assigned the letter D are our least
important customers. The following table details how we assign each letter to our
customers:
Assigned Letter    Accumulated share of sales
A                  0-50%
B                  50-80%
C                  80-95%
D                  95-100%
If we use the chart accumulation options as in the previous exercise, or other
methods like inter-row chart functions, to determine which group each customer
belongs to, we are forced to always show every customer. If we select any customer
or apply any other filter, then we lose how that customer is classified. In order to
assign a letter to each customer and view their classification in any context, we use
a method based on alternate states. Let's perform the following tasks to classify our
customers based on rolling twelve-month net sales.
This method was first introduced by Christof Schwarz in the Qlik
Community (https://community.qlik.com/docs/DOC-6088).
Exercise 2.3
Perform the following steps for this exercise:
1. Create an Input Box that contains three new variables: vPctSalesA,
vPctSalesB, and vPctSalesC. Assign the values 50%, 80%, and 95% to each
variable, respectively.
2. In Settings -> Document Properties, click Alternate States in the General
tab. Add three new alternate states: A_CustomerSales, AB_CustomerSales,
and ABC_CustomerSales.
3. Create a button named Calculate Stratification with the following
actions:
Actions                Values
Copy State Contents    We leave the Source State empty and use A_CustomerSales as the Target State.
Pareto Select          State: A_CustomerSales, Field: Customer, …
Copy State Contents    We leave the Source State empty and use AB_CustomerSales as the Target State.
Pareto Select          State: AB_CustomerSales, Field: Customer, …
Copy State Contents    We leave the Source State empty and use ABC_CustomerSales as the Target State.
Pareto Select          State: ABC_CustomerSales, Field: Customer, …
4. Finally, create a straight table with Customer as the dimension and the
following two expressions:
Label                         Expression
Rolling 12-month net sales    =sum({$<$(vSetRolling12Modifier),_ActualFlag={1}>} [Net Sales USD])
Classif.                      =if(len(only({A_CustomerSales} Customer)) <> 0, 'A',
                                if(len(only({AB_CustomerSales} Customer)) <> 0, 'B',
                                if(len(only({ABC_CustomerSales} Customer)) <> 0, 'C', 'D')))
5. Optionally, add a background color that corresponds to each letter with the
following expression:
if(len(only({A_CustomerSales} Customer)) <> 0, blue(100),
if(len(only({AB_CustomerSales} Customer)) <> 0, blue(75),
if(len(only({ABC_CustomerSales} Customer)) <> 0, blue(50),
blue(25))))
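This expression works because only({A_CustomerSales} Customer) returns the customer's name only when that customer is still possible in the A_CustomerSales alternate state, that is, only when the Pareto Select action included it among the customers that accumulate the first 50% of sales. Testing the states from the most exclusive to the least exclusive therefore assigns each customer exactly one color, and the same pattern yields the letter for the Classif. expression.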
Using this method, we can select any customer and still observe how it is classified.
We can perform this same stratification technique using other additive metrics,
such as gross profit. Instead of customers, we can also stratify items or sales
representatives.
The second part of stratification involves using nonadditive metrics. For example,
we cannot use the Pareto principle to classify customers based on the average
number of days they take to pay their invoices. In Chapter 5, Working Capital
Perspective, we will review how we can classify customers using fractiles and create
a visualization that gives us a general overview of how they are stratified.
Sales representatives can now easily see which customers have the most impact on
sales and dedicate more time to providing them with better service. At the same
time, they need to avoid losing these customers, so let's take a look at how we can
help them anticipate customer churn.
Customer churn
Customer churn is a measure of a company's tendency to lose customers. Our user
story speaks of the need to detect at-risk customers and prevent them from becoming
lost customers.
Surely, there are many variables that we may use to predict customer churn. In this
case, we expect customers to consistently make a purchase every so many days, so we
will use a variable called customer purchase frequency to detect those customers that
we are at risk of losing.
We could calculate the average number of days between purchases and warn sales
representatives when the number of days since a customer's last purchase exceeds
that average.
However, a simple average may not always be an accurate measure of a customer's
true purchasing behavior. If we assume that their purchase frequency is normally
distributed, then we use the t-test to determine within what range the average is
likely to fall. Moreover, we prefer the t-test because it can be used for customers that
have made fewer than thirty or so purchases.
If we want our model to be sensitive to customer inactivity, then we send an alert
when the days since their last purchase exceed the average's lower limit. Otherwise,
if we don't want to overwhelm the sales representatives with alerts, then we use the
average's upper limit to determine whether we are at risk of losing a customer. We'll
apply the latter case in the following example.
Before we calculate the upper limit of a t-distribution, we need to add a table to the
data model that contains the number of days that elapse between the purchases each
customer makes. We add the [Customer Purchase Frequency] table with the
following code, which we add to the load script after having loaded the Facts table:
[Customer Purchase Frequency Tmp]:
// One row per customer per purchase date, considering only actual sales
Load distinct _KEY_Date as [Customer Purchase Date],
    _KEY_Customer
Resident Facts
Where _ActualFlag = 1
    and [Net Sales] > 0;

[Customer Purchase Frequency]:
Load [Customer Purchase Date],
    _KEY_Customer,
    // Peek() reads the previously loaded row; because the load is ordered by
    // customer and then date, a customer's first purchase gets 0 and every
    // later row gets the days elapsed since that customer's previous purchase
    if(_KEY_Customer <> peek('_KEY_Customer'), 0,
        [Customer Purchase Date] - peek('Customer Purchase Date')) as [Days Since Last Purchase]
Resident [Customer Purchase Frequency Tmp]
Order by _KEY_Customer, [Customer Purchase Date];

DROP Table [Customer Purchase Frequency Tmp];
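With the [Days Since Last Purchase] field in place, a minimal sketch of the alert itself, assuming a straight table with Customer as its dimension, could use QlikView's one-sample t-test aggregation (this is an illustration, not the book's exact solution):

// Flag a customer when the days elapsed since their last purchase exceed the
// upper limit of the confidence interval around their mean purchase interval;
// with the significance argument omitted, TTest1_Upper() returns the upper
// end of a 95% confidence interval
=if(today() - max([Customer Purchase Date]) >
    TTest1_Upper([Days Since Last Purchase]),
    'At risk', 'OK')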
Exercise 2.4
Let's build this chart in Sales_Perspective_Sandbox.qvw using the following
chart properties:

Chart Properties                  Value
General / Chart Type              …
Dimensions / Used Dimensions      …
Expressions                       …
Presentation / Reference Lines    …

Use the following code to create a reference line called Mean Days Since
Last Purchase:
=Avg([Days Since Last Purchase])
After additional adjustments to the presentation, we have the following chart. This
particular chart compares the actual purchasing frequency distribution for the
customer Gevee with both a normal and a t-distribution curve:
If we alert the sales representatives any time that a customer waits more than the
mean number of days, then we could be sending too many false alarms, or, in other
words, false positives. However, if we define at-risk customers as those who wait
longer than the upper limit of the 95% confidence interval, we have a higher
probability of alerting the sales representatives about customers that are really at
risk, or true positives.
Let's also keep in mind that not all lost customers have the same effect on the
company, so let's combine the stratification that we performed earlier in the
chapter with our churn-prediction analysis. In this way, sales representatives
know to focus their attention on A customers that are at risk, and not invest too
much time following up on D customers. The following table shows what this
analysis may look like:
Exercise 2.5
Chart Properties    Value
Expressions         …
A cycle plot (Cleveland, Dunn, and Terpenning, 1978) offers an alternative way to
compare a large number of periods. The following cycle plot is a QlikView extension
that displays the average sales by weekday in each month and compares it to the
total average sales, represented by a flat horizontal line:
Exercise 2.6
Let's create this cycle plot in Sales_Perspective_Sandbox.qvw using the following
steps:
1. In the Ch. 2 folder of the book's exercise files, double-click the
CyclePlot.qar file. QlikView will automatically open and notify you that
the extension has been installed successfully.
has been installed successfully.
2. In Sales_Perspective_Sandbox.qvw, activate WebView.
3. Right-click over an empty space and select New Sheet Object.
4. Click Extension Objects and drag the extension called Cycle Plot to an
empty space in the sheet.
5. Define the following properties for the cycle plot. The expression is:
sum({$<_ActualFlag={1}>} [Net Sales])
/
count(distinct _KEY_Date)
We should now see the cycle plot similar to the one previously shown. We will
continue to explore more QlikView extensions in later chapters.
We convert the first QlikView application into a design template by first leaving only
the sheets with unique layouts. A layout may include a background, a logo, a sheet
title, and lines that separate sections. We may also leave a few example objects, such
as list boxes and charts, that serve as references when we create the actual objects
that are specific to the each perspective. We save this template into a new QVW file
and use a copy of it every time we create a new QlikView application. The following
image shows an example layout that we use as a design template:
When we create the actual objects for a QlikView application, we can either use the
Format Painter Tool to transfer the property options of the existing reference objects
to the new ones, or we can create a simple QlikView theme based on an existing
chart. The key to making an effective theme is not to over-fit the design. We should
only be concerned with simple properties, such as borders and captions. Let's create
a simple theme and enable it to be used to create all new objects from this point on:
1. In the Properties dialog of the Pareto analysis chart we created in Exercise 2.2, let's
click Theme Maker in the Layout tab.
2. We select New Theme and save our theme as Basic_Theme.qvt in
C:\Qlik\SourceData\99.Shared_Folders\9.Misc.
3. We select Object Type Specific and Caption Border.
4. In the Object Type Specific properties, we select only Axis Thickness, Axis
Font, Axis Color, and Chart Title Settings.
5. In the Caption and border settings, we leave the default selections.
6. In the last step, select the option to Set as default theme for this document.
We can also change this setting in the Presentation tab of the Document
Properties.
We will now save a few seconds every time we create a new chart object. We should
repeat the same procedure for any other objects we frequently create. Also, if we
notice any other repetitive design changes that we are making to new objects, we can
update the theme using the same Theme Maker wizard.
Summary
Our QlikView sales perspective is a great place to start to use more advanced data
visualization and analysis techniques. Sales departments traditionally have both the
resources and the data available to continue to improve their QlikView applications.
Apart from the sales data model that we reviewed, we should continue to include
additional data. Adding cross-functional data from finance, marketing, and operations
gives sales representatives the information that they need to succeed. We can also add
external data sources, such as census data or any other government data. When we add
this additional data, we should keep in mind the cleaning and standardization tips that
we learned in this chapter.
As with customer stratification and customer churn, we can often create minimally
viable solutions using basic QlikView functionality. However, we can develop a
better solution by understanding and applying more advanced techniques, such as
Pareto analysis and statistical distributions.
We can also add more powerful visualizations and analysis if we use extensions. The
cycle plot is an excellent example of a useful visualization that is not available as a
native QlikView object. In the next chapter, let's review the data model, user stories,
analytical methods and visualization techniques for the financial perspective.