
Oracle® Retail Insights

Implementation Guide
Release 16.0
E80950-01

December 2016
Oracle Retail Insights Implementation Guide, Release 16.0

E80950-01

Copyright © 2016, Oracle and/or its affiliates. All rights reserved.

Primary Author: Nathan Young

This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it
on behalf of the U.S. Government, then the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software,
any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users
are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and
agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and
adaptation of the programs, including any operating system, integrated software, any programs installed on
the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to
the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management
applications. It is not developed or intended for use in any inherently dangerous applications, including
applications that may create a risk of personal injury. If you use this software or hardware in dangerous
applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other
measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages
caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks
are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD,
Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced
Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content,
products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and
expressly disclaim all warranties of any kind with respect to third-party content, products, and services
unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its
affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of
third-party content, products, or services, except as set forth in an applicable agreement between you and
Oracle.

Value-Added Reseller (VAR) Language

Oracle Retail VAR Applications

The following restrictions and provisions only apply to the programs referred to in this section and licensed
to you. You acknowledge that the programs may contain third party software (VAR applications) licensed to
Oracle. Depending upon your product and its version number, the VAR applications may include:

(i) the MicroStrategy Components developed and licensed by MicroStrategy Services Corporation
(MicroStrategy) of McLean, Virginia to Oracle and imbedded in the MicroStrategy for Oracle Retail Data
Warehouse and MicroStrategy for Oracle Retail Planning & Optimization applications.

(ii) the Wavelink component developed and licensed by Wavelink Corporation (Wavelink) of Kirkland,
Washington, to Oracle and imbedded in Oracle Retail Mobile Store Inventory Management.

(iii) the software component known as Access Via™ licensed by Access Via of Seattle, Washington, and
imbedded in Oracle Retail Signs and Oracle Retail Labels and Tags.

(iv) the software component known as Adobe Flex™ licensed by Adobe Systems Incorporated of San Jose,
California, and imbedded in Oracle Retail Promotion Planning & Optimization application.

You acknowledge and confirm that Oracle grants you use of only the object code of the VAR Applications.
Oracle will not deliver source code to the VAR Applications to you. Notwithstanding any other term or
condition of the agreement and this ordering document, you shall not cause or permit alteration of any VAR
Applications. For purposes of this section, "alteration" refers to all alterations, translations, upgrades,
enhancements, customizations or modifications of all or any portion of the VAR Applications including all
reconfigurations, reassembly or reverse assembly, re-engineering or reverse engineering and recompilations
or reverse compilations of the VAR Applications or any derivatives of the VAR Applications. You
acknowledge that it shall be a breach of the agreement to utilize the relationship, and/or confidential
information of the VAR Applications for purposes of competitive discovery.
The VAR Applications contain trade secrets of Oracle and Oracle's licensors and Customer shall not attempt,
cause, or permit the alteration, decompilation, reverse engineering, disassembly or other reduction of the
VAR Applications to a human perceivable form. Oracle reserves the right to replace, with functional
equivalent software, any of the VAR Applications in future releases of the applicable program.
Contents

Send Us Your Comments ......................................................................................................................... ix

Preface ................................................................................................................................................................. xi
Audience....................................................................................................................................................... xi
Documentation Accessibility ..................................................................................................................... xi
Related Documents ..................................................................................................................................... xi
Customer Support ...................................................................................................................................... xii
Review Patch Documentation .................................................................................................................. xii
Improved Process for Oracle Retail Documentation Corrections ....................................................... xii
Oracle Retail Documentation on the Oracle Technology Network ................................................... xiii
Conventions ............................................................................................................................................... xiii

1 Oracle Retail Insights Implementation Guide Introduction


Business Intelligence and Retail Insights ........................................................................................... 1-2

2 Setup and Configuration


Sizing Information................................................................................................................................... 2-1
Factors to Consider ............................................................................................................................ 2-1
Data Seeding of Positional Facts............................................................................................... 2-3
Data Migration from a Legacy Data Warehouse System .................................................................. 2-4
Reporting Scenarios................................................................................................................................. 2-4
Data Initial Load from RDE ................................................................................................................... 2-6
Inventory Position Initial Loading .................................................................................................. 2-6
Pricing Initial Loading....................................................................................................................... 2-6
Net Cost Initial Loading.................................................................................................................... 2-7
Base Cost Initial Loading .................................................................................................................. 2-7
Rendering Item Images, Item Attribute Images, Product Hierarchy Image................................. 2-7

3 Internationalization
Translation ................................................................................................................................................. 3-1
Multi-Language Setup ............................................................................................................................ 3-2
Scenario 1............................................................................................................................................. 3-2
Data Scenario 1a .......................................................................................................................... 3-3
Data Scenario 1b.......................................................................................................................... 3-3

Scenario 2............................................................................................................................................. 3-3
Data Scenario 2a .......................................................................................................................... 3-3
Scenario 3............................................................................................................................................. 3-3

4 Compression and Partitioning


Overview of Compression...................................................................................................................... 4-1
What Compression Does................................................................................................................... 4-1
Mechanics of Compression............................................................................................................... 4-2
Compressed Tables and 'CURRENT' Tables ................................................................................. 4-3
Coping with Slowly Changing Dimension Type 2 ....................................................................... 4-3
Fact Close Program (factcloseplp.ksh)..................................................................................... 4-3
Fact Open Program (factopenplp.ksh)..................................................................................... 4-3
Oracle Table Compression...................................................................................................................... 4-4
Overview of Partitioning Strategies ..................................................................................................... 4-4
Implementing Retail Insights Partitioning ..................................................................................... 4-5
Setup and Maintenance for Partitioning Retail Insights Compressed Inventory Table .......... 4-5
Implementing Partitioning for Compressed Inventory Table.............................................. 4-6
How Oracle Implements Partitions ................................................................................................. 4-7
Summary ............................................................................................................................................. 4-8

5 Performance
Key Factors in Performance.................................................................................................................... 5-1
Purging and Archiving Strategy ...................................................................................................... 5-1
Flexible Aggregates............................................................................................................................ 5-2
ETL Programs Performance.............................................................................................................. 5-4
Setting ETL Program Multi-threading..................................................................................... 5-4
ODI Configuration...................................................................................................................... 5-4
ETL Batch Scheduling ................................................................................................................ 5-5
Additional Considerations ........................................................................................................ 5-5
Report Design ..................................................................................................................................... 5-5
Additional Factors.............................................................................................................................. 5-6
Partitioning Strategy.......................................................................................................................... 5-6
Data Base Configuration ................................................................................................................... 5-6
Adequate Hardware Resources ....................................................................................................... 5-6
Leading Practices ...................................................................................................................................... 5-6
Customizations................................................................................................................................... 5-7
ODI Best Practices .............................................................................................................................. 5-7
Oracle BI EE Best Practices ............................................................................................................... 5-7
Batch Schedule Best Practices........................................................................................................... 5-7
Automation .................................................................................................................................. 5-7
Recoverability .............................................................................................................................. 5-8
Retail Insights Loading Batch Execution Catch-Up ...................................................................... 5-8
High Availability ........................................................................................................................ 5-9
Batch Efficiency ........................................................................................................................... 5-9
Aggregates List.......................................................................................................................................... 5-9

6 Retail Insights Aggregation Framework
Overview of Retail Insights Aggregation Framework...................................................................... 6-1
Aggregation Framework Initial Setup and Daily Execution ....................................................... 6-1
Importing ODI Components for Aggregation Framework .................................................. 6-2
Importing Aggregation Framework Shell script .................................................................... 6-2
Initial Aggregation Framework Setup ..................................................................................... 6-2
Aggregation Framework Verification...................................................................................... 6-4
Aggregation Framework Configuration......................................................................................... 6-4
Creating Customized Aggregation Table ............................................................................... 6-4
Configuring Framework in W_RTL_AGGREGATION_DAILY_TMP .............................. 6-5
Populating the Customized Aggregation Table ........................................................................... 6-8
Batch Process ............................................................................................................................... 6-8
Batch Status Control ................................................................................................................... 6-8
Batch Logging.............................................................................................................................. 6-8
Aggregation Framework Data Flow...................................................................................................... 6-9

7 Retail Insights Universal Adapter


Overview of Retail Insights Universal Adapter Framework........................................................... 7-1
Benefits................................................................................................................................................. 7-2
Universal Adapter Installation and Configuration ........................................................................... 7-2
Universal Adapter Execution ................................................................................................................. 7-3

8 Chief Marketing Officer Alerts Configuration


Configuration............................................................................................................................................ 8-1

9 Merchandise Financial Planning Configuration


Modify the .rpd File ................................................................................................................................. 9-1

10 Frequently Asked Questions

Send Us Your Comments

Oracle Retail Insights Implementation Guide, Release 16.0


Oracle welcomes customers' comments and suggestions on the quality and usefulness
of this document.
Your feedback is important, and helps us to best meet your needs as a user of our
products. For example:
■ Are the implementation steps correct and complete?
■ Did you understand the context of the procedures?
■ Did you find any errors in the information?
■ Does the structure of the information help you with your tasks?
■ Do you need different information or graphics? If so, where, and in what format?
■ Are the examples correct? Do you need more examples?
If you find any errors or have any other suggestions for improvement, then please tell
us your name, the name of the company who has licensed our products, the title and
part number of the documentation and the chapter, section, and page number (if
available).

Note: Before sending us your comments, you might like to check that you have the
latest version of the document and if any concerns are already addressed. To do this,
access the Online Documentation available on the Oracle Technology Network Web
site. It contains the most current Documentation Library plus all documents revised
or released recently.

Send your comments to us using the electronic mail address: [email protected]


Please give your name, address, electronic mail address, and telephone number
(optional).
If you need assistance with Oracle software, then please contact your support
representative or Oracle Support Services.
If you require training or instruction in using Oracle software, then please contact your
Oracle local office and inquire about our Oracle University offerings. A list of Oracle
offices is available on our Web site at http://www.oracle.com.
Preface

The Oracle Retail Insights Implementation Guide provides detailed information useful
for implementing the application. It helps you to view and understand the
behind-the-scenes processing of the application.

Audience
The Implementation Guide is intended for Oracle Retail Insights application
integrators and implementation staff.

Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support


Oracle customers that have purchased support have access to electronic support
through My Oracle Support. For information, visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are
hearing impaired.

Related Documents
For more information, see the following documents in the Oracle Retail Insights
Release 16.0 documentation set:
■ Oracle Retail Insights Installation Guide
■ Oracle Retail Insights Data Model
■ Oracle Retail Insights User Guide
■ Oracle Retail Insights Operations Guide
■ Oracle Retail Insights Release Notes
For information about Oracle BI administration and end use, see the documentation
library for Oracle Business Intelligence Enterprise Edition, particularly the following
documents:
■ Oracle Fusion Middleware System Administrator's Guide for Oracle Business Intelligence
Enterprise Edition

■ Oracle Fusion Middleware Metadata Repository Builder's Guide for Oracle Business
Intelligence Enterprise Edition
■ Oracle Fusion Middleware System User's Guide for Oracle Business Intelligence
Enterprise Edition

Customer Support
To contact Oracle Customer Support, access My Oracle Support at the following URL:
https://support.oracle.com

When contacting Customer Support, please provide the following:


■ Product version and program/module name
■ Functional and technical description of the problem (include business impact)
■ Detailed step-by-step instructions to re-create
■ Exact error message received
■ Screen shots of each step you take

Review Patch Documentation


When you install the application for the first time, you install either a base release (for
example, 16.0) or a later patch release (for example, 16.0.1). If you are installing the
base release and additional patch releases, read the documentation for all releases that
have occurred since the base release before you begin installation. Documentation for
patch releases can contain critical information related to the base release, as well as
information about code changes since the base release.

Improved Process for Oracle Retail Documentation Corrections


To more quickly address critical corrections to Oracle Retail documentation content,
Oracle Retail documentation may be republished whenever a critical correction is
needed. For critical corrections, the republication of an Oracle Retail document may at
times not be attached to a numbered software release; instead, the Oracle Retail
document will simply be replaced on the Oracle Technology Network Web site, or, in
the case of Data Models, to the applicable My Oracle Support Documentation
container where they reside.
This process will prevent delays in making critical corrections available to customers.
For the customer, it means that before you begin installation, you must verify that you
have the most recent version of the Oracle Retail documentation set. Oracle Retail
documentation is available on the Oracle Technology Network at the following URL:
http://www.oracle.com/technetwork/documentation/oracle-retail-100266.html

An updated version of the applicable Oracle Retail document is indicated by Oracle
part number, as well as print date (month and year). An updated version uses the
same part number, with a higher-numbered suffix. For example, part number
E123456-02 is an updated version of a document with part number E123456-01.
If a more recent version of a document is available, that version supersedes all
previous versions.

Oracle Retail Documentation on the Oracle Technology Network
Oracle Retail product documentation is available on the following web site:
http://www.oracle.com/technetwork/documentation/oracle-retail-100266.html

(Data Model documents are not available through Oracle Technology Network. You
can obtain these documents through My Oracle Support.)

Conventions
The following text conventions are used in this document:

Convention Meaning
boldface   Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic     Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace  Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.

1 Oracle Retail Insights Implementation Guide Introduction

Retail Insights offers a cloud-based, rich business intelligence solution to retail industry
users. Retail Insights is built on top of the latest Oracle technology stack and utilizes
Oracle Data Integrator (ODI) for extracting, transforming, and loading (ETL) the data
and Oracle Business Intelligence Enterprise Edition (BI EE) for end user reporting and
analysis needs.
Retail Insights architecture is designed to meet the retail industry's business
intelligence needs in both program and report performance.
The main characteristics of the Retail Insights product are:
■ Rich Reporting Capabilities: Retail Insights offers report creation capabilities in
three different flavors: Historical (As Was), Current (As Is), and Point-In-Time (PIT)
in the same environment. Packaged reports are provided as reference examples for
users to create their own customized reports according to their needs.
■ Comprehensive Solution: Retail Insights includes an end-to-end solution for
reporting and BI needs of the retailer by providing data integration with source
applications, transforming and loading the fact and dimension data, rolling up the
data for improved query performance, Web-based graphical user interface (GUI)
for report creation, shell scripts for setting up the batch schedule, and an
automated installer by following business intelligence best practices.
■ Performant ETL Code: Retail Insights data processing tool, ODI, offers high
performance for the database batch processes on Oracle database.
■ Extensibility: Retail Insights ETL code can be customized and extended for client
specific needs.
■ Flexibility: Retail Insights ODI and Oracle BI EE code promote flexibility during
implementation based on client specific needs and help in improving batch and
report performance.
■ Performant Reports: Retail Insights metadata is built using Oracle BI EE and is
designed to work in complex reporting scenarios.
■ Robust Data Model: Retail Insights data model is designed for supporting a
retailer's data needs in a business intelligence environment. Data model elements
are designed to work with Oracle BI EE architecture.

Business Intelligence and Retail Insights


This section briefly explains the fundamentals of business intelligence and data
warehousing in general. It is important to understand the overall architecture and data
flow for implementing Retail Insights.
Business intelligence includes the processes, methods, and technologies adopted by
organizations to answer complex business questions and for building comprehensive
decision support systems. These systems help organizations in maintaining secure,
conformed, and highly available data for all levels of users from top executives who
make decisions based on corporate level information to managers/analysts who
analyze their area and take actions based on the information.
Business intelligence is built using several processes and applications that maintain
these processes by adopting latest tools and technologies. One of the main components
of business intelligence is a data warehouse. A data warehouse is the repository that
stores the data extracted from several source systems and is modeled to support data
loading, reporting, and ad hoc analysis needs.
Retail Insights uses sophisticated techniques to populate the data warehouse.
Explained in greater detail throughout this guide, these techniques include taking the
data provided by Oracle Retail Data Extractor (RDE) and then rapidly transforming
that data and loading it into the data warehouse. Techniques used to load data into the
warehouse vary depending upon whether the data consists of facts or dimensions.
There are several fact and dimension tables in the subject areas available in Retail
Insights. Some examples of subject areas that exist in Retail Insights include Sales,
Inventory Position, and Base Cost. Each subject area has its own data mart to support
reporting and analytic needs. At the center of each data mart is fact data (note that fact
data here corresponds to both base fact data and aggregated data). Facts are the
transactions that occur in your data warehouse's source systems, such as RMS. You
may want to look at sales transaction facts, inventory stock count facts at stores or
warehouses, or inventory movement facts.
Facts have little meaning by themselves because they are usually just values (for
example, six sales at a store, 15 items left at a warehouse, or 300 items transferred).
What gives fact data true meaning is the intersection of dimensions in which facts
exist. In other words, six sales on Wednesday at store B, or 15 dishwashers in stock last
Monday at the Chicago warehouse, or 300 blouses transferred during the last week in
February from the St. Louis warehouse to the Denver warehouse. Dimension data,
therefore, exists in the data warehouse to serve as reference data to facts.
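To make the relationship between facts and dimensions concrete, the following is a
minimal SQL sketch of the kind of question a data mart answers. The table and column
names here are illustrative placeholders only; they are not actual Retail Insights objects.

-- Illustrative star-schema query: sales quantity by day and store.
-- Table and column names are placeholders, not Retail Insights objects.
SELECT d.day_dt,
       s.store_name,
       SUM(f.sales_qty) AS total_sales_qty
FROM   sales_fact      f
JOIN   day_dimension   d ON d.day_key   = f.day_key
JOIN   store_dimension s ON s.store_key = f.store_key
WHERE  d.day_dt = DATE '2016-02-24'
GROUP  BY d.day_dt, s.store_name;

The dimension tables (day and store) supply the context that gives the fact rows their
meaning, exactly as described above.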
The following diagram illustrates data elements of a generic data mart and their
inter-relationships:

Figure 1–1 Data Element Relationships



2 Setup and Configuration

The Setup and Configuration chapter provides parameters for setting up Retail
Insights. The following sections are included:
■ "Sizing Information"
■ "Data Migration from a Legacy Data Warehouse System"
■ "Reporting Scenarios"
■ "Data Initial Load from RDE"

Sizing Information
This section provides a list of factors that should be taken into account when making
sizing plans.
There are two major hardware components that make up the Retail Insights physical
environment:
■ Middle Tier Application Server - The middle tier application server hosts software
components such as Oracle WebLogic Server and Oracle Business Intelligence
Enterprise Edition (EE) or Oracle Business Intelligence Standard Edition One (SE
One).
■ Database - The Oracle Database stores large amounts of data that are queried in
generating Oracle BI reports. The daily data loading process and report query
processing are both heavily dependent on the hardware sizing decision.
Sizing is customer-specific. The sizing of the Retail Insights application is sensitive to a
wide variety of factors. Therefore, sizing must be determined on an individual
installation basis.
Testing is essential. As with any large application, extensive testing is essential for
determining the best configuration of hardware.
Database tuning is essential, just like any other database. The Oracle database is the
most critical performance and sizing component of Retail Insights. As with any
database installation, regularly monitoring database performance and activity levels
and regularly tuning the database operation are essential for optimal performance.

Factors to Consider
■ Application Server
Report Complexity - Reports processed through Oracle BI can range from very
simple one-table reports to very complex reports with multiple-table joins and
in-line nested queries. The application server receives data from the database and

converts it into report screens. The mix of reports that will be run will heavily
influence the sizing decision.
Number of Concurrent Users - The Retail Insights application is designed to be a
multiple concurrent use system. When more users are running reports
simultaneously, more resources are necessary to handle the reporting workload.
For more details on Clustering and Load Balancing, refer to the Clustering, Load
Balancing and Failover section in Oracle Business Intelligence chapter of the Oracle
Business Intelligence Enterprise Edition Deployment Guide.
■ Back End Database
Functions Used Determine Tables to be populated - Retail Insights is designed to
be a functional system so that some functions (such as supplier compliance or
order processing) that are available do not have to be used. To the extent that some
functions are not used, the amount of resources may be reduced correspondingly.
Fact Tables and Indexes - Disk space is required for tables and indexes. To identify
the database objects necessary for the selected functions, refer to the Data Model.
Dimension Tables and Indexes - Dimension tables and indexes also require space
and generally indicate the size of the data to be stored. Disk space must be
planned on the basis of record counts in the dimension tables.
Data Purging Plan - How Much Data to be Stored - The number of years of data to
be stored also contributes to the amount of disk space required. Disk space to store
fact data is generally linear with the number of years of data to be stored.
Database Backup and Recovery - The importance of the data and the urgency with
which a recovery must be made will drive the backup and recovery plan. The
backup and recovery plan may have a significant impact on disk space
requirements.
■ Data Storage Requirements
Transaction Volume - Sales - the higher the number of sales records, the higher the
disk storage requirements and the higher the resource requirements to process
queries against sales-oriented tables and indexes.
Positional Data - Inventory, Price, Cost - Positional data (data that is a snapshot at
a specific point in time, such as inventory data "as of 9:00AM this morning") can
result in very large tables. The Retail Insights concept of data compression (not to
be confused with database table compression) is important in controlling the disk
space requirement. For more information, see Chapter 4, "Compression and
Partitioning".
Extract, Transform, Load - Daily Processing - The daily loading process is a batch
process of loading data from external files into various temporary tables, then into
permanent tables, and finally into aggregation tables. Disk space and other
resources are necessary to support the ETL process.
Data Reclassification Requirements - Frequent hierarchy reclassification impacts
resources.
Processing Report Queries - Report queries submitted to the back-end database
have the potential to be large and complex. The size of the temporary tablespace
and other resources are driven by the nature of the queries being processed.
■ Configuration issues
Archivelog mode - If the database is being operated in archivelog mode,
additional disk space is required to store archived redo logs.

SGA and PGA sizing - the sizing of these memory structures is critical to efficient
processing of report queries, particularly when there are multiple queries running
simultaneously.
Initialization Parameters - The initialization parameter settings enable you to
optimize the daily data loading and report query processing.
Data Storage Density - As is the case with many data warehouses, the data stored
in the Retail Insights database is relatively static and dense storage of data in
database data blocks results in more efficient report query processing.
■ Hardware Architecture
Number and Speed of Processors - More and faster processors speed both daily
data loading and report query processing in the database. The application server
needs fewer resources than the database.
Memory - More memory enables a larger SGA in the database and will reduce the
processing time for report queries.
Network - Since the data from the report queries needs to go from the back-end
database to the application server, a faster network is better than a slower
network, but this is a relatively low priority.
Disk - RPMs, spindles, cache, cabling, JFS - I/O considerations are very critical to
optimal performance. Selection of disk drives in the disk array should focus on
speed. For example, faster RPMs, more spindles, larger cache, fiber optic cabling,
JFS2 or equivalent.
RAID - The selection of a RAID configuration is also a critical decision. In
particular, RAID5 involves computations that slow disk I/O. The key is to select
the RAID configuration that maximizes I/O while meeting redundancy
requirements for data protection.
Backup and Recovery - The backup and recovery strategy drives disk
configuration, disk size, and possibly the number of servers, if Dataguard or Real
Application Clusters are used.

Data Seeding of Positional Facts


For base level positional fact data, Retail Insights uses a compression approach to
reduce the data volume. Compression in Retail Insights refers to storing physical data
that only reflects changes to the underlying data source and filling in the gaps between
actual data records through the use of database views. For detailed information about
compression, refer to Chapter 4, "Compression and Partitioning".
To report positional data correctly in the Retail Insights user interface, data seeding is
required if clients launch Retail Insights later than the source system RDE (data
extracted from RMS). For performance reasons, it is recommended that all date-range
partitioned positional fact tables seed data on the first date or first week of each
partition. This avoids searching the data across partitions. Data used for seeding can
come from RMS or from client legacy systems. The following are some
recommendations to seed data:
■ If seeding data is for a newly added partition, you can run the Retail Insights script
retailpartseedfactplp.ksh to seed the new partition. This script moves seed data from
Retail Insights CUR tables to new partitions.
■ If seeding data is for new tables, you may need to provide snapshots of your
positional fact data. See "Data Initial Load from RDE" on page 2-6 for how to
provide initial snapshots of positional fact data.

Data Migration from a Legacy Data Warehouse System


Retail Insights fact tables may not have data at the same granularity as a client has in
its legacy system. The granularity of client history data can be higher or lower than
what the Retail Insights data model supports.
■ If the granularity of client history data is lower than what the Retail Insights data
model supports, the client can aggregate data to the same level that the Retail
Insights data model supports, and then populate the Retail Insights base table.
■ If the granularity of client history is higher than what the Retail Insights data
model supports, the client can aggregate data to the Retail Insights aggregation
tables (if they are available). This could cause inconsistencies between the Retail
Insights base tables and Retail Insights aggregation tables within the legacy time
period. When the client reports data on those time periods, the client has to be
aware of the inconsistencies between base level and high aggregation level.
■ Retail Insights provided APIs can be used for designing and developing the data
extraction programs from legacy system. For more information on the APIs, refer
to the Oracle Retail Interfaces Document.

Reporting Scenarios
By default, Retail Insights provides the features to use the following types of BI
reporting scenarios:
■ As-Was
■ As-Is
■ Point in Time
For more information on the reporting scenarios, refer to the Oracle Retail Insights User
Guide.
Based on business needs, you can configure to have one or all of these scenarios, or the
combination. These configuration changes will be in ODI (the change is only in the
batch scheduler, which is not available by default with Retail Insights), Oracle BI EE,
and the Oracle Retail Insights Data Model.
If the business requirement is to see the history as it happened all the time, which is
the As-Was scenario, then it is recommended that you disable all ODI jobs related to
As-Is and vice versa. The reason for this is to reduce the load and avoid unnecessary
jobs to improve the batch time. For more information on identifying these jobs, refer to
chapter 6 ODI Program Dependency in the Oracle Retail Insights Operations Guide.
Once the ODI jobs are disabled, the appropriate tables/objects must be disabled in
Oracle BI EE and the data model.
Let's take the example of the Inventory Receipts fact; the following are the required steps
if As-Is is not needed.
ODI
Disable the jobs related to the following tables:
W_RTL_INVRC_SC_DY_CUR_A
W_RTL_INVRC_SC_LC_WK_CUR_A
W_RTL_INVRC_SC_WK_CUR_A

This information can be found in the Oracle Retail Insights Operations Guide.

Oracle BI EE
In the "Fact - Retail Inventory Receipts" logical table, disable the following sources:
Fact_W_RTL_INVRC_SC_DY_CUR_A
Fact_W_RTL_INVRC_SC_LC_WK_CUR_A
Fact_W_RTL_INVRC_SC_WK_CUR_A

With this change, these tables can never be accessed and As-Is reporting cannot be
done for inventory receipts. The same changes must be done for all the other fact areas
as well. Since Retail Insights has three subject areas (Retail As-Was, Retail As-Is and
Retail Point in Time), all three are available to the user. Since As-Is components should
be disabled, go to the Administration -> Manage Privileges on Oracle BI EE web and,
for all the users/roles, change the permission setting to "Denied" for the As-Is subject
area. This way, the As-Is subject area will not be available for reporting. The following
figure displays the Administration screen.

Figure 2–1 Administration Screen

Data Model
All the unused tables can remain in the schema and will not be used by ODI and
Oracle BI EE programs. They can be maintained for future changes.
Point in Time reporting can be done with As-Was, As-Is, or with both. There are no
separate ODI processes for PIT. PIT can be derived from either As-Is or As-Was data
and the processing happens during report execution. The only difference is that PIT
reports cannot use aggregate tables and are always reported from the base fact tables.
There are some limitations to this reporting in some fact areas owing to
performance. For example, with the Positional facts, PIT is possible only for Product
hierarchy because there are corporate aggregates for product which have the
decompressed data.
The real differentiation of As-Is and As-Was in the data happens above the base fact
table or when reporting is done at the parent level. Otherwise if the reporting is done
at the lowest grain level, the result set will be the same for both.

Data Initial Load from RDE


In order to report Retail Insights positional data correctly, all Retail Insights positional
compressed tables need to be seeded with source data (RDE) correctly before they can
be loaded using Retail Insights batch ETL with daily data. This seeding process is to
load positional fact data for each item location combination available from RDE to
Retail Insights as initial data. This can be done by using the following recommended
approach. The approach assumes that the user uses RDE (an extract of RMS) as the
Retail Insights source system and that the required data is available from RMS.

Inventory Position Initial Loading


This initial inventory position data loading includes loading seeding data from RDE to
the Retail Insights W_RTL_INV_IT_LC_DY_F, W_RTL_INV_IT_LC_WK_A, and
W_RTL_INV_IT_LC_G tables. Perform the following steps:
1. In the data extract received from RDE, make sure the Retail Insights table
W_RTL_CURR_MCAL_G has the business date and week for the current Retail Insights
business date. This is used as the date for seeding data. This date should match the
RMS vdate set in the RDE environment. (A sample verification query is shown after
these steps.)
2. Execute the Retail Insights SIL script invildsil.ksh to load the inventory seeding
data from the staging table to the Retail Insights base fact tables
W_RTL_INV_IT_LC_DY_F and W_RTL_INV_IT_LC_G.
3. Execute the Retail Insights PLP script invildwplp.ksh to load inventory seeding
data to the Retail Insights table W_RTL_INV_IT_LC_WK_A.
4. Execute other inventory PLP scripts to populate the Retail Insights inventory
aggregation tables. These are chosen by the client for reporting purposes.
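As a sanity check for step 1, a query along the following lines can confirm the business
date recorded before the seeding scripts are run. The column names shown are
assumptions and should be verified against the Retail Insights data model; the same
check applies before the Pricing, Net Cost, and Base Cost initial loads described below.

-- Confirm the current business date and week used for seeding.
-- Column names are assumed; verify them against the data model.
SELECT mcal_day_dt, mcal_week_start_dt
FROM   w_rtl_curr_mcal_g;
-- The returned date should match the RMS vdate set in the RDE environment.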

Pricing Initial Loading


This initial Pricing data loading includes loading seeding data from RDE to the Retail
Insights W_RTL_PRICE_IT_LC_DY_F and W_RTL_PRICE_IT_LC_G tables. Perform
the following steps:
1. In the data extract received from RDE, make sure the Retail Insights table W_RTL_
CURR_MCAL_G has the business date and week for the current Retail Insights
business date. This is used as the date for seeding data. This date should match
the RMS vdate set for the SDE program.

2. Execute the Retail Insights SIL script prcilsil.ksh to load pricing seeding data from
the staging table to the Retail Insights base fact tables W_RTL_PRICE_IT_LC_DY_F
and W_RTL_PRICE_IT_LC_G.
3. Execute the other Price PLP scripts to populate the Retail Insights Price
aggregation tables. These are chosen by the client for reporting purposes.
4. When the initial loading is complete, change the filter condition back and
regenerate the two scenarios.

Net Cost Initial Loading


This initial Net Cost data loading includes loading seeding data from RDE to the Retail
Insights W_RTL_NCOST_IT_LC_DY_F and W_RTL_NCOST_IT_LC_G tables. Perform
the following steps:
1. In the data extract received from RDE, make sure the Retail Insights table W_RTL_
CURR_MCAL_G has the business date and week for the current Retail Insights
business date. This is used as the date for seeding data. This date should match the
RMS vdate set for the SDE program.
2. Execute the Retail Insights SIL script ncstildsil.ksh to load Net Cost seeding data
from the staging table to the Retail Insights base fact tables
W_RTL_NCOST_IT_LC_DY_F and W_RTL_NCOST_IT_LC_G.
3. Execute other Net Cost PLP scripts to populate the Retail Insights Net Cost
aggregation tables. These are chosen by the client for reporting purposes.
4. When the initial loading is complete, change the filter condition back and
regenerate the two scenarios.

Base Cost Initial Loading


This initial Base Cost data loading includes loading seeding data from RDE to the Retail
Insights W_RTL_BCOST_IT_LC_DY_F and W_RTL_BCOST_IT_LC_G tables. Perform
the following steps:
1. In the data extract received from RDE, make sure the Retail Insights table W_RTL_
CURR_MCAL_G has the business date and week for the current Retail Insights
business date. This is used as the date for seeding data. This date should match the
RMS vdate set for the SDE program.
2. Execute the Retail Insights SIL script cstildsil.ksh to load Base Cost seeding data
from the staging table to the Retail Insights base fact tables
W_RTL_BCOST_IT_LC_DY_F and W_RTL_BCOST_IT_LC_G.
3. Execute the other Base Cost PLP scripts to populate the Retail Insights Base Cost
aggregation tables. These are chosen by the client for reporting purposes.
4. When the initial loading is complete, change the filter condition back and
regenerate the two scenarios.

Rendering Item Images, Item Attribute Images, Product Hierarchy Image


This section provides the setup and configuration of item images, item attribute image,
and product hierarchy image in Oracle BI EE. Retail Insights only holds the image
location, in the form of a URL pointing to a server where the images are hosted.
The following items need to be configured in Oracle BI EE in order to render these
item images.

1. Create an attribute in the rpd mapping to the column where the URL is stored.
2. In Answers, add this new attribute to the required report. Go to the column properties
of this attribute and change the data format to Image URL. Save the report.
3. Bounce all the services (including WebLogic).
4. Run the report to see images.



3 Internationalization

Internationalization is the process of creating software that is able to be translated
more easily. Changes to the code are not specific to any particular market. Retail
Insights has been internationalized to support multiple languages.

Note: Retail Insights uses DB language codes, not ISO codes, for
all the supported languages. Retail Insights will look up language
codes from RDE. If a language supported by Retail Insights is not
available in the source system, then the language under
SRC_LANGUAGE_CODE will be used as the local language.

This section describes configuration settings and features of the software that ensure
that the base application can handle multiple languages.

Translation
Translation is the process of interpreting and adapting text from one language into
another. Although the code itself is not translated, components of the application that
are translated may include the following:
■ Graphical user interface (GUI)
■ Error messages
■ Reports
The following components are not translated:
■ Documentation (online help, release notes, installation guide, user guide,
operations guide)
■ Batch programs and messages
■ Log files
■ Configuration tools
■ Demonstration data
■ Training materials
The user interface for Retail Insights has been translated into:
■ Chinese (simplified)
■ Chinese (traditional)
■ Croatian


■ Dutch
■ French
■ German
■ Greek
■ Hungarian
■ Italian
■ Japanese
■ Korean
■ Polish
■ Portuguese (Brazilian)
■ Russian
■ Spanish
■ Swedish
■ Turkish

Multi-Language Setup
Retail Insights data is supported in 18 languages. This section provides details of
various scenarios that may come across during implementation. See "Translation" on
page 3-1 for a list of supported languages.
Since multi-language data support in Retail Insights is dependent on the availability of
the multi-language data in the source system, it is important to understand various
scenarios the user may encounter. Before proceeding, review the following facts about
multi-language support:
■ Retail Insights programs extract multi-language data from source systems.
■ A list of languages for multi-language data support can be chosen during the
installation process. Please refer to the Oracle Retail Insights Installation Guide for
more details.
■ Depending on the implementation, the source system may or may not have data
for particular supported language(s). For example, RMS supports Item
Descriptions in multiple languages but the item's description may not be available
in the translated languages.
■ For source system released languages, please refer to source system Operations
Guides.
■ You must select a Retail Insights primary language for data that is supported
within the source system.

Scenario 1
All the supported languages are implemented in Retail Insights and the same set of
languages are supported in the source system as well.
Multi-lingual data sets are enabled in both Retail Insights and the source system.

Data Scenario 1a
Translated data exists for all records in Source System: This is an ideal scenario where
the source system supports data for the same set of languages as Retail Insights and
data for the required column exists in all the languages in the source system.
In this scenario the attributes that are supported for multi-languages will get all the
multi-language data in Retail Insights.

Data Scenario 1b
Translated data does not exist for some of the records in the source system.
For the attributes for which data is not available in the source system, Retail Insights
will display the attribute in source system primary language. For example, Retail
Insights requests data in German and English languages. In RMS the Item attribute
description is not available in the German language but is available in English
language.
Retail Insights will display Item description in English to a user who is logged into
Oracle BI EE (assuming English is the primary language of RMS for that
implementation).

Scenario 2
All or a subset of languages are implemented in Retail Insights and some of these are
not supported in the source system:

Data Scenario 2a
Translated data does not exist for some of the languages in the source system. In this
case, the data is displayed in Retail Insights' primary language.

Scenario 3
Source system supports more languages than are supported for Retail Insights. In this
case Retail Insights filters out the additional languages' data. This data will not be
loaded into Retail Insights tables and cannot be used for reporting.



4 Compression and Partitioning

This chapter describes how Retail Insights implements compression and offers a
discussion of Oracle partitioning.

Overview of Compression
Although data warehouses are often very large, the amount of detail generated in
some Retail Insights tables is enormous even by usual standards. For example, a retailer
with 500,000 items and 500 locations would generate 250,000,000 new rows each day.
Storing this amount of uncompressed data is impractical from a disk storage
perspective, considering the cost to store the rows, the cost to perform backups, and
the cost of other database maintenance operations.
One approach that Retail Insights uses to reduce the data volume is compression. This
chapter describes:
■ What compression does
■ Mechanics of compression
■ Which tables are currently compressed
■ Oracle features that are related to compression
■ Strategies for implementing compressed tables

What Compression Does


Compression refers to storing physical data that only reflects changes to the
underlying data source, and filling in the gaps between actual data records through
the use of database views. This method is engaged primarily for subject areas that are
perpetual, such as inventory. That is, when querying sales data, a valid sale record
exists (a sale occurred) or a record does not exist (no sale occurred). However, when
querying for on-hand inventory, even if no change occurred to the inventory on the
date desired, a valid value is still required. One way to resolve this discrepancy is to
store a record for every day and a valid item-location combination as mentioned
above. Another method, compression, allows for the storage of only changes to the
inventory position. The query is resolved by looking backward through time from the
desired date (if no change record exists on that date) until an actual change record is
found. This method returns the correct current data with the minimum requirements
necessary for processing and storing data.
Retail Insights compression is different from Oracle DB table compression. Oracle DB
table compression compresses data by eliminating duplicate values within a data block.
Any repetitive occurrence of a value in a block is replaced by a symbol entry in a
"symbol table" within the data block. For example, if DEPT_NUM=10 is repeated five
times within a data block, it is stored only once; for the other four occurrences, a
symbol entry is stored in the symbol table. Oracle database table compression can
also significantly reduce disk and buffer cache requirements for database tables while
improving query performance. Oracle database compressed tables use fewer data
blocks on disk, reducing the disk space requirement.

Mechanics of Compression
The purpose of decompression views is to give the application the illusion that there is
a record for each possible combination (that is, an item-location-day record for each
permutation), when in fact there is not. Thus, the fact of whether a table is compressed
or not should not be visible to the application that queries data from that table.
A compressed table is made up of two distinct parts: a 'seed' that consists of all
existing combinations at a point in time (typically the first day or week of the table or
partition) and the changed data since that time. Retail Insights compressed tables use
FROM_DT_WID and TO_DT_WID columns to indicate the time range in which
records are valid.
When resolving a query for a particular record, the decompression view provides the
latest record for the requested item and location with the maximum day that is less
than or equal to the requested day. A decompression view needs to encompass both
the seed and all of the changed data since that seed. A decompression view compares
the FROM_DT_WID and TO_DT_WID of records with FROM_VALUE and TO_VALUE on
the partition mapping table W_RTL_PARTITION_MAP_G to make sure that the right
partition is used by the decompression view.
To illustrate how the decompression views actually work, assume the following:
■ The user is interested in the inventory position of item 10 at location 10 on
1/23/02.
■ The seed was done on 1/1/02. Changes were posted on 1/4/02, 1/15/02, and
1/30/02.
■ The row that is presented to the application by the decompression view is the row
on 1/15/02, because it is the latest date that is less than or equal to the requested
date.
As a second example, assume that the inventory position of item 10, location 10, day
1/3/02 was desired. Because there was no change record less than or equal to the
desired date, the seed record from 1/1/02 will be presented to the application.
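The following is an illustrative sketch only, not the delivered Retail Insights view definition: a single item/location/day lookup against a compressed table can be expressed as a query that returns the record whose validity range covers the requested date. FROM_DT_WID and TO_DT_WID are the columns described above; the key column names and bind variables are placeholders.

-- Illustrative sketch only; key column names and bind variables are placeholders.
SELECT f.*
  FROM W_RTL_INV_IT_LC_DY_F f
 WHERE f.ITEM_WID = :item_wid
   AND f.LOCATION_WID = :location_wid
   AND :requested_dt_wid BETWEEN f.FROM_DT_WID AND f.TO_DT_WID;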
Compression's performance is excellent when the user is querying for a single day (as
in the example above). When querying a large group of records, however (for
example, all of the inventory positions at a given location on a given day), the
performance can be unacceptable. Even though the user is requesting a group of
information back, and in most cases the database can process groups of information
efficiently, each individual row must be evaluated individually by the decompression
view and cannot be processed as a group. To counteract the slow performance of these
summary operations, you may take advantage of compressed table partition seeding
(see "Overview of Partitioning Strategies" on page 4-4).
This partition seeding utilizes the latest position status tables (also known as 'current'
tables). An example is the W_RTL_INV_IT_LC_G table, which holds the current
decompressed position for every item and location on the W_RTL_INV_IT_LC_DY_F
table. This position can be used as a partition seed. This position is also utilized by
base Retail Insights code during major change fact seeding.


Compressed Tables and 'CURRENT' Tables


The table below illustrates the compressed tables within Retail Insights, along with
their corresponding 'CURRENT' tables.

Table 4–1 Compressed Tables and CURRENT Tables


Compressed Tables Current Tables
W_RTL_INV_IT_LC_DY_F W_RTL_INV_IT_LC_G
W_RTL_INV_IT_LC_WK_A W_RTL_INV_IT_LC_G
W_RTL_PRICE_IT_LC_DY_F W_RTL_PRICE_IT_LC_G
W_RTL_BCOST_IT_LC_DY_F W_RTL_BCOST_IT_LC_G
W_RTL_NCOST_IT_LC_DY_F W_RTL_NCOST_IT_LC_G
W_RTL_CO_LINE_STATUS_F W_RTL_CO_LINE_STATUS_G
W_RTL_COMP_PRICE_IT_LC_DY_F W_RTL_COMP_PRICE_IT_LC_G
W_RTL_PO_ONALC_IT_LC_DY_F W_RTL_PO_ONALC_IT_LC_G
W_RTL_PO_ONORD_IT_LC_DY_F W_RTL_PO_ONORD_IT_LC_G

Coping with Slowly Changing Dimension Type 2

Fact Close Program (factcloseplp.ksh)


On a compressed fact table, a record is only posted to the table when there is a change
in one of the fact attributes. If there is no activity, no record is posted. Decompression
views then fill in the gaps between physically posted records to ensure that a fact
record appears for each item-location-day combination in the user interface. However,
when an item, location, or department is closed or major-changed, any fact record with
those dimensions becomes inactive. The decompression views need to be informed to
stop filling in the gap after the last record was posted. To accomplish this instruction,
scenario PLP_RetailFactCloseFact (called by factcloseplp.ksh) first queries the W_RTL_
PROD_RECLASS_TMP and W_RTL_ORG_RECLASS_TMP tables to determine the
compressed item-location facts that need to be closed today. The PLP_
RetailFactCloseFact scenario then updates TO_DT_WID to the current date WID to
stop the record. The decompression view fills in records up to the day that is in the
range between FROM_DT_WID and TO_DT_WID.
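Conceptually, and not as the literal PLP_RetailFactCloseFact implementation, the close step amounts to capping the open validity range of the affected item-location records, along the lines of the sketch below. The key column names and the bind variable are placeholders.

-- Conceptual sketch of the fact-close step; key columns are placeholders.
UPDATE W_RTL_INV_IT_LC_DY_F f
   SET f.TO_DT_WID = :current_business_dt_wid
 WHERE f.TO_DT_WID > :current_business_dt_wid
   AND (f.ITEM_WID, f.LOCATION_WID) IN
       (SELECT t.ITEM_WID, t.LOCATION_WID
          FROM W_RTL_PROD_RECLASS_TMP t);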

Fact Open Program (factopenplp.ksh)


Retail Insights Data Compression tables require seeding when a major change in the
product and organization dimension causes new surrogate keys to be created for items
or locations. Seeding the compressed tables is required because the new key represents
a new hierarchy relationship. If the new key is not represented on the compressed
table, the compression view does not pick up any data from the day the old
dimensions were closed to the day a record with the new dimensions is posted to the
compressed fact tables. This missed data causes inaccuracy in query results and
incorrect data aggregation.
To accomplish this seeding scenario, PLP_RetailFactOpenFact (called by
factopenplp.ksh) first queries the W_RTL_PROD_RECLASS_TMP and W_RTL_ORG_
RECLASS_TMP tables to determine what compressed item-location facts need to be
closed today. The PLP_RetailFactOpenFact scenario then inserts seeded (closed)
records for tomorrow's FROM_DT_WID, indicating that the closed fact records are no


longer valid beginning tomorrow, when the newly seeded records (from PLP_
RetailFactOpenFact) become active. In the case of the compressed week table, W_RTL_
INV_IT_LC_WK_A, PLP_RetailFactOpenFact inserts seeded records with the next
week's WID.

Oracle Table Compression


Oracle table compression not only helps customers save disk space, it also helps
increase cache efficiency because more blocks can fit in memory. Advanced
Compression is available for Oracle 11g Enterprise Edition, and Hybrid Columnar
Compression is available for Exadata only.
Because compression can cause contention when tables are updated, it is suggested
that users compress only non-current partitions and leave the current partition
uncompressed. This partial compression approach has proven to be a valuable
implementation option.
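As one possible way to apply this partial compression approach (an example only, with hypothetical partition names), a historical partition can be compressed in place while the current, actively loaded partition is left uncompressed:

-- Example only: compress a historical partition and keep local indexes usable;
-- the current (actively loaded) partition is left uncompressed.
ALTER TABLE W_RTL_INV_IT_LC_DY_F
  MOVE PARTITION P_1 COMPRESS UPDATE INDEXES;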

Overview of Partitioning Strategies


This section describes partitioning strategies for Retail Insights data marts. Although
optional, partitioning provides powerful performance benefits and is therefore highly
recommended, particularly for the tables listed in the RA_partitioned_tables.xls
spreadsheet (see the Oracle Retail Insights Installation Guide). If a report runs slowly
and a fact table in the query is not partitioned, that fact table may be a good candidate
for partitioning. For large tables, such as the inventory, pricing, cost, and sales tables,
splitting them into table partitions can provide the following benefits:
■ Partitions are smaller and therefore easier to manage.
■ Management operations on multiple partitions can occur in parallel.
■ Partition maintenance operations (such as index rebuilds) are faster than full table
operations.
■ Partition availability is higher than table availability (that is, when recovering a
particular partition, users may access all other partitions of the table at the same
time).
■ The optimizer can prune queries to access data in only the partition of interest, not
the entire table (that is, if you are interested only in February's data, you do not
need to look at any of the table's data outside of the February partition).
■ Partitions are separate database objects, and can be managed accordingly (that is,
if December sales are frequently accessed throughout the year whereas other
months are not, the December sales partition could be located in a special
tablespace that allows for faster disk access).
■ In some situations, the Oracle database can create parallel operations on partitions
that it cannot on tables; an example is joining between two different tables if they
are partitioned on the same key (this feature is called a 'parallel partition-wise
join').
Indexes, as well as tables, can be partitioned. Index partitions can be global (one index
over the table, regardless of whether the table is partitioned or not) or local (there is a
one-to-one correspondence between index partitions and table partitions). In general,
when tables are partitioned, local indexes should be preferred to global indexes for the
following reasons:
■ Maintenance operations involve only one index partition instead of the entire
index (that is, if the oldest table partition is aged out, a local index partition can be


dropped along with its corresponding table partition, whereas an entire global
index will need to be rebuilt after it becomes unusable when a table partition is
dropped).
■ The optimizer can generate better query access plans that use only an individual
partition.
■ When multiple index partitions are accessed, the optimizer may choose to use
multiple parallel processes rather than just one.

Implementing Retail Insights Partitioning


For retailers who choose to partition a fact table, Figure 4–1 illustrates some of the
possibilities for table and index layout.
In general, option 2 is the preferred solution for large regular or compressed tables (for
example, the W_RTL_INV_IT_LC_DY_F and W_RTL_INV_IT_LC_WK_A tables). It
uses table partitions and local indexes, thus minimizing the impact of index
maintenance and the deletion of old table partitions. Global indexes on partitioned
tables are not recommended.
Option 1 can be used for smaller compressed tables. The disadvantage is that,
functionally, there is no way to delete historical data and the table continues to grow.

Figure 4–1 Retail Insights Partitioning Options

Setup and Maintenance for Partitioning Retail Insights Compressed Inventory Table
The following procedure describes how to set up and maintain Retail Insights
partitioning of the compressed inventory table (W_RTL_INV_IT_LC_DY_F):
1. Make the following determinations, among others (see the Oracle Retail Insights
Installation Guide for details):
■ Your partitioning strategy.
■ The time period your partitions will use.
■ The 'values less than' boundaries according to your multi business calendar
WID values.
■ How many partitions are to be used.


■ The partition naming standard.


2. On the database, create the partitions and indexes for the tables you want to
partition.
3. Verify you have populated the Time Calendar Dimension. See the Oracle Retail
Insights Installation Guide for details.
4. Perform steps 2 and 3 whenever any of the following events occur:
■ Records are added to or deleted from the Time Calendar table
W_MCAL_DAY_D (extending the time calendar for a new time period).
■ Partitions are added to the Inventory Position table
W_RTL_INV_IT_LC_DY_F.
Other maintenance activities include archiving and removing of partitions.

Implementing Partitioning for Compressed Inventory Table


Once the tables (including partitions) and indexes have been created, the data must be
loaded. For tables that have a corresponding current status table (such as W_RTL_
INV_IT_LC_DY_F and W_RTL_INV_IT_LC_G), the following steps are recommended:

Note: All of these steps can be performed automatically by the Retail
Insights seeding program PLP_RetailPartSeed.ksh. See the Oracle
Retail Insights Operations Guide for details about how to execute this
script.

1. In the partition mapping table W_RTL_PARTITION_MAP_G, update the TO_VALUE
column with the current business date WID for the latest partition on the target
table W_RTL_INV_IT_LC_DY_F.
2. Insert a new record into the partition mapping table W_RTL_PARTITION_MAP_G with
the next business date WID or week WID as FROM_VALUE and the dummy value
'999999999999999' as TO_VALUE. The TABLE_NAME column must be populated
with the target table W_RTL_INV_IT_LC_DY_F and the PARTITION_NAME column
must be populated with 'P_XX'. (A sketch of these first two steps appears after
this procedure.)

Note: XX is the numeric part of the current partition name on the same
target table, plus 1. This partition name 'P_XX' can be different from
the real partition name used in the database.

3. Copy the data in the W_RTL_INV_IT_LC_G table as the seed to the first partition
or to a new partition that is going to be used on the next day.
From this point on, only the changed records are added to the
W_RTL_INV_IT_LC_DY_F table, whereas the W_RTL_INV_IT_LC_G table remains a
full, uncompressed version that holds the current inventory position as of the
last time period.
4. When a partition boundary is crossed, the W_RTL_INV_IT_LC_G table is copied
as the seed to the new partition, via the PLP_RetailPartSeed.ksh program.
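A sketch of steps 1 and 2 above is shown below for the W_RTL_INV_IT_LC_DY_F target table. The WID bind variables and the partition number are placeholders, additional columns may be required in your environment, and PLP_RetailPartSeed.ksh performs the equivalent work automatically.

-- Step 1 sketch: close the mapping row for the current (latest) partition.
UPDATE W_RTL_PARTITION_MAP_G
   SET TO_VALUE = :current_business_dt_wid
 WHERE TABLE_NAME = 'W_RTL_INV_IT_LC_DY_F'
   AND TO_VALUE = '999999999999999';

-- Step 2 sketch: open a mapping row for the next partition (P_XX is the
-- current partition number plus 1).
INSERT INTO W_RTL_PARTITION_MAP_G
  (TABLE_NAME, PARTITION_NAME, FROM_VALUE, TO_VALUE)
VALUES
  ('W_RTL_INV_IT_LC_DY_F', 'P_XX', :next_business_dt_wid, '999999999999999');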
If you have questions about how to implement partitioning with compression or
require assistance implementing partitioning, contact Oracle Customer Support or
Oracle Retail Services.


How Oracle Implements Partitions


This section highlights how partitions are implemented in an Oracle data warehouse.
For details on partitioning concepts, refer to the chapter Partitioning in Data
Warehouses in the Oracle Database Data Warehousing Guide 11g Release 1 (11.1).
Range partitions in the Oracle data warehouse/database are split by a range of values
on the partition key. Examples include partitions by month, partitions by department
number, and partitions by item range. Partitioning options also include hash partitions
(spreading the rows across a fixed number of partitions by applying a hash function to
the partition key), and composite partitioning (a combination of range partitioning
and hash partitioning). It is recommended that you partition the tables using range
partitioning. Oracle Retail also recommends that the partition key be the date field in
the primary key to allow partitions to be aged out when no longer needed.
As a general guideline, partitioning must be considered for tables listed in the
RA_partitioned_tables.xls spreadsheet (see the Oracle Retail Insights Installation Guide)
and any fact tables in a slow-running query. There is an administrative trade-off
between having more partitions to manage and obtaining the benefits of partitioning.
The actual physical layout of partitions varies from site to site. A general approach is
to put each partition into its own tablespace and map each tablespace to a separate
mount point. This has several advantages:
■ Maintenance operations, as well as tablespace recovery, can occur on a partition
while other partitions are unaffected.
■ If manual performance tuning of the data files is being done, tablespaces and their
files can be moved around to achieve optimal performance.
■ If partitions are no longer being updated, their tablespaces can be changed to
READ ONLY, which significantly reduces backup requirements.
■ Separate mount points pointing to a separate set of physical drives significantly
reduces I/O time.
Partitions are ordered from low values to high values. The partition key value for a
partition is a non-inclusive upper bound (high value) for that partition. That is, if the
W_RTL_SLS_IT_LC_DY_A table is partitioned by month, the high value for the January
2010 partition is 01-Feb-2010. A low value can always be inserted into the lowest
partition. However, you may not be able to insert a high value, depending on the high
value of the highest partition. For instance, if the highest partition has a high value of
01-Feb-2010 and you attempt to insert a record with a date of 01-Feb-2010, the row
will not be inserted into the table (the high value of 01-Feb-2010 is a non-inclusive
upper bound). For this reason, a special high value partition with a key of
MAXVALUE is available in the Oracle database. It is recommended that all partitioned
tables include a dummy partition with a MAXVALUE high value.
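As a generic illustration (not the delivered Retail Insights DDL), a range-partitioned fact table with monthly partitions and a MAXVALUE catch-all partition might be created as follows. All names, columns, and boundary values are examples only.

-- Illustrative DDL only; names, columns, and boundaries are examples.
CREATE TABLE SLS_FACT_EXAMPLE (
  ITEM_ID  NUMBER,
  LOC_ID   NUMBER,
  DAY_DT   DATE,
  SLS_AMT  NUMBER
)
PARTITION BY RANGE (DAY_DT) (
  PARTITION P_JAN_2010 VALUES LESS THAN (TO_DATE('01-FEB-2010','DD-MON-YYYY')),
  PARTITION P_FEB_2010 VALUES LESS THAN (TO_DATE('01-MAR-2010','DD-MON-YYYY')),
  PARTITION P_MAX      VALUES LESS THAN (MAXVALUE)
);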
There are special considerations for the partitioning of Retail Insights compressed
tables. The following is a brief description of the different partition maintenance
commands. Refer to the current Oracle database documentation set for more details:
■ ADD PARTITION: Adds a new partition to the high end of a partitioned table.
Because it is recommended to have a MAXVALUE partition, and this is the highest
partition, the ADD PARTITION functionality can be achieved by performing a
SPLIT of the MAXVALUE partition instead (see the sketch at the end of this section).
■ DROP PARTITION: Drops the partition. This is the typical method to delete the
oldest partitions (those with the lowest values) as they age to maintain a rolling
window of data.


■ EXCHANGE PARTITION: Converts a non-partitioned table into a partitioned
table or converts a partitioned table into a non-partitioned table.
■ MERGE PARTITION: Merges two adjacent partitions into one.
■ MOVE PARTITION: Moves a partition to another segment; this is used to
defragment a partition or to change its storage characteristics.
■ SPLIT PARTITION: Splits an existing partition by adding a new partition at its low
end.
■ TRUNCATE PARTITION: Removes all rows from the partition.
Oracle database automatically maintains local index partitions in a 1-to-1
correspondence with their underlying table partitions. Any table partition operations,
such as ADD PARTITION, also affect the relevant index partitions.
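Continuing the illustrative SLS_FACT_EXAMPLE table from above, a rolling window is typically maintained by splitting the MAXVALUE partition to "add" a new period and dropping the oldest partition. Partition names and dates are hypothetical.

-- Example only: add a new month by splitting the MAXVALUE partition.
ALTER TABLE SLS_FACT_EXAMPLE
  SPLIT PARTITION P_MAX AT (TO_DATE('01-APR-2010','DD-MON-YYYY'))
  INTO (PARTITION P_MAR_2010, PARTITION P_MAX)
  UPDATE INDEXES;

-- Example only: age out the oldest month; local index partitions are dropped
-- automatically along with the table partition.
ALTER TABLE SLS_FACT_EXAMPLE DROP PARTITION P_JAN_2010;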

Summary
Partitions are useful for breaking up large tables into smaller, more manageable pieces.
Take note of the following partitioning recommendations:
■ Consider partitioning tables that are in the RA_partitioned_tables.xls spreadsheet
(see the Oracle Retail Insights Installation Guide) and fact tables that are in a slow-
running query.
■ Use the date as the partition key for range partitioning.
■ When tables are partitioned, make their indexes local.
■ Consider putting each partition in its own tablespace and each tablespace on its
own mount point.
■ After updates on a partition cease, consider changing its tablespace to READ
ONLY to reduce backup requirements.
■ If partitioning compressed tables, be sure to address any special requirements for
seeding.



5 Performance

Retail Insights is a high performance data warehouse, capable of moving and storing
massive amounts of data, and providing efficient access to that data via the delivered
and custom built reports. For any BI solution, including Retail Insights, smart
decisions on how to implement and run your data warehouse will ensure that you are
getting the most out of it. This chapter contains information that will help you get the
best performance out of Retail Insights and identifies common contributors that can
weaken performance, as well as best practices that will ensure Retail Insights is
running in the most optimal manner.
All implementations are unique, and factors that are beneficial for one
implementation may not have the same effect in another. It is good practice to test
several settings and approaches for the factors and recommendations listed below and
use the ones that work best for your environment. The factors listed in this chapter are
the key factors that impact performance, but no absolute values or settings can be
provided because each environment is unique.
Oracle Retail Insights includes ODI for extract, transform and load and Oracle
Business Intelligence (BI EE) for analytic reporting purposes. The recommendations in
this chapter will focus on both back end (ETL) and front end (Oracle BI EE)
components of Retail Insights.

Key Factors in Performance


Based on the complexity of the report, Oracle BI EE sometimes generates complex
SQL, causing the Oracle Database to pick a less than optimal execution plan. To avoid
this scenario, it is recommended that the "SQL Plan Baseline" functionality of Oracle
Database 12c be enabled (it is disabled by default). For more details, refer to the Oracle
12c Performance Tuning Guide.

Purging and Archiving Strategy


With an increased use of the Retail Insights application, the data volumes will grow
and may result in slower performance. The performance impact can be on Retail
Insights batch that loads data to data warehouse tables, Retail Insights reports, and
storage.
Adopting a purging and archiving strategy helps reduce data volumes, resulting in
better performance. Consider the following recommendations while implementing
these strategies in a data warehouse:


■ Design your archiving and purging strategy as early as possible in the Retail
Insights implementation. This helps in designing the most optimal table
partitioning for large tables.
■ Ensure that the data is deleted in the most optimal manner. SQL delete statements
may not be the most efficient way of removing unnecessary data from Retail
Insights tables. Consult with your database administrator to discuss purging and
archiving techniques.
■ Purging and archiving of tables must be carefully designed, as it requires a good
understanding of the analytic reports required by business users and of any
regulatory requirements to retain certain data for a required duration. For example,
in certain cases aggregated data may be kept longer than base level fact data
because, for data older than two years, users are interested in summary level
reports rather than detailed (base level) reports.
■ Automation of the archiving and purging processes ensures that a consistent
approach is being followed in maintaining tables with large data volumes and
provides consistent report performance to the users.
■ While designing purging programs, make sure that dimensional data is not
deleted when fact data that references it is available or will become available.
■ An important consideration during purging is to make sure that Retail Insights
seed data (where applicable) is not deleted accidentally.

Flexible Aggregates
Retail Insights, by default, provides several aggregate tables. For the complete list, see
"Aggregates List" on page 5-9. These pre-built aggregate tables are selected based on
the following:
■ General usage patterns of the data
■ Reporting needs (As-Is or As-Was or both)
■ General aggregation ratio
The ratio between the data in the base fact table and the data in the potential aggregate
table should be considered when deciding whether the fact table should be aggregated.
A ratio of 1:5 through 1:10 is a good starting point, a ratio of 1:10 through 1:20 is a good
candidate for aggregation, and a ratio of more than 1:20 should always be aggregated.
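One way to estimate this ratio for a candidate aggregate, sketched here with placeholder key columns, is to compare the base fact row count with the number of distinct rows that the candidate aggregate grain would produce:

-- Rough sketch only: SUBCLASS_WID, LOC_WID, and DT_WID are placeholders for
-- the real surrogate keys of the candidate aggregate grain.
SELECT COUNT(*) AS base_rows,
       COUNT(DISTINCT SUBCLASS_WID || '~' || LOC_WID || '~' || DT_WID) AS aggregate_rows
  FROM W_RTL_SLS_IT_LC_DY_A;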
During implementation or before, it is expected for the retailer to identify these
scenarios and select the appropriate aggregate tables for best performance and
usability. All the aggregate tables which are pre-packaged will have the ODI and
Oracle BI EE mappings. It is highly recommended not to use Retail Insights with all
the available aggregates.
Using all these aggregations improves report performance but the improved report
performance should be weighed against reduced ETL batch performance and
increased storage requirement.
The reason for providing these aggregates is to give the customer the flexibility to pick
appropriate levels without having to invest in customizing the product.
Below are the different groupings of aggregations. See "Aggregates List" on page 5-9
for additional details.
■ As-Was aggregates
■ As-Is aggregates


■ As-Was Corporate aggregates


■ As-Is Corporate aggregates
■ Season aggregates
Even though there are different flavors of aggregation based on As-Is and As-Was, a
few aggregate tables are commonly used for both As-Is and As-Was. That is because
the As-Is and As-Was differentiation applies only to the Product and Organization
hierarchies. If the aggregation for a fact table is based on the Time dimension or any
dimension other than Product and Organization, then that aggregate table can be used
for both As-Is and As-Was. For example, since the W_RTL_SLS_IT_LC_WK_A table is at
Item and Location, it can be used for either As-Was, As-Is, or both.
When the aggregations happen on any level of the Product or Organization hierarchy,
there will be separate aggregates for As-Is and As-Was. For example, the W_RTL_SLS_
SC_LC_DY_A and W_RTL_SLS_SC_LC_DY_CUR_A aggregates are on the subclass
level of product dimension. The W_RTL_SLS_SC_LC_DY_A aggregate is for As-Was
and the W_RTL_SLS_SC_LC_DY_CUR_A aggregate is for As-Is. All the As-Is
aggregates are suffixed by 'CUR_A', which means 'Current.'

Note: With the exception of a Corporate aggregate in Sales, Retail


Insights out of the box does not have any aggregates across the
Organization Dimension.

When the aggregation is only on a level from the Product or Organization dimension,
those aggregates are referred to as Corporate Aggregates. These kinds of aggregates are very useful
when reporting is done on any level of Product and Calendar hierarchy or
Organization and Calendar hierarchy. For the list of this type of aggregates for every
fact area see "Aggregates List" on page 5-9. Corporate aggregates are also classified
into As-Is and As-Was because they need to be processed separately to capture the
current as opposed to historical parent information.
Season aggregates are useful to do reporting specific to Season dimension. All the
aggregates on Season can be used for both As-Is and As-Was.
For each group of aggregations, as mentioned above, there is a mandatory aggregate
table that needs to be used for other selections of the aggregates and that can be
identified in the FlexAggregates document with the highlighted text.
For example, if the business only needs As-Is, the following points need to be
considered:
1. Get the general usage patterns of the data and aggregation ratio. Based on that,
select the list of aggregate tables. This may not be accurate for the first time but
can always be changed over a period of time based on the usage and the changing
data.
2. Ensure that you disable/freeze, in ODI, all of the As-Was aggregate jobs and any
As-Is aggregates that were not selected, and disable the same in Oracle BI EE as
well. See the Oracle Retail Insights Operations Guide for more information.
This section only covers As-Is and As-Was aggregates, but Retail Insights also offers
PIT (Point in Time) reporting, which does not require any special processing of data.
There are no special tables or ODI jobs for PIT. In Oracle BI EE there is a separate
subject area for PIT reporting. For additional information on PIT see the Oracle Retail
Insights User Guide. PIT reporting is always done from the base fact tables or the
corporate aggregate tables. If PIT is required along with As-Is, As-Was, or both, then
choose the corporate aggregates so that all three reporting scenarios benefit.

ETL Programs Performance

Setting ETL Program Multi-threading


Retail Insights base fact load programs can be configured to run using multiple
threads. The default number of threads for these programs is set to one and can be
configured based on requirements. For additional information on how multi-threading
works, see the Program Overview chapter of the Oracle Retail Insights Operations Guide.
1. Finalize the multi-threading strategy for the base fact extract programs.
2. Number of threads for each program may vary based on the data volume that
program handles and resource availability. Different thread numbers should be
tested by clients during implementation to achieve optimal results from
multi-threading.
3. In the C_ODI_PARAM table, update the value of the PARAM_VALUE column to
the desired number of threads. This applies to all records with the value 'LOC_
NUM_OF_THREAD' in the PARAM_NAME column and the name of the program
that requires multi-threading set in the SCENARIO_NAME column. See an
example below for scenario named SIL_Test, where the desired number of threads
needs to be set to 2 from 1 (default).
UPDATE C_ODI_PARAM
SET PARAM_VALUE = 2
WHERE PARAM_NAME = 'LOC_NUM_OF_THREAD'
AND SCENARIO_NAME = 'SIL_Test'

4. If the number of thread required is more than 10, you need to modify the DDL for
intermediate temp tables used by the ODI scenario. DDL changes require adding
extra partitions to hold the data. The number of partitions on the intermediate
temp table must be the same or higher than the required number of threads (which
is the value for LOC_NUM_OF_THREADS set in the previous step).
5. The value set up in C_ODI_PARAM (in step 3) should be greater than or equal to
the maximum value of the ETL_THREAD_VAL column in the staging tables;
otherwise, some records could be missed. (A sanity-check sketch follows this list.)
6. If the RDE SDE programs are not used, it is the client's responsibility to distribute
the data evenly across partitions in the staging tables based on the partition key
column ETL_THREAD_VAL. Consider the following when the data is partitioned
manually:
■ To get the most benefit from multi-threading, the data in the staging tables
should be evenly partitioned by the ETL_THREAD_VAL column.
■ Records with the same location (store or warehouse) should have the same
ETL_THREAD_VAL; otherwise, a unique constraint could be violated.
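A quick sanity check for step 5 above, using a hypothetical staging table name, is to compare the configured thread count with the highest thread value actually present in the staging data:

-- Sanity-check sketch; W_RTL_INV_IT_LC_DY_FS is only an example table name.
-- The LOC_NUM_OF_THREAD value in C_ODI_PARAM should be greater than or
-- equal to this result.
SELECT MAX(ETL_THREAD_VAL) AS max_thread_val
  FROM W_RTL_INV_IT_LC_DY_FS;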

ODI Configuration
ODI must be configured prior to implementing Retail Insights. See the Oracle Retail
Insights Installation Guide for details on configuring ODI.


ETL Batch Scheduling


■ Set up the proper dependencies between the applications to ensure resources are
fully utilized, which helps the nightly batch finish earlier.
■ Retail Insights load programs (SIL programs) must not wait for all the extraction
programs (sde) to finish before starting. Some of them can start executing as soon
as the corresponding staging table is populated. For more information on setting
up dependencies, refer to the Oracle Retail Insights Operations Guide.
■ Allocate resources to the most important batch jobs (ones that populate the tables)
that support your critical reports (the reports you need first thing in the morning).
You can assign job priority in most batch scheduling tools.
■ Ensure that your source applications batch is optimized. Retail Insights runs
towards the end of the nightly batch. Retail Insights jobs are often the last jobs to
start due to the dependencies on the source system jobs, so Retail Insights is often
the last to finish. Optimizing the source applications batch helps Retail Insights
jobs start earlier.

Additional Considerations
■ Sort the W_RTL_INV_IT_LC_G table data after the data is seeded for the first time
to improve ETL performance.
■ In a production environment, fact tables with large data volume can be created
with the No Logging option. This improves the ETL performance and can be
implemented on a case by case basis.
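As an example of the No Logging option mentioned above (the table name is illustrative, and the trade-off against recoverability should be evaluated with your database administrator):

-- Example only: disable redo logging for a large fact table before a heavy
-- load, and restore it afterward.
ALTER TABLE W_RTL_SLS_TRX_IT_LC_DY_F NOLOGGING;
-- ... run the data load ...
ALTER TABLE W_RTL_SLS_TRX_IT_LC_DY_F LOGGING;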

Report Design
Report design can affect the performance of a report. While creating custom reports,
refer to the following guidelines:
■ Report developers should be trained in Oracle BI to learn how to design reports in
the most optimal manner.
■ Design reports at the highest level possible and allow drill down to more detailed
levels when required.
■ Design reports in a manner that multiple users can utilize a single report output
rather than multiple users running the same report. A best practice is to run one
report and distribute that report to multiple users. For more information on how
to distribute reports, refer to the Delivering Content chapter of the Oracle Business
Intelligence Enterprise Edition User Guide and the Configuring Oracle BI Scheduler
chapter of the Oracle Business Intelligence System Administrator's Guide.
■ Do not design reports to request data at a level lower than the minimum level that
a metric can be reported. In addition, drilling must not be performed at these
levels. This ensures that reports do not produce misleading or invalid results. For
example, reports must not be designed to request planning data at the item level
because planning data is only available at the subclass level and above.
■ As-Is reporting for all the positional facts such as inventory, cost and so on is only
possible at the corporate level aggregates.
■ Evaluate and purge reports periodically to eliminate any outdated or duplicate
reports.
■ Design reports to use the least amount of fact areas necessary. This reduces the
number of fact table joins and in turn reduces the risk of poor report performance.
For example, a best practice is not to design a single report with all sales,


inventory, pricing and cost metrics, as this report will perform poorly due to joins
on big fact tables. In this type of scenario, try creating separate reports with one or
two fact areas on the report at a time and combining the results after these reports
have run successfully.
■ Design reports with the least number of metrics necessary.
■ Schedule reports according to priority. This ensures that critical reports are
available when needed. For more information on how to schedule reports, refer to
the Configuring Oracle BI Scheduler chapter of the Oracle Business Intelligence
System Administrator's Guide.

Additional Factors
Decision support queries sometimes require retrieval of large amounts of data. The
Oracle BI server can save the results of a query in cache files and then reuse those
results later when a similar query is requested. Using the middle-tier cache permits a
query to be run once and its results to be reused for multiple subsequent requests,
rather than running the query every time. The query cache allows the Oracle BI Server to satisfy many
subsequent query requests without having to access back-end data sources (such as
Oracle database). This reduction in communication costs can dramatically decrease
query response time.
To summarize, query caching has the following advantages only when the same report
is run repeatedly:
■ Improvement of query performance
■ Less network traffic
■ Reduction in database processing and charge back
■ Reduction in Oracle BI server processing overhead
For more details on Caching refer to the Managing Performance Tuning and Query
Caching chapter in the Oracle BI EE System Administrator's Guide.

Partitioning Strategy
Database level table partitioning is very important for ETL batch and report
performance. For more information, see Chapter 4, "Compression and Partitioning".

Database Configuration


Retail Insights is built on Oracle Database 12c and must be optimized and configured
for a retailer's needs. Refer to the Setting Up Your Data Warehouse System chapter of
the Oracle 12c Data Warehouse Guide.

Adequate Hardware Resources


ETL program and report performance are highly dependent on the hardware
resources. For more information, see Chapter 2, "Setup and Configuration".

Leading Practices


Customizations
Changes and modifications to the Retail Insights delivered code or development of
new code is considered customization. Retail Insights does not support custom code
developed by clients unless the issue related to customization can be recreated using
Retail Insights delivered objects. Listed below are recommendations that will help you
in maintaining Retail Insights code:
■ Naming convention: it is recommended that you use a good and consistent
naming convention when customizing Retail Insights delivered code or building
new code in the Retail Insights environment.
This strategy is helpful in identifying custom code and also helps when merging a
retailer's Retail Insights repository with future releases of the Retail Insights
repository. There is a possibility of losing customizations to Retail Insights
provided ODI scripts or Oracle BI EE repository, if the customized code uses the
same object/script names that are used by Retail Insights.
■ As a best practice, keep all the documentation up-to-date for capturing any
changes or new code that has been developed at a site. For example, if table
structure has been customized, create or update the custom Data Model Guide
with these changes.
■ While customizing the rpd, do not make any changes directly on the main
shipped/original rpd. Make a copy of the original rpd and start the changes on the
copied rpd which will be the modified version. This is useful while applying any
patches in future releases of Retail Insights through Oracle BI EE's merge utility.
For more details refer to the Managing Oracle BI Repository Files chapter of the
Oracle BI EE Metadata Repository Builder's Guide.

ODI Best Practices


For customizations to existing ODI code or while creating new ODI code, refer to the
ODI Best Practices Guide included with your product code.

Oracle BI EE Best Practices


■ Create aliases for the objects created in the physical layer for usability purposes.
■ Do not design the business layer model as a snow-flake model.
■ Any level key on idents must be set to non-drillable.
■ In the presentation layer, fact folders (presentation tables) must contain only
metrics and dimension folders (presentation tables) must contain only attributes.
■ For a development environment, it is recommended to use a multi-user
environment. For more information on setting up a multi-user environment, refer
to the Completing Setup and Managing Oracle BI Repository Files chapter of the
Oracle Business Intelligence Server Administration Guide.

Batch Schedule Best Practices


The following best practices are recommended for Retail Insights:

Automation
The batch schedule should be automated as per the Oracle Retail Insights Operations
Guide. Any manual intervention should be avoided.


Recoverability
Set up the batch schedule in such a manner that the batch can resume from the point
where it failed.

Retail Insights Loading Batch Execution Catch-Up


Loading batch (SIL) execution catch-up can be achieved by backing up the staging table
data. The following scenarios explain when users can benefit from this. This approach
is considered a customization of Retail Insights programs and is not supported.
■ Catch-up: When Retail Insights is not ready for implementation, users can use
history data stored in the staging backup tables (explained later in this section) to
catch-up on the data loading once the system is implemented or becomes
available.
■ Retail Insights database systems are down: When Retail Insights database systems
are down, users can use history data stored in the staging backup tables (explained
later in this section) to load the data once the system becomes available.
The following steps show how the loading batch execution catch-up solution works:
1. Create Retail Insights staging tables for each corresponding staging table using the
DDL for the staging tables provided. For more information, see the Oracle Retail
Insights Administration Guide.
2. Set up the Retail Insights ODI Universal Adapter programs, so they are ready to be
executed against source files provided by the source system and load into the
staging tables created in previous setup.
3. Create a one-to-one backup table for each staging table. This backup table uses the
same DDL as the staging table along with an additional field (load_date) which
can be mapped to source system business date. This date is used as a filter when
the backup data is ready to be moved to the staging table.
4. Execute the Universal Adapter program to populate the staging tables created in
the first step.
5. Move staging table data into backup staging table with the correct business date.
By default, Retail Insights does not provide the backup table DDL or backup data
population scripts.
6. Repeat the process of executing the Universal Adapter program and taking the
backup to the staging backup table periodically (daily, once in two days, weekly,
and so on), until the Retail Insights systems are available. Note that the staging
table only contains the current business day's data, while the backup staging table
contains data for all the business dates when the program was executed.
7. Once the Retail Insights systems become available, move the backup staging table
data to the staging tables, one day at a time. This can be done by using 'load_date'
as a filter on the source backup staging table data (a sketch follows this list).
8. Once the data for one business date has been moved to the staging table, the SIL
programs and corresponding PLP programs need to be executed to load the data
into the final data warehouse tables.
9. Repeat the process of moving data from backup staging to staging and executing
the SIL and PLP programs until the data for all business dates in the backup
staging tables has been loaded into the fact and dimension tables.
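A minimal sketch of step 7, using placeholder table names and the LOAD_DATE column added to the backup table as described above:

-- Sketch only: replay one business date from the backup staging table into
-- the live staging table, then run the corresponding SIL and PLP programs.
-- STG_TABLE_EXAMPLE and its columns are placeholders.
DELETE FROM STG_TABLE_EXAMPLE;

INSERT INTO STG_TABLE_EXAMPLE (COL1, COL2, COL3)
SELECT COL1, COL2, COL3
  FROM STG_TABLE_EXAMPLE_BKP
 WHERE LOAD_DATE = TO_DATE('2016-12-01', 'YYYY-MM-DD');

COMMIT;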


High Availability
Depending on your specific requirements and for facilitating performance
improvement, a reporting mirror (exact copy of existing data warehouse) can be
created. With this approach, one database can be used for ETL processes and the
second database instance can be used by users for running their reports. There are
several ways (database level solutions, operating system level solutions and hardware
level solutions) of creating a database mirror. Consult with your IT resources or
database administrator for evaluating available options.
If this approach is adopted, you must run your queries from the reporting mirror area,
not from core data warehouse area. Take the following into consideration:
■ Consider this approach for large data warehouse implementations.
■ Creating data marts can be a good option when implementing mirroring.
■ Build a user notification mechanism to notify users after the data has been
refreshed on the mirror.

Advantages
■ High availability of data warehouse. When batch is running, the users access the
mirror and the only downtime is when data is copied over from the core data
warehouse to the mirror.
■ There are no conflicts between user queries and the ETL batch schedule.

Disadvantages
■ Storage requirements are increased.
■ Additional database maintenance is required.

Batch Efficiency
Revisit the batch timings periodically to identify candidates for performance
improvement.

Aggregates List
The table below lists Retail Insights aggregates grouped by subject area and
aggregation type.


Table 5–1 Retail Insights Aggregates

Inventory Position
Base Fact Table: W_RTL_INV_IT_LC_DY_F
Aggregate Tables As-Was: W_RTL_INV_IT_LC_WK_A, W_RTL_INV_SC_LC_DY_A, W_RTL_INV_SC_LC_WK_A, W_RTL_INV_CL_LC_DY_A, W_RTL_INV_CL_LC_WK_A, W_RTL_INV_DP_LC_DY_A, W_RTL_INV_DP_LC_WK_A
Corporate Aggregate Tables (As-Was): W_RTL_INV_IT_DY_A, W_RTL_INV_IT_WK_A, W_RTL_INV_SC_DY_A, W_RTL_INV_SC_WK_A, W_RTL_INV_SC_DY_CUR_A, W_RTL_INV_SC_WK_CUR_A
Corporate Aggregate Tables (As-Is): W_RTL_INV_IT_DY_A, W_RTL_INV_IT_WK_A, W_RTL_INV_SC_DY_CUR_A, W_RTL_INV_SC_WK_CUR_A

Inventory Receipts
Base Fact Table: W_RTL_INVRC_IT_LC_DY_F
Aggregate Tables As-Was: W_RTL_INVRC_IT_LC_WK_A, W_RTL_INVRC_SC_LC_DY_A, W_RTL_INVRC_SC_LC_WK_A, W_RTL_INVRC_CL_LC_WK_A, W_RTL_INVRC_CL_LC_DY_A, W_RTL_INVRC_DP_LC_DY_A, W_RTL_INVRC_DP_LC_WK_A
Aggregate Tables As-Is: W_RTL_INVRC_SC_LC_DY_CUR_A, W_RTL_INVRC_SC_LC_WK_CUR_A
Corporate Aggregate Tables (As-Was): W_RTL_INVRC_IT_DY_A, W_RTL_INVRC_IT_WK_A, W_RTL_INVRC_SC_DY_A, W_RTL_INVRC_SC_WK_A, W_RTL_INVRC_SC_DY_CUR_A, W_RTL_INVRC_SC_WK_CUR_A
Corporate Aggregate Tables (As-Is): W_RTL_INVRC_IT_DY_A, W_RTL_INVRC_IT_WK_A, W_RTL_INVRC_SC_DY_CUR_A, W_RTL_INVRC_SC_WK_CUR_A

Markdown
Base Fact Table: W_RTL_MKDN_IT_LC_DY_F
Aggregate Tables As-Was: W_RTL_MKDN_IT_LC_WK_A, W_RTL_MKDN_SC_LC_DY_A, W_RTL_MKDN_SC_LC_WK_A, W_RTL_MKDN_CL_LC_DY_A, W_RTL_MKDN_CL_LC_WK_A, W_RTL_MKDN_DP_LC_DY_A, W_RTL_MKDN_DP_LC_WK_A
Aggregate Tables As-Is: W_RTL_MKDN_SC_LC_DY_CUR_A, W_RTL_MKDN_SC_LC_WK_CUR_A, W_RTL_MKDN_CL_LC_DY_CURR_A, W_RTL_MKDN_CL_LC_WK_CURR_A, W_RTL_MKDN_DP_LC_DY_CURR_A, W_RTL_MKDN_DP_LC_WK_CURR_A
Corporate Aggregate Tables (As-Was): W_RTL_MKDN_IT_DY_A, W_RTL_MKDN_IT_WK_A, W_RTL_MKDN_SC_DY_A, W_RTL_MKDN_SC_WK_A, W_RTL_MKDN_SC_DY_CURR_A, W_RTL_MKDN_SC_WK_CURR_A
Corporate Aggregate Tables (As-Is): W_RTL_MKDN_IT_DY_A, W_RTL_MKDN_IT_WK_A, W_RTL_MKDN_SC_DY_CURR_A, W_RTL_MKDN_SC_WK_CURR_A
Season Aggregates (As-Is/As-Was): W_RTL_MKDN_IT_LC_DY_SN_A, W_RTL_MKDN_IT_LC_WK_SN_A, W_RTL_MKDN_IT_DY_SN_A, W_RTL_MKDN_IT_WK_SN_A

Net Cost
Base Fact Table: W_RTL_NCOST_IT_LC_DY_F
Corporate Aggregate Tables (As-Was): W_RTL_NCOST_IT_DY_A
Corporate Aggregate Tables (As-Is): W_RTL_NCOST_IT_DY_A

Net Profit
Base Fact Table: W_RTL_NPROF_IT_LC_DY_F
Aggregate Tables As-Was: W_RTL_NPROF_IT_LC_WK_A, W_RTL_NPROF_SC_LC_DY_A, W_RTL_NPROF_SC_LC_WK_A, W_RTL_NPROF_CL_LC_DY_A, W_RTL_NPROF_CL_LC_WK_A, W_RTL_NPROF_DP_LC_DY_A, W_RTL_NPROF_DP_LC_WK_A
Corporate Aggregate Tables (As-Was): W_RTL_NPROF_IT_DY_A, W_RTL_NPROF_IT_WK_A, W_RTL_NPROF_SC_DY_A, W_RTL_NPROF_SC_WK_A
Corporate Aggregate Tables (As-Is): W_RTL_NPROF_IT_DY_A, W_RTL_NPROF_IT_WK_A

Planning
Base Fact Table: W_RTL_MFPCPC_SC_CH_WK_F, W_RTL_MFPOPC_SC_CH_WK_F, W_RTL_MFPCPR_SC_CH_WK_F, W_RTL_MFPOPR_SC_CH_WK_F

Pricing
Base Fact Table: W_RTL_PRICE_IT_LC_DY_F
Corporate Aggregate Tables (As-Was): W_RTL_PRICE_IT_DY_A
Corporate Aggregate Tables (As-Is): W_RTL_PRICE_IT_DY_A

Sales Transaction
Base Fact Table: W_RTL_SLS_TRX_IT_LC_DY_F
Aggregate Tables As-Was: W_RTL_SLS_IT_LC_DY_A, W_RTL_SLS_IT_LC_WK_A, W_RTL_SLS_SC_LC_DY_A, W_RTL_SLS_SC_LC_WK_A, W_RTL_SLS_CL_LC_DY_A, W_RTL_SLS_CL_LC_WK_A, W_RTL_SLS_DP_LC_DY_A, W_RTL_SLS_DP_LC_WK_A
Aggregate Tables As-Is: W_RTL_SLS_IT_LC_DY_A, W_RTL_SLS_IT_LC_WK_A, W_RTL_SLS_SC_LC_DY_CUR_A, W_RTL_SLS_SC_LC_WK_CUR_A, W_RTL_SLS_CL_LC_DY_CUR_A, W_RTL_SLS_CL_LC_WK_CUR_A, W_RTL_SLS_DP_LC_DY_CUR_A, W_RTL_SLS_DP_LC_WK_CUR_A
Corporate Aggregate Tables (As-Was): W_RTL_SLS_IT_DY_A, W_RTL_SLS_IT_WK_A, W_RTL_SLS_SC_DY_A, W_RTL_SLS_SC_WK_A, W_RTL_SLS_SC_DY_CUR_A, W_RTL_SLS_SC_WK_CUR_A, W_RTL_SLS_LC_DY_A, W_RTL_SLS_LC_WK_A
Corporate Aggregate Tables (As-Is): W_RTL_SLS_IT_DY_A, W_RTL_SLS_IT_WK_A, W_RTL_SLS_SC_DY_CUR_A, W_RTL_SLS_SC_WK_CUR_A, W_RTL_SLS_LC_DY_A, W_RTL_SLS_LC_WK_A
Season Aggregates (As-Is/As-Was): W_RTL_SLS_IT_LC_DY_SN_A, W_RTL_SLS_IT_LC_WK_SN_A, W_RTL_SLS_IT_DY_SN_A, W_RTL_SLS_IT_WK_SN_A

Sales Forecast
Base Fact Table: W_RTL_SLSFC_IT_LC_DY_F, W_RTL_SLSFC_IT_LC_WK_F
Aggregate Tables As-Was: W_RTL_SLSFC_SC_LC_DY_A, W_RTL_SLSFC_SC_LC_WK_A
Aggregate Tables As-Is: W_RTL_SLSFC_SC_LC_DY_CUR_A, W_RTL_SLSFC_SC_LC_WK_CUR_A
Corporate Aggregate Tables (As-Was): W_RTL_SLSFC_IT_DY_A, W_RTL_SLSFC_IT_WK_A, W_RTL_SLSFC_SC_DY_A, W_RTL_SLSFC_SC_WK_A
Corporate Aggregate Tables (As-Is): W_RTL_SLSFC_IT_DY_A, W_RTL_SLSFC_IT_WK_A, W_RTL_SLSFC_SC_DY_CUR_A, W_RTL_SLSFC_SC_WK_CUR_A
Season Aggregates (As-Is/As-Was): W_RTL_SLSFC_IT_LC_DY_SN_A, W_RTL_SLSFC_IT_LC_WK_SN_A, W_RTL_SLSFC_IT_DY_SN_A, W_RTL_SLSFC_IT_WK_SN_A

Sales Pack
Base Fact Table: W_RTL_SLSPK_IT_LC_DY_F
Aggregate Tables As-Was: W_RTL_SLSPK_IT_LC_WK_A
Corporate Aggregate Tables (As-Was): W_RTL_SLSPK_IT_DY_A, W_RTL_SLSPK_IT_WK_A
Corporate Aggregate Tables (As-Is): W_RTL_SLSPK_IT_DY_A, W_RTL_SLSPK_IT_WK_A
Season Aggregates (As-Is/As-Was): W_RTL_SLSPK_IT_LC_DY_SN_A, W_RTL_SLSPK_IT_LC_WK_SN_A, W_RTL_SLSPK_IT_DY_SN_A, W_RTL_SLSPK_IT_WK_SN_A

Sales Promotion
Base Fact Table: W_RTL_SLSPR_TRX_IT_LC_DY_F
Aggregate Tables As-Was: W_RTL_SLSPR_PC_CS_IT_LC_DY_A, W_RTL_SLSPR_PC_CUST_LC_DY_A, W_RTL_SLSPR_PC_HH_WK_A, W_RTL_SLSPR_PC_IT_LC_DY_A, W_RTL_SLSPR_PE_CS_IT_LC_DY_A, W_RTL_SLSPR_PE_CUST_LC_DY_A, W_RTL_SLSPR_PE_IT_LC_DY_A, W_RTL_SLSPR_PP_CS_IT_LC_DY_A, W_RTL_SLSPR_PP_CUST_LC_DY_A, W_RTL_SLSPR_PP_IT_LC_DY_A

Stock Ledger
Base Fact Table: W_RTL_STCK_LDGR_SC_LC_WK_F, W_RTL_STCK_LDGR_SC_LC_MH_F

Supplier Compliance
Base Fact Table: W_RTL_SUPPCM_IT_LC_DY_F, W_RTL_SUPPCMUF_LC_DY_F
Aggregate Tables As-Was: W_RTL_SUPPCM_IT_LC_WK_A, W_RTL_SUPPCMUF_LC_WK_A
Aggregate Tables As-Is: W_RTL_SUPPCM_IT_LC_WK_A, W_RTL_SUPPCMUF_LC_WK_A
Corporate Aggregate Tables (As-Was): W_RTL_SUPPCM_LC_WK_A
Corporate Aggregate Tables (As-Is): W_RTL_SUPPCM_LC_WK_A

Supplier Invoice
Base Fact Table: W_RTL_SUPP_IVC_PO_IT_F

Unit Cost
Base Fact Table: W_RTL_BCOST_IT_LC_DY_F
Corporate Aggregate Tables (As-Was): W_RTL_BCOST_IT_DY_A
Corporate Aggregate Tables (As-Is): W_RTL_BCOST_IT_DY_A

Wholesale Franchise
Base Fact Table: W_RTL_SLSWF_IT_LC_DY_F
Aggregate Tables As-Was: W_RTL_SLSWF_IT_LC_WK_A, W_RTL_SLSWF_SC_LC_DY_A, W_RTL_SLSWF_SC_LC_WK_A
Aggregate Tables As-Is: W_RTL_SLSWF_IT_LC_WK_A, W_RTL_SLSWF_SC_LC_DY_CUR_A, W_RTL_SLSWF_SC_LC_WK_CUR_A
Corporate Aggregate Tables (As-Was): W_RTL_SLSWF_IT_DY_A, W_RTL_SLSWF_IT_WK_A
Corporate Aggregate Tables (As-Is): W_RTL_SLSWF_IT_DY_A, W_RTL_SLSWF_IT_WK_A

6 Retail Insights Aggregation Framework

This chapter describes how Retail Insights implements customized aggregation by


using the Retail Insights Aggregation Framework.

Overview of Retail Insights Aggregation Framework


The Retail Insights Aggregation Framework is a PL/SQL-based tool designed to
simplify the Retail Insights aggregation process by leveraging the existing Retail
Insights aggregation programs that are mandatory to the Retail Insights application. It
provides a framework for end users to populate customized aggregation tables in
order to gain better performance on front-end reporting.
The framework can either generate a SQL DML file, execute the SQL DML statement,
or do both, based on the setting in the ra.env file, to populate customized aggregation
tables. The client needs to populate the configuration table to provide enough
mapping information for the framework to generate the DML statement. This
customized process can be included in the client's batch scheduler by calling the
wrapper script aggplp.ksh. Besides providing the regular Retail Insights ETL logging
and program status control capability, the framework also generates a SQL DML file,
a message file, and an error file under the Retail Insights database UTLFILE folder to
help the end user with verification.
The framework also has the capability to aggregate data across attributes along with
the product hierarchy, and can likewise either generate the SQL DML file, execute the
SQL DML statement, or do both, based on the setting in the ra.env file. This attribute
aggregation process can be included in the client's batch scheduler by calling the
wrapper script attraggplp.ksh.
For security reasons, the database connection for the framework is managed by Retail
Insights ODI using the same approach that is used by the regular Retail Insights batch
programs.

Aggregation Framework Initial Setup and Daily Execution


Perform the following steps for the Aggregation Framework installation and initial
setup:

Note: The <STAGING_DIR> mentioned below is the Retail Insights


installer staging directory. Please refer to the Oracle Retail Insights
Administration Guide for additional information.


Importing ODI Components for Aggregation Framework


Perform the following procedure to import Aggregation Framework ODI components:
1. Make sure $ODI_HOME/bin/odiparams.sh is configured correctly.
2. Copy odi_import.ksh to different folders:
a. Copy <STAGING_DIR>/ora/installer/ora142/mmhome/full/src/odi_
import.ksh to <STAGING_DIR>/ora/installer/Aggregation_Framework/odi
folder.
b. Copy <STAGING_DIR>/ora/installer/ora142/mmhome/full/src/odi_
import.ksh to <STAGING_DIR>/ora/installer/Aggregation_
Framework/odi/odi_parent folder.
3. Execute odi_import.ksh from <STAGING_DIR>/ora/installer/Aggregation_
Framework/odi/odi_parent/odi_import.ksh, which imports the following
Aggregation Framework ODI component:
■ FOLD_Aggregation_Framework.xml

Note: Before executing odi_import.ksh, please read the comments


inside of the script on how to use the script and set up the ODI_
HOME and LOGDIR environment variables correctly.

4. Execute odi_import.ksh from <STAGING_DIR>/ora/installer/Aggregation_
Framework/odi/odi_import.ksh, which imports the following Aggregation
Framework ODI components:
■ VAR_RA_UTLPATH.xml
■ VAR_RA_AGG_EXEC_MODE.xml
■ FOLD_PLP_RetailAggregationDaily.xml
■ PACK_PLP_RetailAggregationDaily.xml
■ PACK_PLP_RetailAggregationReclass.xml
■ PACK_PLP_RetailAttrAggregationDaily.xml
■ TRT_RetailAggregationDaily.xml
■ TRT_RetailAggregationReclass.xml
■ TRT_RetailAttrAggregationDaily.xml
■ TRT_RetailAggregationDaily_Debug.xml
■ TRT_RetailAggregationReclass_Debug.xml
■ TRT_RetailAttrAggregationDaily_Debug.xml

Importing Aggregation Framework Shell script


Perform the following step to import the Aggregation Framework shell script:
1. Copy aggplp.ksh, aggrcplp.ksh, and attraggplp.ksh from <STAGING_
DIR>/ora/installer/Aggregation_Framework/ to $MMHOME/src directory.

Initial Aggregation Framework Setup


Perform the following procedure for initial set up of the Aggregation Framework:


Note: The SQL files mentioned below can be found under the
<STAGING_DIR>/ora/installer/Aggregation_Framework folder.

1. Under the Retail Insights batch user schema, execute the provided script W_RTL_
AGGREGATION_DAILY_TMP.sql and Alter_W_RTL_AGGREGATION_DAILY_
TMP.sql in the same sequence as mentioned to create the configuration table W_
RTL_AGGREGATION_DAILY_TMP.
2. Under the Retail Insights batch user schema, execute the provided script W_RTL_
AGGREGATION_MSG_TMP.sql to create the Staging Log table W_RTL_
AGGREGATION_MSG_TMP.
3. Under the Retail Insights batch user schema, execute the ra_aggregation_daily.sql,
ra_aggregation_rec.sql, and RA_ATTR_AGGREGATION_DAILY_PROC.sql scripts
to create the PL/SQL stored procedures RA_AGGREGATION_DAILY, RA_
AGGREGATION_REC, and RA_ATTR_AGGREGATION_DAILY_PROC.
4. Create customized aggregation tables under the Retail Insights data mart schema.
5. Configure database UTLFILE folder and execution mode in the ra.env file. The
UTLFILE folder location will be setup on the application server.

Figure 6–1 Configuring the UTLFILE Folder

6. Populate the W_RTL_AGGREGATION_DAILY_TMP configuration table. There is


one row for each customized aggregation table.
7. Populate the Retail Insights program control table C_ODI_PARAM to include the
program/customized target tables. The SCENARIO_NAME column should be
populated with either PLP_RETAILAGGREGATIONDAILY if it is regular
aggregation process or PLP_RETAILAGGREGATIONRECLASS if it is
reclassification related process or PLP_RETAILATTRAGGREGATIONDAILY if it
is the regular attribute aggregation process. For each program/target table,
PARAM_NAME must be populated with the value 'TARGET_TABLE_NAME' and
PARAM_VALUE with the actual customized table name. The current version does
not get ETL_PROC_WID and EXECUTION_ID from this table; if these columns are
required, the end user can populate them through the SP_MAPPING column. (A
sketch of this step follows.)
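A minimal sketch of this configuration step, using a hypothetical customized aggregate table name (W_RTL_SLS_CUSTOM_LC_DY_A); additional C_ODI_PARAM columns may need to be populated in your environment, and the statement is an illustration only.

-- Sketch only: register a hypothetical customized aggregate table for the
-- daily aggregation process; other required columns depend on your setup.
INSERT INTO C_ODI_PARAM (SCENARIO_NAME, PARAM_NAME, PARAM_VALUE)
VALUES ('PLP_RETAILAGGREGATIONDAILY', 'TARGET_TABLE_NAME', 'W_RTL_SLS_CUSTOM_LC_DY_A');
COMMIT;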


Aggregation Framework Verification


Perform the following steps for the Aggregation Framework verification:
1. For the daily batch process, execute the script aggplp.ksh with the customized
aggregation table name and the execution mode as parameters. For the
reclassification-only batch process, execute the script aggrcplp.ksh with the
customized aggregation table name and the execution mode as parameters.
2. For the daily batch process, execute the script attraggplp.ksh with the customized
attribute aggregation table name and the execution mode as parameters.
3. If the execution mode is not specified in the command line, or the value is not in
("F", "B", "E"), then the value defined in the ra.env file will be used.
4. Verify result by using the generated sql file with the DML statement. If it is not
correct, then go to step 6 in the initial setup to reconfigure the configuration table.
5. Add step 7 to the batch scheduler, so that populating the aggregation table
becomes part of the daily ETL job.

Aggregation Framework Configuration


The Aggregation Framework must be properly set up and configured before the
process can be included in the daily ETL process.

Creating Customized Aggregation Table


In order to use the framework, the end users must create the aggregation table
according to the following rules (a sketch of such a table follows the list):
■ The customized aggregation table has to be created under the Retail Insights data
mart schema.
■ The user should use the following table naming standard:
■ For a day-level aggregation table, the table name must contain _DY_.
■ For a day- and location-level aggregation table, the table name must contain
_LC_DY_.

Note: It is good practice to keep the column names the same as the
column names in the source table. This reduces the column-mapping
work needed under the SP_MAPPING column.

■ For attribute aggregation, the attribute column names in the aggregate table should
be the same as in the attribute tables. SP_MAPPING on attribute columns is not
supported. 'ITEMDIFF' is one of the exceptions: for the 'ITEMDIFF' attribute, the
attribute column name in the aggregate table should be the differentiator name,
that is, the attribute value of FLEX_ATTRIB_10_CHAR from the
W_RTL_ITEM_GRP1_D attribute table.
■ The attribute aggregation framework only supports attributes from
W_RTL_ITEM_GRP1_D, W_PRODUCT_ATTR_D, and W_PRODUCT_D; if the
aggregation is at the attribute level, the attribute columns from one of these tables
should be used when creating the aggregate tables.
■ The names of aggregatable columns (using sum or average), and only the names of
aggregatable columns, should end with _AMT, _LCL, _GLOBAL1, _GLOBAL2,
_GLOBAL3, _QTY, or _COUNT. Otherwise, a column mapping should be provided
under the SP_MAPPING column in the configuration table.
■ As a rule of the framework, the transaction date in the aggregation source table
has to be named DT_WID (for a source table at the daily level), WK_WID (for a
source table at the week level), or DAY_DT (for a source TMP table).
■ For all other Retail Insights standard columns, please refer to the Oracle Retail
Insights Data Model Guide.
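
The following is a minimal sketch of a customized subclass/location/day aggregation
table that follows these rules. The table name, column list, and data types are
illustrative assumptions; only the conventions themselves (_LC_DY_ in the table name,
_AMT/_LCL/_QTY suffixes for aggregatable measures, DT_WID for the transaction date,
and the standard ROW_WID, INTEGRATION_ID, W_INSERT_DT, and W_UPDATE_DT columns)
come from this chapter.

-- Hypothetical customized aggregation table, created in the RI data mart schema.
CREATE TABLE W_RTL_SLS_SC_LC_DY_CUST_A (
  ROW_WID           NUMBER(10)     NOT NULL,  -- populated from the sequence in SEQ_NAME
  PROD_WID          NUMBER(10)     NOT NULL,  -- subclass-level product key (assumption)
  LOC_WID           NUMBER(10)     NOT NULL,  -- location key (assumption)
  DT_WID            NUMBER(15)     NOT NULL,  -- transaction date, named per the rule above
  SLS_AMT_LCL       NUMBER(20,4),             -- aggregatable: name ends with _LCL
  SLS_QTY           NUMBER(12,4),             -- aggregatable: name ends with _QTY
  LOC_EXCHANGE_RATE NUMBER(20,10),            -- used when CURRENCY_EXPAND_IND = 'Y' (assumption)
  INTEGRATION_ID    VARCHAR2(80),
  W_INSERT_DT       DATE,
  W_UPDATE_DT       DATE
);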

Configuring Framework in W_RTL_AGGREGATION_DAILY_TMP


W_RTL_AGGREGATION_DAILY_TMP is a configuration table under the Retail
Insights batch user schema. It holds the aggregation information used by the framework
to generate the DML statement. In order to use the Retail Insights Aggregation
Framework correctly, the client has to provide the following information (an example
configuration row is sketched after the column descriptions):
■ SRC_TABLE
This is the table used as the source of the aggregation process.
For a regular daily batch, the source table has to be a temp table owned by the
Retail Insights batch user schema. In most cases, it is generated by a Retail Insights
ETL batch program whose execution is mandatory. For the attribute aggregation
process, the source temp table should be at the _IT_LC_DY_TMP level.
For a reclassification batch, which is only executed when there is a reclassification,
the source table is a fact table at the item/location/day level for transactional facts
or at the item/day or item/week level for positional facts. These source fact tables
should be owned by the Retail Insights data mart schema and populated by Retail
Insights mandatory batch programs.
■ TGT_TABLE
This is the customized aggregation table. It is under the Retail Insights data mart
schema.
■ AGGREGATION_TYPE
This column specifies the aggregation type that the framework supports.
Currently the framework supports aggregation on the product hierarchy, the time
hierarchy (to week), and product season. The valid values are:
– SC: from item to subclass based on as-was
– CL: from item to class based on as-was
– DP: from item to department based on as-was
– SC_ASIS: from item to subclass based on as-is
– CL_ASIS: from item to class based on as-is
– DP_ASIS: from item to department based on as-is
– SN: from item to item season based on the transaction date
– IT: to aggregate items at item/attribute levels, only for attribute aggregation
process.
– SC_REC_ASIS: fact re-calculation on the reclassification day from item to
subclass based on as-is. When this type is selected, the program re-aggregates
fact data from the item level to the subclass level based on as-is. A temporary
table is generated under the Retail Insights batch user schema to store the
re-calculation result, which can be used for further aggregation to the class or
department level. The temporary table name can be found in the DML
statement file generated by the program under the UTLFILE folder.
– CL_REC_ASIS: fact re-calculation on the reclassification day from item to class
based on as-is. The batch program that uses the SC_REC_ASIS aggregation type
is a prerequisite for the batch program that uses CL_REC_ASIS, and both
should have the same source table name under the SRC_TABLE column.
– DP_REC_ASIS: fact re-calculation on the reclassification day from item to
department based on as-is. The batch program that uses the SC_REC_ASIS
aggregation type is a prerequisite for the batch program that uses
DP_REC_ASIS, and both should have the same source table name under the
SRC_TABLE column.

Note: The _ASIS and _REC_ASIS aggregation types are not supported
by the attribute aggregation process.
For the reclassification type of aggregation, due to performance
concerns, the subclass/location/day level is mandatory before clients
can aggregate to other levels. All other levels use the result from that
process to continue higher-level aggregation.

■ ATTRIBUTE_KEY
This is the attribute key from the attribute table on which the attribute
aggregation works. These keys are used to aggregate data at the attribute level. If
there are multiple attributes to be aggregated in a single table, all attributes should
be declared in this single column, separated by commas. The attribute key
columns should be prefixed with their respective table names.
Example: W_RTL_ITEM_GRP1_D.BRAND_WID, W_RTL_ITEM_GRP1_D.COLOR_WID
For regular aggregation or reclass aggregation, the attribute key is left blank.
■ SEQ_NAME
This is the name of the sequence that will be used as ROW_WID on the target
table.
■ AVG_COLUMNS
This is a list of columns that use average logic in the aggregation. The column
names should be separated by commas.
■ PK_COLUMNS
This lists the primary key columns for the customized aggregation table. The
column names should be separated by commas.
■ SP_MAPPING
The framework provides auto column mapping for the following cases:
– The column name in the target table is the same as a source column in the
source table.
– The amount columns _AMT, _AMT_GLOBAL1, _AMT_GLOBAL2, _AMT_
GLOBAL3 are mapped to the source columns as _AMT_LCL/LOC_
EXCHANGE_RATE if the configuration CURRENCY_EXPAND_IND is set to
'Y'.


– The columns W_INSERT_DT and W_UPDATE_DT on the target table are
mapped to the system time from the database.
– The sequence name defined in the configuration table is used as ROW_WID
when ROW_WID column exists in the target table.
– If the target column cannot be found in the source table by matching column
names, and the target column name also does not exist in the customized
column mapping under SP_MAPPING, then the value 0 is used for the
mapping and a warning message is written to the message file.
– The INTEGRATION_ID column is mapped with the concatenation of primary
key provided in the configuration table. The order of the concatenation is the
same as the order of primary key provided in the PK_COLUMNS column in
the configuration table. This auto mapping may use surrogate key instead of
the ID from source system if the surrogate key is used as part of primary key.
Besides the capability of auto mapping, this framework also provides
customized column mapping by using the column SP_MAPPING in the
configuration table.
– The syntax for the customized mapping is column1=value1. The column1 is a
column name on the target table. The value1 can be either a constant value or
a column name on the source table.
– If there are multiple customized mappings, '&' should be used between each
mapping. For example, column1=value1 & column2=value2.
– The SQL aggregation function (sum, average, min) should be considered if the
target column in the customized mapping is not part of the primary key
specified in the column PK_COLUMNS.
– The customized mapping using the SP_MAPPING column only supports
regular updates. Once a column mapping is specified, the update on this
column always uses TARGET.COLUMN1=SOURCE.COLUMN1 regardless of
the configuration value specified in the POSITIONAL_IND column.
– SP_MAPPING cannot be applied to the attribute key columns; the attribute
key uses the same column name as in the attribute table.
■ CURRENCY_EXPAND_IND
This column indicates whether the target table has an amount column in the
primary currency or a global currency that is derived from the source table by a
calculation. The valid values are 'Y' or 'N'.
■ PARA_DEGREE
This column has the parallel degree for the DML process. The default value is 0.
■ POSITIONAL_IND
This column indicates if the amount columns, quantity columns, or count columns
on this table are stored in positional format or in transactional format. The valid
values are 'Y' or 'N'. If the value is 'N', the target column will be updated by
TARGET.COLUMN1=NVL(SOURCE.COLUMN1, 0)
+NVL(TARGET.COLUMN1,0). If the value is 'Y', the target column will be
updated by TARGET.COLUMN1=SOURCE.COLUMN1. If there is any exception,
the end user can use customized mapping on column SP_MAPPING for those
exceptional columns.
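
Putting the columns above together, a single configuration row might look like the
following sketch. All table, sequence, and column values are hypothetical placeholders;
only the configuration column names and the value conventions described above come
from this chapter.

-- Hypothetical configuration row for an item-to-subclass (as-was) daily aggregation.
INSERT INTO W_RTL_AGGREGATION_DAILY_TMP
  (SRC_TABLE, TGT_TABLE, AGGREGATION_TYPE, ATTRIBUTE_KEY, SEQ_NAME,
   AVG_COLUMNS, PK_COLUMNS, SP_MAPPING, CURRENCY_EXPAND_IND,
   PARA_DEGREE, POSITIONAL_IND)
VALUES
  ('W_RTL_SLS_IT_LC_DY_TMP',          -- hypothetical source temp table
   'W_RTL_SLS_SC_LC_DY_CUST_A',       -- hypothetical customized target table
   'SC',                              -- item to subclass, as-was
   NULL,                              -- attribute key: left blank for regular aggregation
   'W_RTL_SLS_SC_LC_DY_CUST_A_SEQ',   -- hypothetical sequence used for ROW_WID
   'UNIT_RTL_AMT_LCL',                -- hypothetical column averaged rather than summed
   'PROD_WID,LOC_WID,DT_WID',         -- hypothetical primary key columns
   'ETL_PROC_WID=0 & EXECUTION_ID=0', -- SP_MAPPING syntax for constant-value mappings
   'Y', 4, 'N');
COMMIT;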


Populating the Customized Aggregation Table

Batch Process
Once the configuration is completed and tested, the customized aggregation table can
be populated as a daily ETL batch process. The syntax to kick off the process is:
For the daily batch process of the regular aggregation process, call the Unix script
aggplp.ksh TARGET_TABLE_NAME, in which TARGET_TABLE_NAME is the name of the
customized aggregation table; it should already be configured in the
W_RTL_AGGREGATION_DAILY_TMP table.
For the reclassification-only process, call the Unix script aggrcplp.ksh
TARGET_TABLE_NAME, in which TARGET_TABLE_NAME is the name of the customized
aggregation table; it should already be configured in the
W_RTL_AGGREGATION_DAILY_TMP table. For transactional facts, executing the
subclass/location/day level is the prerequisite for all other levels. For positional
facts, executing the subclass/day and subclass/week levels is the prerequisite for the
corporate/day and corporate/week levels.
For the daily batch process of the attribute aggregation process, call the Unix script
attraggplp.ksh TARGET_TABLE_NAME, in which TARGET_TABLE_NAME is the name of the
customized aggregation table; it should already be configured in the
W_RTL_AGGREGATION_DAILY_TMP table.
Please refer to the Oracle Retail Insights Data Model Guide for Retail Insights table
naming standards.

Batch Status Control


Calling aggplp.ksh, aggrcplp.ksh, or attraggplp.ksh also causes the framework to
insert a record into the Retail Insights batch status control table C_LOAD_DATES,
with PLP_RETAILAGGREGATIONDAILY, PLP_RETAILAGGREGATIONRECLASS, or
PLP_RETAILATTRAGGREGATIONDAILY as the PACKAGE_NAME and the name of the
customized aggregation table as the TARGET_TABLE_NAME. The client has to either
execute etlrefreshgenplp.ksh to remove this record from C_LOAD_DATES or manually
delete this status record from C_LOAD_DATES before the same ETL batch process can
be executed again against the same aggregation table. This batch control process is
consistent with the process used by Retail Insights mandatory batch programs.
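
If the record is removed manually rather than through etlrefreshgenplp.ksh, a statement
along the following lines can be used. The target table name is a hypothetical
placeholder; the PACKAGE_NAME and TARGET_TABLE_NAME columns are as described above,
but the full column list of C_LOAD_DATES may vary by release.

-- Sketch: clear the status record so the same aggregation batch can run again.
DELETE FROM C_LOAD_DATES
 WHERE PACKAGE_NAME      = 'PLP_RETAILAGGREGATIONDAILY'
   AND TARGET_TABLE_NAME = 'W_RTL_SLS_SC_LC_DY_CUST_A';  -- hypothetical target table
COMMIT;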

Batch Logging
The Retail Insights Aggregation Framework writes batch logging information into a
Retail Insights log file that is used by Retail Insights regular batch programs. The end
user can also view the detailed logging information through the ODI Operator. This is
consistent with the logging from Retail Insights regular batch programs.
Besides the standard Retail Insights logging, the framework also provides a message
file and a SQL file under the Oracle utlfile folder. The message file uses [TABLE_NAME]
or [TABLE_NAME]_rc as the file name and "msg" as the file name extension. It provides
information when the target column cannot be found in the source table and when the
customized column mapping cannot be found in the configuration table. The SQL file
is available when the execution mode is set to 'B' or 'F' in the ra.env file. It uses
[TABLE_NAME] or [TABLE_NAME]_rc as the file name and contains the DML statement
that will be used to populate the customized aggregation table. The DML statement
file has "sql" as the file name extension. All these files can be used to help the end
user verify the ETL process result during framework setup or during the regular batch
process. In case of any failure, the error information is passed to the ODI Operator
and the regular batch logging.

Aggregation Framework Data Flow

Figure 6–2 SN, DP, CL, SC, DP_ASIS, CL_ASIS, SC_ASIS Framework Data Flow Diagram


Figure 6–3 DP_REC_ASIS, CL_REC_ASIS, SC_REC_ASIS Framework Data Flow Diagram


Figure 6–4 IT, SC, CL, DP, SN Attribute Aggregation Framework Data Flow Diagram



7
Retail Insights Universal Adapter

This chapter describes the process of implementing the Retail Insights Universal
Adapter Framework.

Overview of Retail Insights Universal Adapter Framework


The Retail Insights BI product offering was intended to work closely with Oracle
Retail's transactional schema, RMS. As such, Retail Analytics (the earlier version of
Retail Insights) shipped with source dependent extraction (SDE) routines designed to
move data from RMS tables into Retail Analytics staging tables; that functionality has
now moved to Retail Data Extractor. The new version of Retail Insights works closely
with Retail Data Extractor: it processes the staging data sent in the form of flat files
and loads it into the staging tables in the RA Data Mart schema. The source
independent load (SIL) moves data from the staging tables into the warehouse tables
(see Figure 7–1, "RDE to Retail Insights Staging Data Flow").

Figure 7–1 RDE to Retail Insights Staging Data Flow

Customers who are working with third party (non-RMS) systems and wish to use
Retail Insights would need to write their own ETL solution to move their data into the
Retail Insights staging tables. The design of a custom ETL solution would be driven by
such factors as:
■ The number and nature of data sources (relational, mainframe, file-based, and so
on) containing the necessary transaction data.
■ The topology of the data sources.
Such customers would either need to write custom SDE ETL interfaces for use with
Retail Insights' ODI-based ETL system or create their own ETL logic from scratch.
The goal of the Universal Adapter Framework (UAF) is to simplify the process of
moving source dependent extracts into Retail Insights staging tables for customers in a
cloud or on-premise environment. The files arriving from RDE or non-RMS systems
should be pipe ('|') separated value (DAT) text file extracts to be loaded into Retail
Insights. All date columns should use the format "YYYY-MM-DD;HH24:MI:SS". Once the
DAT files are in place, the UAF can be used to move that data into Retail Insights
staging tables through the use of Oracle sqlldr (see Figure 7–2, "Moving Third Party
Extracts into Retail Insights Staging Tables"). The control files required for sqlldr are
created automatically during the processing that is controlled in ODI.

Figure 7–2 Moving Third Party Extracts into Retail Insights Staging Tables

Benefits
Customers who elect to leverage the UAF will enjoy the following benefits:
■ For customers whose third party data sources are non-relational in nature (for
example, mainframe data), their development efforts only need to be focused on
delivering DAT text file extracts in a pre-defined format as inputs to the UAF.
■ The DB link that was used in Retail Analytics SDE programs is no longer required,
which improves security compliance.

Universal Adapter Installation and Configuration


The installation of UAF is included in the RI standard installation. Please refer to the
Oracle Retail Insights Administration Guide for UAF installation and configuration
information.
Please ensure that the following ODI installation files have the permissions required
to execute the Universal Adapter:
1. From the ODI installation directory, navigate to the ODI SDK library files and
assign 775 permissions to the files mentioned below.
Path: cd <$ODIHOME>/../../oracledi.sdk/lib/Lib/

Command to Execute: chmod 775 <filename>

2. Substitute each of the following file names for <filename> so that each file is
assigned 775 permissions.


■ os$py.class
■ stat$py.class
■ posixpath$py.class
■ warnings$py.class
■ types$py.class
■ linecache$py.class


Universal Adapter Execution


Execute the Retail Insights scripts rtluasil.ksh and rabeuasil.ksh to run the Universal
Adapter for loading. Script rabeuasil.ksh is for the target tables owned by the RI batch
user, and script rtluasil.ksh is for the target tables owned by the RI data owner.
Syntax:
rtluasil.ksh <Target table>
rabeuasil.ksh <Target table>

The Universal Adapter programs accept two types of file inputs.


■ Dat file – This file contains the staging data that will be loaded to the fact and
dimension tables. Data files are mandatory for all the staging tables to be loaded.
In order to load these staging tables, it is the customer's responsibility to generate
dat files with data and place them in the "$MMHOME/data/staging" directory.
■ Ctx file – This is an optional file that contains the metadata information used to
adjust the ctl file generated for sqlldr by the Universal Adapter. The ctx file is only
required if there is a mismatch between the data types of the source data in the
text files and the target loading database.
Before starting the execution, download the exported zip file and extract the staging
data files into the $MMHOME/data/staging directory.
The following is the file download process:
1. Connect to <server> port 22.
2. Log in with the SFTP User credentials.
3. Change directory to /<SFTP User>/EXPORT.
4. Extract the tar file <Merch_Extract_date>.tar into $MMHOME/data/staging
directory.
5. The tar file <Merch_Extract_date>.tar can be deleted from the /<SFTP User>
directory after the data files / ctx files are extracted, but Oracle recommends you
archive these files for future reference and logging purposes.
Batch Logging:
The batch for the Universal Adapter has the same logging logic as other RI batch
programs. The execution status can be found in the RI batch maintenance table
C_LOAD_DATES and in the ODI Operator. Besides these, the Universal Adapter also
provides sqlldr log files with more detailed information about the load. The sqlldr
log files can be found under $MMHOME/data/staging/log.
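
As a quick check, the C_LOAD_DATES record for a given target table can also be queried
directly. This is only a sketch: the staging table name is a hypothetical placeholder,
and it assumes the TARGET_TABLE_NAME column described in the Aggregation Framework
chapter is used for Universal Adapter loads as well.

-- Sketch: review the batch status record for one Universal Adapter target table.
SELECT *
  FROM C_LOAD_DATES
 WHERE TARGET_TABLE_NAME = 'W_RTL_SLS_TRX_IT_LC_DY_FS';  -- hypothetical staging table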



8
Chief Marketing Officer Alerts Configuration

This chapter provides the steps to configure Schedule and Recipient details of the
following Chief Marketing Officer (CMO) Alerts:
■ Inventory to Plan
■ Sales to Plan
■ Top Selling Attribute
■ Liability > 50% of Demand

Configuration
After successful deployment of the Retail Insights catalog, the following steps must be
completed to configure the schedule and recipient details for the CMO alerts.
1. Navigate to Catalog > Shared Folders > CMO > Alerts.

Figure 8–1 CMO Alerts

Perform the following steps for each agent (object names ending with agent)
present in this folder.
2. Select the agent object and click Edit.


Figure 8–2 Editing CMO Alerts

3. Navigate to the Schedule tab.


a. On the Schedule tab, check the Enabled checkbox.
b. Set the Frequency and Start time as shown below to run the alerts once daily.

Note: Frequency and Start Time should be set according to the
implementation preference.

Figure 8–3 Alerts Schedule Configuration

4. Navigate to the Recipients tab.


a. From the Recipients tab, click Add under the Select Recipients header and
select the users who need to be subscribed for the alert notification.

Figure 8–4 Alerts Recipients Configuration

5. Save the changes.


6. Repeat steps 2 through 5 for the remaining three alert agents.



9
Merchandise Financial Planning Configuration

Retail Insights supports Merchandise Financial Planning (MFP) at three possible
combinations of three different hierarchies (Merchandise Hierarchy, Organization
Hierarchy, and Calendar Hierarchy). These possible combinations can be configured
during the installation. However, the front-end rpd file has to be modified to match
the configuration.
Before making any changes to the rpd file, copy the original files to a different
directory in case you need to refer to them at a later date.

Modify the .rpd File


Perform the following procedure to modify the rpd file:
1. Before modifying, rename the existing rpd file on the server. Do not overwrite.
2. Get the merchandise hierarchy level, organization hierarchy level, and calendar
level that are set up for option1, option2, and option3. This information can be
found in the database table C_ODI_PARAM.
3. Open the rpd file in offline mode using the administration tool.
4. Double-click the physical table that you want to modify. The Physical Table dialog
appears.

Figure 9–1 Physical Table Dialog


5. Click the Foreign key tab. You will see all the foreign keys that are associated with
this physical table. These foreign keys are named by the dimension level name,
such as SBC for merchandise hierarchy at subclass level. The following screen is an
example for W_RTL_MFPOP_PRODUCT1_LC1_T1_F (before the configuration is
done).

Figure 9–2 Foreign Key List

6. Remove all unrelated foreign keys, keep the foreign keys whose levels are
defined for that specific option (from step 2), and click OK. The following screen
shows the unused foreign keys removed. This is for the case when option 1 is
defined at the item/location/day level. The foreign key Fact_W_RTL_MFPOP_
PROD1_LC1_T1_F_COMPANY is reserved for the case when the company level is
used for the organization hierarchy. The physical tables that need to be modified
include:
■ Fact_W_RTL_MFPCP_PRODUCT1_LC1_T1_F
■ Fact_W_RTL_MFPOP_PRODUCT1_LC1_T1_F
■ Fact_W_RTL_MFPCP_PRODUCT2_LC2_T2_F if option 2 is set
■ Fact_W_RTL_MFPOP_PRODUCT2_LC2_T2_F if option 2 is set
■ Fact_W_RTL_MFPCP_PRODUCT3_LC3_T3_F if option 3 is set
■ Fact_W_RTL_MFPOP_PRODUCT3_LC3_T3_F if option 3 is set


Figure 9–3 Unused Foreign Keys Removed

7. Under the BMM layer, expand the Core Business Model, and expand the logical
table Fact - Retail Planning. Expand Sources and you will see all logical table
sources used by planning. There are six logical table sources used by Merchandise
Financial Planning with flexible options: two tables for option 1, two tables for
option 2, and two tables for option 3. All their names contain the string "MFP".

Figure 9–4 Logical Table List

8. If option 2 is not defined, then disable all logical table sources used by option 2
or option 3. If option 2 is defined, but option 3 is not, then disable all logical table
sources used by option 3. To disable a logical table source, double-click the
logical table source that you want to disable, click the General tab, and then check
the Disabled checkbox.


Figure 9–5 Select the Disabled Checkbox

9. Set content for each logical table source (maximum of 6) used by merchandise
flexible planning. You need to do this for every merchandise financial planning
logical table source that is not disabled.
a. Double-click the logical table source that you want to set.
b. Click the Content tab.
c. Select the Logical Level for the Logical Dimensions Date Retail Fiscal
Calendar, Retail Organization As Was, and item. The valid values for the
fiscal calendar are Fiscal Day Detail, Fiscal Week, Fiscal Period, Fiscal
Quarter, or Fiscal Year. The valid values for the organization are Location,
Channel, or Company; Company is used when the organization hierarchy
level is set at the company level in the configuration. The valid value for the
merchandise hierarchy (item) is Product Detail if the level is set at item, or
the corresponding subclass, class, department, group, or division level
otherwise.
d. Click OK. The screen below is an example when the option is set at
item/location/day level.


Figure 9–6 Logical Table Source

10. Save the rpd file and exit.


11. Upload the rpd file to the BI Server and bounce the BI services.



10
Frequently Asked Questions

The following issues may be encountered while implementing Retail Insights. The
accompanying solutions will help you work through the issues.
Issue:
Why am I getting the Login Denied error with the following message when I try to run
a report using Oracle BI Presentation Services?
ORACLE ERROR CODE: 1017, MESSAGE: ORA-01017: INVALID USERNAME/PASSWORD; LOGON
DENIED
Solution:
Ensure that the repository connection pool has the right login credentials in the Oracle
BI Administration Tool and check the tnsnames.ora file.
Issue:
I am getting the following error when I performed the "Update all Row Counts" task
from the Oracle BI Administration tool.
UNABLE TO CONNECT DATABASE USING CONNECTION POOL
Solution:
Ensure the repository connection pool has the right login credentials in Oracle BI
Administration tool or check the tnsnames.ora entry.
Issue:
Why can’t I see query activity at the individual user level in the NQQUERY.LOG file?
Solution:
Check the logging level field in the user dialog box in the User tab. If the logging level
is set to zero, the administrator may have disabled query logging. Contact the Oracle
BI administrator to enable query logging.
Issue:
Why is the data not loaded to the fact table, even though I have valid data in the
staging table?
Solution:
Data may be missing in the corresponding dimension table(s) or the transaction date is
not in the active time period of the dimension.
Issue:
Why is the data not loaded to the dimension table, even though I have valid data in
staging table?
Solution:
Parent data may be missing in the corresponding dimension table. This applies to
dimensions with hierarchy.



Issue:
Why is the data not loaded to the fact table, even though I have valid data in the fact
staging tables and all the corresponding keys in the dimensions?
Solution:
Check the effective start and end date values in the dimension tables. If any
dimension's effective-from date value is greater than the fact date value, those fact
records are not loaded to the fact tables because the dimension records are still
future records. A diagnostic sketch follows.
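
A query along the following lines can help locate such records. The fact staging table,
dimension table, and join column are hypothetical placeholders, and the sketch assumes
the dimension carries a standard EFFECTIVE_FROM_DT column; verify the actual names in
the Oracle Retail Insights Data Model Guide before running anything.

-- Sketch: find staged fact rows whose dimension record only becomes effective later.
SELECT f.*
  FROM W_RTL_SLS_TRX_IT_LC_DY_FS f        -- hypothetical fact staging table
  JOIN W_PRODUCT_D d                      -- hypothetical dimension table
    ON d.INTEGRATION_ID = f.PROD_IT_ID    -- hypothetical join column
 WHERE d.EFFECTIVE_FROM_DT > f.DAY_DT;    -- dimension effective later than the fact date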
Issue:
The description of a subclass is changed from a to b in the source system, but why can
I not see both records in Retail Insights after the loading process?
Solution:
This type of change does not alter the relationship of the subclass to any other level
of the hierarchy above or below it. The record is simply updated to reflect the
description change, as it is tracked as an SCD Type 1 change. For more information,
refer to the Oracle Retail Insights Operations Guide.
Issue:
Why is the load program performance not improving even after using ODI
multi-threading?
Solution:
This can occur because of several reasons. Check the following settings:
■ The number of threads must be appropriate for the hardware and data volume.
■ The number of partitions on the intermediate temp table must be equal to or
higher than the number of threads.
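
The partition count on the intermediate temp table can be checked against the configured
thread count with a dictionary query such as the following. The temp table name is a
hypothetical placeholder.

-- Sketch: the partition count should be equal to or higher than the number of threads.
SELECT COUNT(*) AS partition_count
  FROM USER_TAB_PARTITIONS
 WHERE TABLE_NAME = 'W_RTL_SLS_TRX_IT_LC_DY_TMP';  -- hypothetical intermediate temp table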
Issue:
How do you execute the failed threads for multi-threading programs?
Solution:
This can be done by using the batch log in the C_LOAD_DATES table. Table C_
LOAD_DATES has a record for the execution status of each batch at thread number
level. Same thread of a same batch cannot be executed twice unless the log record is
deleted manually. This provides a possibility to re-execute only one thread for a case
when only one thread fails and other threads complete successfully.
To re-execute failed threads, the user can manually delete the threads that need to be
executed and keep all other threads untouched in the C_LOAD_DATES table. Then the
user can start the batch again. When the re-execution is done, the program will show
errors in the UNIX console, but the threads that need to be re-executed should
complete successfully. The error in the UNIX console is for the re-execution of the
threads that completed successfully in the first execution, so it can be ignored.
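
For example, a delete along the following lines could clear only the failed thread's log
record before the batch is restarted. The package name and the thread-identifier column
are assumptions; confirm the actual C_LOAD_DATES column that stores the thread number
in your release before deleting anything.

-- Sketch: remove only the failed thread's status record (names are assumptions).
DELETE FROM C_LOAD_DATES
 WHERE PACKAGE_NAME = 'SIL_RETAILSALESTRANSACTIONFACT'  -- hypothetical package name
   AND THREAD_NUM   = 3;                                -- hypothetical thread-number column
COMMIT;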
Issue:
While running the packages, a scenario may fail with the error "Variable has no value"
because ODI has run out of memory.
Solution:
If this error occurs, verify the values of the following two parameters are set as below
(for more details refer to the Oracle Retail Insights Installation Guide) and regenerate the
scenario that is failing.
■ ODI_INIT_HEAP=256M
■ ODI_MAX_HEAP=1024M



Issue:
While loading data from files to RI staging tables by using the Universal Adapter, an
error may occur because an index (the PK index) is in an unusable state.
Solution:
This can be caused by duplicate records in the source file. Because DIRECT load is
used in sqlldr, the PK index is left disabled when this type of error happens. The end
user can clean up the source data, re-enable the PK index on the target table (staging
table), clean up the records in the C_LOAD_DATES table, and then re-execute the
program.
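
After the duplicate rows are removed from the source file, the unusable index can
typically be rebuilt and the batch status cleared with statements like the following.
The index and table names are hypothetical placeholders.

-- Sketch: rebuild the unusable PK index on the staging table.
ALTER INDEX W_RTL_SLS_TRX_IT_LC_DY_FS_PK REBUILD;

-- Sketch: clear the batch status record before re-executing the load.
DELETE FROM C_LOAD_DATES
 WHERE TARGET_TABLE_NAME = 'W_RTL_SLS_TRX_IT_LC_DY_FS';  -- hypothetical staging table
COMMIT;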
