DQ 1056 DataQualityGettingStartedGuide en
10.5.6
This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial
computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such,
the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the
extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.
Informatica, PowerCenter, PowerExchange, and the Informatica logo are trademarks or registered trademarks of Informatica LLC in the United States and many
jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://www.informatica.com/trademarks.html. Other company
and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product.
The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at
[email protected].
Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE
INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.
Table of Contents 3
Chapter 4: Lesson 3. Creating Default Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . 24
Creating Default Profiles Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Task 1. Create and Run a Default Profile. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Task 2. View the Profile Results in Summary View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Creating Default Profiles Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Part II: Getting Started with Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . 44
Chapter 14: Lesson 5. Standardizing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Standardizing Data Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Task 1. Create a Target Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Step 1. Create an All_Customers_Stdz_tgt Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Step 2. Configure Read and Write Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Task 2. Create a Mapping to Standardize Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Step 1. Create a Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Step 2. Add Data Objects to the Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Step 3. Add a Standardizer Transformation to the Mapping. . . . . . . . . . . . . . . . . . . . . . . . 73
Step 4. Configure the Standardizer Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Task 3. Run the Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Task 4. View the Mapping Output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Standardizing Data Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Preface
Read the Data Quality Getting Started Guide to discover the main features and functionality of Data Quality
and to learn how to perform data quality tasks in Informatica Developer and Informatica Analyst.
Informatica Resources
Informatica provides you with a range of product resources through the Informatica Network and other online
portals. Use the resources to get the most from your Informatica products and solutions and to learn from
other Informatica users and subject matter experts.
Informatica Network
The Informatica Network is the gateway to many resources, including the Informatica Knowledge Base and
Informatica Global Customer Support. To enter the Informatica Network, visit
https://network.informatica.com.
To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or
ideas about the Knowledge Base, contact the Informatica Knowledge Base team at
[email protected].
Informatica Documentation
Use the Informatica Documentation Portal to explore an extensive library of documentation for current and
recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.
If you have questions, comments, or ideas about the product documentation, contact the Informatica
Documentation team at [email protected].
Informatica Product Availability Matrices
Product Availability Matrices (PAMs) indicate the versions of the operating systems, databases, and types of
data sources and targets that a product release supports. You can browse the Informatica PAMs at
https://network.informatica.com/community/informatica-network/product-availability-matrices.
Informatica Velocity
Informatica Velocity is a collection of tips and best practices developed by Informatica Professional Services
and based on real-world experiences from hundreds of data management projects. Informatica Velocity
represents the collective knowledge of Informatica consultants who work with organizations around the
world to plan, develop, deploy, and maintain successful data management solutions.
You can find Informatica Velocity resources at http://velocity.informatica.com. If you have questions,
comments, or ideas about Informatica Velocity, contact Informatica Professional Services at
[email protected].
Informatica Marketplace
The Informatica Marketplace is a forum where you can find solutions that extend and enhance your
Informatica implementations. Leverage any of the hundreds of solutions from Informatica developers and
partners on the Marketplace to improve your productivity and speed up time to implementation on your
projects. You can find the Informatica Marketplace at https://marketplace.informatica.com.
Informatica Global Customer Support
To find your local Informatica Global Customer Support telephone number, visit the Informatica website at
the following link:
https://www.informatica.com/services-and-training/customer-success-services/contact-us.html.
To find online support resources on the Informatica Network, visit https://network.informatica.com and
select the eSupport option.
Chapter 1
You can log in to Informatica Administrator after you install Informatica. You use the Administrator tool to
manage the domain and configure the required application services before you can access the remaining
application clients.
• Application clients. A group of clients that you use to access underlying Informatica functionality.
Application clients make requests to the Service Manager or application services.
• Application services. A group of services that represent server-based functionality. An Informatica domain
can contain a subset of application services. You create and configure the application services that the
application clients require.
Application services include system services that can have a single instance in the domain. When you
create the domain, the system services are created for you. You can configure and enable a system
service to use the functionality that the service provides.
• Profile warehouse. A relational database that the Data Integration Service uses to store profile results.
• Reference data warehouse. A relational database that stores reference data values for the reference table
objects in the Model repository.
• Repositories. A group of relational databases that store metadata about objects and processes required
to handle user requests from application clients.
• Service Manager. A service that is built in to the domain to manage all domain operations. The Service
Manager runs the application services and performs domain functions including authentication,
authorization, and logging.
• Workflow database. A relational database that stores run-time metadata for workflows.
The following table lists the application clients, not including the Administrator tool, with the application
services and repositories that each client requires:
The following application services are not accessed by an Informatica application client:
• PowerExchange® Listener Service. Manages the PowerExchange Listener for bulk data movement and
change data capture. The PowerCenter Integration Service connects to the PowerExchange Listener
through the Listener Service.
• PowerExchange Logger Service. Manages the PowerExchange Logger for Linux, UNIX, and Windows to
capture change data and write it to the PowerExchange Logger Log files. Change data can originate from
DB2 recovery logs, Oracle redo logs, a Microsoft SQL Server distribution database, or data sources on an
i5/OS or z/OS system.
• SAP BW Service. Listens for RFC requests from SAP BI and requests that the PowerCenter Integration
Service run workflows to extract from or load to SAP BI.
The following table describes the licensing options and the application features available with each option:

Data Services option:
- Create logical data object models
- Create and run mappings with Data Services transformations
- Create SQL data services
- Create web services
- Export objects to PowerCenter
- Manage reference tables

Data Services and Profiling option:
- Create logical data object models
- Create and run mappings with Data Services transformations
- Create SQL data services
- Create web services
- Export objects to PowerCenter
- Create and run rules with Data Services transformations
- Perform profiling
- Manage reference tables
Note: If you use Informatica products with a Data Engineering license, you might experience a different set of
features. For example, Data Engineering applications do not integrate with PowerCenter. You cannot perform
exception record management in Data Engineering Quality.
Depending on your license, business analysts and developers use the Analyst tool for data-driven
collaboration. You can perform column and rule profiling, scorecarding, and bad record and duplicate record
management. You can also manage reference data and provide the data to developers in a data quality
solution.
You can use the Developer tool to import metadata, create connections, and create data objects. You can
also use the Developer tool to create and run profiles, mappings, and workflows.
You can select additional views, hide views, and move views to another location in the Developer tool
workbench.
To select the views you want to display, click Window > Show View.
Data Viewer view
Displays source data, profile results, and previews of the output of a transformation.
Object Explorer view
Displays the domain and the design-time and run-time objects in the domain. Design-time objects are
stored in projects and folders in the Model repository. Run-time objects are stored as part of a run-time
application on a Data Integration Service.
Outline view
Displays objects that are dependent on an object selected in the Object Explorer view.
Progress view
Displays the progress of operations in the Developer tool, such as a mapping run.
Properties view
Displays properties for a selected object.
You can also use the Show View menu to show the following views:
Alerts view
Notifications view
Displays options to notify users or groups when all work in the Human task is complete.
Search view
Displays the search results. You can also launch the search options dialog box.
Tags view
Displays tags that define an object in the Model repository based on business usage.
• Overview. Click the Overview button to get an overview of data quality and data services solutions.
• First Steps. Click the First Steps button to learn more about setting up the Developer tool and accessing
Informatica Data Quality and Informatica Data Services lessons.
• Tutorials. Click the Tutorials button to see tutorial lessons for data quality and data services solutions.
• Web Resources. Click the Web Resources button for a link to the Informatica Knowledge Base. You can
access the Informatica How-To Library. The Informatica How-To Library contains articles about
Informatica Data Quality, Informatica Data Services, and other Informatica products.
• Workbench. Click the Workbench button to start working in the Developer tool.
Click Help > Welcome to access the welcome page after you close it.
Cheat Sheets
The Developer tool includes cheat sheets as part of the online help. A cheat sheet is a step-by-step guide that
helps you complete one or more tasks in the Developer tool.
When you complete a cheat sheet, you complete the tasks and see the results. For example, after you
complete a cheat sheet to import and preview a relational data object, you have imported a relational
database table and previewed the data in the Developer tool.
Use the Developer tool to design and run processes that achieve the following objectives:
• Profile data. Profiling reveals the content and structure of your data. Profiling is a key step in any data
project as it can identify strengths and weaknesses in your data and help you define your project plan.
The headquarters includes a central ICC team of administrators, developers, and architects responsible for
providing a common data services layer for all composite and BI applications. The BI applications include a
CRM system that contains the master customer data files used for billing and marketing.
HypoStores Corporation must perform the following tasks to integrate data from the Los Angeles operation
with data at the Boston headquarters:
• Examine the Boston and Los Angeles data for data quality issues.
• Parse information from the Los Angeles data.
• Standardize address information across the Boston and Los Angeles data.
• Validate the accuracy of the postal address information in the data for CRM purposes.
Lessons
Each lesson introduces concepts that will help you understand the tasks to complete in the lesson. The
lesson provides business requirements from the overall story. The objectives for the lesson outline the tasks
that you will complete to meet business requirements. Each lesson provides an estimated time for
completion. When you complete the tasks in the lesson, you can review the lesson summary.
If the environment within the tool is not configured, the first lesson in each tutorial helps you configure it.
Tasks
The tasks provide step-by-step instructions. Complete all tasks in the order listed to complete the lesson.
The lessons you can perform depend on whether you have the Informatica Data Quality or Informatica Data
Services products.
The following table describes the lessons you can perform, depending on your product:

Lesson 1. Setting up Informatica Analyst: Log in to the Analyst tool and create a project and folder for the
tutorial lessons. (Data Quality, Data Services)
Lesson 2. Creating Data Objects: Import a flat file as a data object and preview the data. (Data Quality)
Lesson 3. Creating Quick Profiles: Create a quick profile to quickly get an idea of data quality. (Data Quality)
Lesson 4. Creating Custom Profiles: Create a custom profile to configure columns, and sampling and
drilldown options. (Data Quality)
Lesson 5. Creating Expression Rules: Create expression rules to modify and profile column values. (Data
Quality)
Lesson 6. Creating and Running Scorecards: Create and run a scorecard to measure data quality progress
over time. (Data Quality)
Lesson 7. Creating Reference Tables from Profile Results: Create a reference table that you can use to
standardize source data. (Data Quality, Data Services)
Lesson 8. Creating Reference Tables: Create a reference table to establish relationships between source
data and valid and standard values. (Data Quality, Data Services)
Informatica Data Quality users use the Developer tool to design and run processes that enhance data quality.
Informatica Data Quality users also use the Developer tool to create and run profiles that analyze the content
and structure of data.
Profiling includes join analysis, a form of analysis that determines if a valid join is possible between two data
columns.
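As an illustration of the idea, the overlap check behind join analysis can be sketched in a few lines of Python. The column data and the function below are hypothetical, not the Data Integration Service's implementation:

```python
def join_analysis(left, right):
    """Estimate whether two columns support a valid join.

    Reports the percentage of values in each column that find a
    match in the other, and whether either side holds unique values
    (a prerequisite for a clean parent-child join).
    """
    left_set, right_set = set(left), set(right)
    matched_left = sum(1 for v in left if v in right_set) / len(left)
    matched_right = sum(1 for v in right if v in left_set) / len(right)
    return {
        "left_match_pct": round(matched_left * 100, 1),
        "right_match_pct": round(matched_right * 100, 1),
        "left_unique": len(left_set) == len(left),
        "right_unique": len(right_set) == len(right),
    }

# Hypothetical customer IDs from two sources
orders = ["C001", "C002", "C002", "C005"]
customers = ["C001", "C002", "C003", "C004"]
result = join_analysis(orders, customers)
```

Here 75% of the order rows find a matching customer, and the customer column is unique, so a child-to-parent join is plausible in that direction.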
Tutorial Prerequisites
Before you can begin the tutorial lessons, the Informatica domain must be running with at least one node set
up.
The installer includes tutorial files that you will use to complete the lessons. You can find all the files in both
the client and server installations:
• You can find the tutorial files in the following location in the Developer tool installation path:
<Informatica Installation Directory>\clients\DeveloperClient\Tutorials
• You can find the tutorial files in the following location in the services installation path:
<Informatica Installation Directory>\server\Tutorials
You need the following files for the tutorial lessons:
• All_Customers.csv
• Boston_Customers.csv
• LA_customers.csv
Chapter 2
The Informatica domain is a collection of nodes and services that define the Informatica environment.
Services in the domain include the Analyst Service and the Model Repository Service. The Analyst Service
runs the Analyst tool, and the Model Repository Service manages the Model repository. When you work in the
Analyst tool, the Analyst tool stores the assets that you create in the Model repository.
You must create a project before you can create assets in the Analyst tool. A project contains assets in the
Analyst tool. A project can also contain folders that store related assets, such as data objects that are part of
the same business requirement.
Objectives
In this lesson, you complete the following tasks:
Prerequisites
Before you start this lesson, verify the following prerequisites:
• An administrator has configured a Model Repository Service and an Analyst Service in the Administrator
tool.
• You have the host name and port number for the Analyst tool.
• You have a user name and password to access the Analyst Service. You can get this information from an
administrator.
Timing
Set aside 5 to 10 minutes to complete this lesson.
You logged in to the Analyst tool and created a project and a folder.
Now, you can use the Analyst tool to complete other lessons in this tutorial.
Story
HypoStores keeps the Los Angeles customer data in flat files. HypoStores needs to profile and analyze the
data and perform data quality tasks.
Objectives
In this lesson, you complete the following tasks:
1. Upload the flat file to the flat file cache location and create a data object.
2. Preview the data for the flat file data object.
Prerequisites
Before you start this lesson, verify the following prerequisites:
Timing
Set aside 5 to 10 minutes to complete this task.
Task 1. Create the Flat File Data Objects
In this task, you create a flat file data object from the LA_Customers file.
1. In the Analyst tool, click New > Flat File Data Object.
The Add Flat File wizard appears.
2. Select Browse and Upload, and click Browse.
3. Browse to the location of LA_Customers.csv, and click Open.
4. Click Next.
The Choose type of import panel displays the Delimited and Fixed-width options. Select the Delimited
option, which is the default.
5. Click Next.
6. Under Specify the delimiters and text qualifiers used in your data, select Double quotes as a text
qualifier.
7. Under Specify lines to import, select Import from first line to import column names from the first
nonblank line.
The Preview panel updates to show the column headings from the first row.
8. Click Next.
The Column Attributes panel shows the datatype, precision, scale, and format for each column.
9. Click Next.
The Name field displays LA_Customers.
10. Select the Tutorial_ project and the Customers folder.
11. Click Finish.
The data object appears in the folder contents for the Customers folder.
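The import options chosen above (comma delimiter, double quotes as a text qualifier, column names taken from the first line) correspond to standard CSV parsing. A minimal Python sketch, using a hypothetical two-row sample in place of LA_Customers.csv:

```python
import csv
import io

# Hypothetical sample standing in for LA_Customers.csv:
# delimited, double quotes as the text qualifier, and column
# names imported from the first nonblank line.
sample = io.StringIO(
    '"CustomerID","FullName","CustomerTier"\n'
    '"C001","Smith, John","Gold"\n'
    '"C002","Jones, Mary","Silver"\n'
)

reader = csv.reader(sample, delimiter=",", quotechar='"')
header = next(reader)  # the first line supplies the column names
rows = list(reader)
```

Note that the text qualifier lets a value such as "Smith, John" contain the delimiter without breaking the row into extra columns.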
You uploaded a flat file and created a flat file data object, previewed the data for the data object, and viewed
the properties for the data object.
After you create a data object, you create a default profile for the data object in Lesson 3, and you create a
custom profile for the data object in Lesson 4.
Create and run a default profile to analyze the quality of the data when you start a data quality project. When
you create a default profile object, you select the data object and the data object columns that you want to
analyze. A default profile skips the profile column and option configuration. The Analyst tool performs
profiling on the live flat file for the flat file data object.
Story
HypoStores wants to incorporate data from the newly-acquired Los Angeles office into its data warehouse.
Before the data can be incorporated into the data warehouse, it needs to be cleansed. You are the analyst
who is responsible for assessing the quality of the data and passing the information on to the developer who
is responsible for cleansing the data. You want to view the profile results quickly and get a basic idea of the
data quality.
Objectives
In this lesson, you complete the following tasks:
1. Create and run a default profile for the LA_Customers flat file data object.
2. View the profile results.
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 5 to 10 minutes to complete this lesson.
1. In the Library > Assets > Profiles pane, click the LA_Customers profile.
The profile results appear in the summary view.
2. In the summary view, click Columns in the Filter By pane to view the profile results for columns.
3. Move the pointer over the horizontal bar charts to view the values in percentages.
4. In the Null Distinct Non-Distinct % section, you can view the null values, distinct values, and non-distinct
values in percentages for a column.
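The statistics in the Null Distinct Non-Distinct % section can be sketched in Python. This follows one plausible reading of the summary view, in which distinct values occur exactly once, non-distinct values occur more than once, and the three percentages sum to 100; the sample values are hypothetical:

```python
from collections import Counter

def column_stats(values):
    """Return (null %, distinct %, non-distinct %) for a column."""
    total = len(values)
    nulls = sum(1 for v in values if v is None)
    counts = Counter(v for v in values if v is not None)
    # Distinct: values that occur exactly once in the column
    distinct = sum(n for n in counts.values() if n == 1)
    non_distinct = total - nulls - distinct
    pct = lambda n: round(100 * n / total, 1)
    return pct(nulls), pct(distinct), pct(non_distinct)

# Hypothetical CustomerTier column values
tiers = ["Gold", "Silver", "Gold", None, "Ruby"]
```

For this sample, one row in five is null, "Silver" and "Ruby" are distinct, and the two "Gold" rows are non-distinct.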
You created a default profile and analyzed the profile results. You got more information about the columns in
the profile, including null values and data types. You also used the column values and patterns to identify
data quality issues.
After you analyze the results of a quick profile, you can complete the following tasks:
• Create a custom profile to exclude columns from the profile and only include the columns you are
interested in.
• Create an expression rule to create virtual columns and profile them.
• Create a reference table to include valid values for a column.
You create and run a profile to analyze the quality of the data when you start a data quality project. When you
create a profile object, you start by selecting the data object and data object columns that you want to run a
profile on.
Story
HypoStores needs to incorporate data from the newly-acquired Los Angeles office into its data warehouse.
HypoStores wants to assess the quality of the customer tier data in the LA customer data file. You are the
analyst responsible for assessing the quality of the data and passing the information on to the developer
responsible for cleansing the data.
Objectives
In this lesson, you complete the following tasks:
1. Create a custom profile for the flat file data object and exclude the columns with null values.
2. Run the profile to analyze the content and structure of the CustomerTier column.
3. Drill down into the rows for the profile results.
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 5 to 10 minutes to complete this lesson.
1. Verify that you are in the summary view of the profile results for the Profile_LA_Customers profile.
2. Click the CustomerTier column.
The profile results for the column appear in the detailed view.
The Data Preview pane displays the first 100 rows for the selected column. The title of the Data Preview
pane shows the logic used for the source column.
You created a custom profile that included the CustomerTier column, ran the profile, and drilled down to the
underlying rows for the CustomerTier column in the results.
The output of an expression rule is a virtual column in the profile. The Analyst tool profiles the virtual column
when you run the profile.
You can use expression rules to validate source columns or create additional source columns based on the
value of the source columns.
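As a rough Python equivalent of the two rules in this lesson, assuming a 'First Last' name format (the actual rules use Informatica expression functions such as SUBSTR and INSTR, not Python):

```python
def firstname(full_name):
    # Everything before the first space, analogous to an
    # expression like SUBSTR(FullName, 1, INSTR(FullName, ' ') - 1)
    return full_name.split(" ", 1)[0]

def lastname(full_name):
    # Everything after the first space; empty if there is no space
    parts = full_name.split(" ", 1)
    return parts[1] if len(parts) > 1 else ""

# Each rule output becomes a virtual column in the profile
row = {"FullName": "Mary Jones"}
row["FirstName"] = firstname(row["FullName"])
row["LastName"] = lastname(row["FullName"])
```

The Analyst tool then profiles the FirstName and LastName virtual columns alongside the source columns when you run the profile.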
Story
HypoStores wants to incorporate data from the newly-acquired Los Angeles office into its data warehouse.
HypoStores wants to analyze the customer names and separate customer names into first name and last
name. HypoStores wants to use expression rules to parse a column that contains first and last names into
separate virtual columns and then profile the columns. HypoStores also wants to make the rules available to
other analysts who need to analyze the output of these rules.
Objectives
In this lesson, you complete the following tasks:
1. Create expression rules to separate the FullName column into first name and last name columns. You
create a rule that separates the first name from the full name. You create another rule that separates the
last name from the first name. You create these rules for the Profile_LA_Customers profile.
2. Run the profile and view the output of the rules in the profile.
3. Edit the rules to make them usable for other Analyst tool users.
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 10 to 15 minutes to complete this lesson.
You created two expression rules, added them to a profile, and ran the profile. You viewed the output of the
rules and made them available to all Analyst tool users.
To create a scorecard, you add columns from the profile to a scorecard as metrics, assign weights to
metrics, and configure the score thresholds. You can add filters to the scorecards based on the source data.
To run a scorecard, you select the valid values for the metric and run the scorecard to see the scores for the
metrics.
Scorecards display the value frequency for columns in a profile as scores. Scores reflect the percentage of
valid values for a metric.
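As a sketch of the computation, a score is the percentage of rows whose value appears in the metric's valid value list, and that score is then classified against the configured thresholds. The sample data and threshold values below are assumptions for illustration:

```python
def score(values, valid_values):
    # Score = percentage of rows with a valid value for the metric
    return round(100 * sum(1 for v in values if v in valid_values)
                 / len(values), 1)

def classify(s, good=90, acceptable=51):
    # Thresholds are the lower bounds of the Good and Acceptable ranges
    if s >= good:
        return "Good"
    if s >= acceptable:
        return "Acceptable"
    return "Unacceptable"

# Hypothetical CustomerTier rows and valid value list
tiers = ["Gold", "Silver", "Gold", "Bronze", "Diamond"]
s = score(tiers, {"Gold", "Silver", "Diamond", "Emerald"})
```

Removing a value from the valid value list, as you do with Ruby later in this lesson, lowers the score in exactly this way: fewer rows count as valid.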
Story
HypoStores wants to incorporate data from the newly-acquired Los Angeles office into its data warehouse.
Before the organization merges the data, they want to verify that the data in different customer tiers and
states is analyzed for data quality. You are the analyst who is responsible for monitoring the progress of
performing the data quality analysis. You want to create a scorecard from the customer tier and state profile
columns, configure thresholds for data quality, and view the score trend charts to determine how the scores
improve over time.
Objectives
In this lesson, you will complete the following tasks:
1. Create a scorecard from the results of the Profile_LA_Customers_Custom profile to view the scores for
the CustomerTier and State columns.
2. Run the scorecard to generate the scores for the CustomerTier and State columns.
3. View the scorecard to see the scores for each column.
4. Edit the scorecard to specify different valid values for the scores.
5. Configure score thresholds, and run the scorecard.
6. View score trend charts to determine how scores improve over time.
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 15 minutes to complete the tasks in this lesson.
1. Select the State row that contains the State score you want to view.
In the sc_LA_Customer - metrics section, you can view the following properties of the scorecard:
• Scorecard name.
• Total number of rows in the scorecard.
1. Verify that you are in the Scorecard workspace, and the sc_LA_Customer scorecard is open.
2. Select Actions > Edit > Metrics.
The Edit Scorecard dialog box appears.
3. In the Metrics section, select CustomerTier.
4. In the Score using: Values section, move Ruby from the Valid Values section to the Available Values
section.
Accept the default settings in the Metric Thresholds section.
5. Click Save & Run to save the changes to the scorecard and run it.
6. View the CustomerTier score again.
The CustomerTier score changes to 81.4 percent.
1. Verify that you are in the Scorecard workspace, and the sc_LA_Customer scorecard is open.
2. Select Actions > Edit > Metrics.
The Edit Scorecard dialog box appears.
3. In the Metrics section, select State.
4. In the Metric Thresholds section, enter the following ranges for the scores: Good, 90 to 100%;
Acceptable, 51 to 89%; Unacceptable, 0 to 50%.
The thresholds represent the lower bounds of the acceptable and good ranges.
5. Click Save & Run to save the changes to the scorecard and run it.
In the Scorecard panel, view the changes to the score percentage and the score displayed as a bar for
the State score.
1. Verify that you are in the Scorecard workspace, and the sc_LA_Customer scorecard is open.
2. Select the State row.
3. Click Actions > Show Trend Chart, or click the arrow under the Score Trend column.
The Trend Chart Detail dialog box appears. You can view the Good, Acceptable, and Unacceptable
thresholds for the score. The thresholds change each time you run the scorecard after editing the values
for scores in the scorecard.
4. Point to any circle in the chart to view the valid values in the Valid Values section at the bottom of the
chart.
5. Click Close to return to the scorecard.
You created a scorecard from the CustomerTier and State columns in a profile to analyze data quality for the
customer tier and state columns. You ran the scorecard to generate scores for each column. You edited the
scorecard to specify different valid values for scores. You configured thresholds for a score and viewed the
score trend chart.
You can create a reference table from the results of a profile. After you create a reference table, you can edit
the reference table to add columns or rows and add or edit standard and valid values. You can view the
changes made to a reference table in an audit trail.
Story
HypoStores wants to profile the data to uncover anomalies and standardize the data with valid values. You
are the analyst who is responsible for standardizing the valid values in the data. You want to create a
reference table based on valid values from profile columns.
Objectives
In this lesson, you complete the following tasks:
1. Create a reference table from the CustomerTier column in the Profile_LA_Customers_Custom profile by
selecting valid values for columns.
2. Edit the reference table to configure different valid values for columns.
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 15 minutes to complete the tasks in this lesson.
Property     Value
Name         CustomerTier
Precision    10
10. Optionally, choose to create a description column for rows in the reference table. Enter the name and
precision for the column.
11. Verify the CustomerTier column values in the Preview section.
12. Click Next.
The Reftab_CustomerTier_HypoStores reference table name appears. You can enter an optional
description.
13. In the Save in section, select your tutorial project where you want to create the reference table.
The Reference Tables panel lists the reference tables in the location you select.
14. Enter an optional audit note.
15. Click Finish.
You created a reference table from a profile column by selecting valid values for columns. You edited the
reference table to configure different valid values for columns.
You can manually create a reference table using the reference table editor. Use the reference table to define
and standardize the source data. You can share the reference table with a developer to use in Standardizer
and Lookup transformations in the Developer tool.
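The idea behind a reference table — a column of standard values plus the alternate values that map to them — can be sketched in a few lines. This is a minimal sketch: the column names (valid, alternate1, alternate2) and the tier values are illustrative assumptions, not the tutorial's actual Reftab contents.

```python
import csv
import io

# Illustrative reference table contents: a "valid" column of standard
# customer tier codes plus alternate spellings that standardize to it.
REFERENCE_TABLE_CSV = """\
valid,alternate1,alternate2
Ruby,RUBY,ruby
Diamond,DIAMOND,diamond
Emerald,EMERALD,emerald
"""

def load_reference_table(text):
    """Map every value in each row (the valid value and its alternates)
    to the row's valid value, the way a Lookup or Standardizer
    transformation consumes a reference table."""
    lookup = {}
    for row in csv.DictReader(io.StringIO(text)):
        for value in row.values():
            if value:
                lookup[value] = row["valid"]
    return lookup

TIER_LOOKUP = load_reference_table(REFERENCE_TABLE_CSV)
print(TIER_LOOKUP["RUBY"])   # Ruby
```

A developer can apply the same lookup-table shape inside a mapping to replace any alternate spelling with the standard tier code.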
Story
HypoStores wants to standardize data with valid values. You are the analyst who is responsible for
standardizing the valid values in the data. You want to create a reference table to define standard customer
tier codes that reference the LA customer data. You can then share the reference table with a developer.
Objectives
In this lesson, you complete the following task:
• Create a reference table using the reference table editor to define standard customer tier codes that
reference the LA customer data.
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 10 minutes to complete the task in this lesson.
Task 1. Create a Reference Table
In this task, you will create the Reftab_CustomerTier_Codes reference table to standardize the valid values
for the customer tier data.
You created a reference table using the reference table editor to standardize the customer tier values for the
LA customer data.
Chapter 10
The Informatica domain is a collection of nodes and services that define the Informatica environment.
Services in the domain include the Model Repository Service and the Data Integration Service.
The Model Repository Service manages the Model repository. The Model repository is a relational database
that stores the metadata for projects that you create in the Developer tool. A project stores objects that you
create in the Developer tool. A project can also contain folders that store related objects, such as objects that
are part of the same business requirement.
The Data Integration Service performs data integration tasks in the Developer tool.
Objectives
In this lesson, you complete the following tasks:
• Create a project to store the objects that you create in the Developer tool.
• Create a folder in the project that can store related objects.
• Select a default Data Integration Service to perform data integration tasks.
Prerequisites
Before you start this lesson, verify the following prerequisites:
Timing
Set aside 5 to 10 minutes to complete the tasks in this lesson.
1. In the Object Explorer view, select the project that you want to add the folder to.
2. Click File > New > Folder.
3. Enter a name for the folder.
4. Click Finish.
The Developer tool adds the folder under the project in the Object Explorer view. Expand the project to
see the folder.
You started the Developer tool and set up the Developer tool. You added a domain to the Developer tool,
added a Model repository, and created a project and folder. You also selected a default Data Integration
Service.
Now, you can use the Developer tool to complete other lessons in this tutorial.
Story
HypoStores Corporation stores customer data from the Los Angeles office and Boston office in flat files. You
want to work with this customer data in the Developer tool. To do this, you need to import each flat file as a
physical data object.
Objectives
In this lesson, you import flat files as physical data objects. You also set the source file directory so that the
Data Integration Service can read the source data from the correct directory.
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 10 to 15 minutes to complete the tasks in this lesson.
Task 1. Import the Boston_Customers Flat File Data
Object
In this task, you import a physical data object from a file that contains customer data from the Boston office.
2. Right-click the Tutorial_Objects folder and select New > Data Object.
3. Select Physical Data Objects > Flat File Data Object and click Next.
9. Click Next.
10. Select Import column names from first line.
Note: The Developer tool machine must have access to the source file directory on the machine that runs
the Data Integration Service. If the Developer tool cannot access the source file directory, the Developer
tool cannot preview data in the source file or run mappings that access data in the source file. If you run
multiple Data Integration Services, there is a separate source file directory for each Data Integration
Service.
15. Click the Data Viewer view.
16. In the Data Viewer view, click Run.
You created physical data objects from flat files. You also set the source file directory so that the Data
Integration Service can read the source data from the correct directory.
You use the data objects as mapping sources in the data quality lessons.
Profiling and data discovery are often the first steps in a project. You can run a profile to evaluate the structure
of data and verify that data columns are populated with the types of information you expect. If a profile
reveals problems in the data, you can define steps in your project to fix those problems. For example, if a profile
reveals that a column contains values longer than expected, you can design data quality processes
to remove or fix the problem values.
A profile that analyzes the data quality of selected columns is called a column profile.
Note: You can also use the Developer tool to discover primary key, foreign key, and functional dependency
relationships, and to analyze join conditions on data columns.
• The number of distinct and null values in each column, expressed as a number and a percentage.
• The patterns of data in each column, and the frequencies with which these values occur.
• Statistics about the column values, such as the maximum and minimum lengths of values and the first
and last values in each column.
• For join analysis profiles, the degree of overlap between two data columns, displayed as a Venn diagram
and as a percentage value. Use join analysis profiles to identify possible problems with column join
conditions.
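A short script can approximate the metrics listed above. This is a minimal sketch, not the profiling engine's algorithm: it assumes None represents a null, uses a simplified character-class pattern (9 for digits, X for letters), and the sample tier values are illustrative.

```python
from collections import Counter

def value_pattern(value):
    """Map characters to pattern symbols: 9 for digits, X for letters."""
    return "".join("9" if c.isdigit() else "X" if c.isalpha() else c
                   for c in str(value))

def profile_column(values):
    """Basic column-profile metrics: distinct and null counts (as numbers
    and percentages), pattern frequencies, and min/max value lengths."""
    total = len(values)
    non_null = [v for v in values if v is not None]
    nulls = total - len(non_null)
    lengths = [len(str(v)) for v in non_null]
    return {
        "distinct": len(set(non_null)),
        "null_count": nulls,
        "null_pct": round(100.0 * nulls / total, 1),
        "patterns": Counter(value_pattern(v) for v in non_null),
        "min_length": min(lengths),
        "max_length": max(lengths),
    }

def join_overlap_pct(left, right):
    """Degree of overlap for join analysis: the percentage of distinct
    left-column values that also occur in the right column."""
    left_set = set(left)
    return round(100.0 * len(left_set & set(right)) / len(left_set), 1)

tiers = ["Ruby", "Gold", "Ruby", None, "Silver"]
result = profile_column(tiers)
print(result["distinct"], result["null_pct"])      # 3 20.0
print(join_overlap_pct([1, 2, 3, 4], [3, 4, 5]))   # 50.0
```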
You can run a column profile at any stage in a project to measure data quality and to verify that changes to
the data meet your project objectives. You can run a column profile on a transformation in a mapping to
indicate the effect that the transformation will have on data.
Story
HypoStores wants to verify that customer data is free from errors, inconsistencies, and duplicate information.
Before HypoStores designs the processes to deliver the data quality objectives, it needs to measure the
quality of its source data files and confirm that the data is ready to process.
Objectives
In this lesson, you complete the following tasks:
• Perform a join analysis on the Boston_Customers data source and the LA_Customers data source.
• View the results of the join analysis to determine whether or not you can successfully merge data from
the two offices.
• Run a column profile on the All_Customers data source.
• View the column profiling results to observe the values and patterns contained in the data.
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 20 minutes to complete this lesson.
1. Select the tutorial folder and click File > New > Profile.
2. Select Enterprise Discovery Profile.
3. Click Next.
4. In the Name field, enter Tutorial_Profile.
5. Click Finish.
The Tutorial_Profile profile appears in the Object Explorer.
6. Drag the Boston_Customers and LA_Customers data sources to the editor on the right.
Tip: Hold down the Shift key to select multiple data objects.
7. Right-click a data object name and select Join Profile.
The New Join Profile wizard appears.
8. In the Name field, enter JoinAnalysis.
9. Verify that Boston_Customers and LA_Customers appear as data objects, and click Next.
10. Verify that the CustomerID column is selected in both data sources.
Note: Do not close the profile. You view the profile results in the next task.
1. In the Object Explorer view, browse to the data objects in your tutorial project.
2. Select the All_Customers data source.
3. Click File > New > Profile.
The New dialog box appears.
4. Select Profile.
5. Click Next.
6. In the Name field, enter All_Customers.
1. Click Window > Show View > Progress to view the progress of the All_Customers profile.
The Progress view opens.
2. When the Progress view reports that the All_Customers profile finishes running, click the Results view in
the editor.
3. In the Column Profiling section, click the CustomerTier column.
The Details section displays all values contained in the CustomerTier column and displays information
about how frequently the values occur in the data set.
4. In the Details section, double-click Ruby.
The Data Viewer runs and displays the records where the CustomerTier column contains the value Ruby.
5. In the Column Profiling section, click the OrderAmount column.
6. In the Details section, click the Show list and select Patterns.
The Details section shows the patterns found in the OrderAmount column. The string 9(5) in the Pattern
column refers to records that contain five-figure order amounts. The string 9(4) refers to records
containing four-figure order amounts.
7. In the Pattern column, double-click the string 9(4).
The Data Viewer runs and displays the records where the OrderAmount column contains a four-figure
order amount.
8. In the Details section, click the Show list and select Statistics.
The Details section shows statistics for the OrderAmount column including the average value, standard
deviation, maximum and minimum lengths, the five most common values, and the five least common
values.
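The pattern notation and statistics described in these steps can be reproduced approximately. The sketch below assumes the 9(n) notation is a run-length compression of digit and letter symbols, which matches the 9(5) and 9(4) examples but is otherwise an assumption, and it uses Python's standard library rather than the profiling engine.

```python
import re
from collections import Counter
from statistics import mean, stdev

def profile_pattern(value):
    """Render a value in the pattern notation shown above: digits become
    9, letters become X, and runs are compressed, so '12345' -> '9(5)'."""
    symbols = "".join("9" if c.isdigit() else "X" if c.isalpha() else c
                      for c in str(value))
    # Collapse runs of two or more identical symbols into symbol(count).
    return re.sub(r"(.)\1+",
                  lambda m: f"{m.group(1)}({len(m.group(0))})", symbols)

def column_statistics(values):
    """Statistics comparable to the Details section: average, standard
    deviation, length range, and the five most and least common values."""
    ranked = Counter(values).most_common()
    return {
        "average": mean(values),
        "std_dev": stdev(values),
        "min_length": min(len(str(v)) for v in values),
        "max_length": max(len(str(v)) for v in values),
        "most_common": ranked[:5],
        "least_common": ranked[-5:],
    }

amounts = [9500, 9500, 12000, 8400, 15000, 9500]
print(profile_pattern(amounts[0]))                    # 9(4)
print(column_statistics(amounts)["most_common"][0])   # (9500, 3)
```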
You learned that you can perform a join analysis on two data objects and view the degree of overlap between
the data objects. You also learned that you can run a column profile on a data object and view values,
patterns, and statistics that relate to each column in the data object.
You created the JoinAnalysis profile to determine whether data from the Boston_Customers data object can
merge with the data in the LA_Customers data object. You viewed the results of this profile and determined
that all values in the CustomerID column are unique and that you can merge the data objects successfully.
You created the All_Customers profile and ran a column profile on the All_Customers data object. You
viewed the results of this profile to discover values, patterns, and statistics for columns in the All_Customers
data object.
Parsing allows you to have greater control over the information in each column. For example, consider a data
field that contains a person's full name, Bob Smith. You can use the Parser transformation to split the full
name into separate data columns for the first name and last name. After you parse the data into new
columns, you can create custom data quality operations for each column.
You can configure the Parser transformation to use token sets to parse data columns into component
strings. A token set identifies data elements such as words, ZIP codes, phone numbers, and Social Security
numbers.
You can also use the Parser transformation to parse data that matches reference table entries or custom
regular expressions that you enter.
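The parsing idea described above can be sketched with a regular expression standing in for a word token set; real token sets also recognize ZIP codes, phone numbers, and Social Security numbers. The handling of empty or one-token input is an assumption: a real Parser transformation routes unparsed data to overflow ports.

```python
import re

# A word-token pattern stands in for a token set.
NAME_TOKEN = re.compile(r"[A-Za-z'.-]+")

def parse_full_name(full_name):
    """Split a FullName value into (first_name, last_name) columns."""
    tokens = NAME_TOKEN.findall(full_name or "")
    if not tokens:
        return ("", "")
    if len(tokens) == 1:
        return (tokens[0], "")
    # Treat the first token as the first name and the last as the surname.
    return (tokens[0], tokens[-1])

print(parse_full_name("Bob Smith"))   # ('Bob', 'Smith')
```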
Story
HypoStores wants the format of customer data files from the Los Angeles office to match the format of the
data files from the Boston office. The customer data from the Los Angeles office stores the customer name
in a FullName column, while the customer data from the Boston office stores the customer name in separate
FirstName and LastName columns. HypoStores needs to parse the Los Angeles FullName column data into
first names and last names so that the format of the Los Angeles data will match the format of the Boston
data.
Objectives
In this lesson, you complete the following tasks:
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 20 minutes to complete the tasks in this lesson.
1. In the Object Explorer view, browse to the data objects in your tutorial project.
2. Double-click the LA_Customers_tgt data object.
The LA_Customers_tgt data object opens in the editor.
3. Verify that the Overview view is selected.
4. Select the FullName column and click the New button to add a column.
A column named FullName1 appears.
5. Rename the column to Firstname. Click the Precision field and enter 30.
6. Select the Firstname column and click the New button to add a column.
A column named Firstname1 appears.
7. Rename the column to Lastname. Click the Precision field and enter 30.
8. Click File > Save to save the data object.
1. Create a mapping.
2. Add source and target data objects to the mapping.
3. Add a Parser transformation to the mapping.
4. Configure the Parser transformation to parse the source column containing the full customer name into
separate target columns containing the first name and last name.
1. In the Object Explorer view, browse to the data objects in your tutorial project.
2. Select the LA_Customers data object and drag it to the editor.
The Add Physical Data Object to Mapping window opens.
3. Verify that Read is selected and click OK.
The data object appears in the editor.
4. In the Object Explorer view, browse to the data objects in your tutorial project.
5. Select the LA_Customers_tgt data object and drag it to the editor.
The Add Physical Data Object to Mapping window opens.
6. Select Write and click OK.
The data object appears in the editor.
7. Select the CustomerID, CustomerTier, and FullName ports in the LA_Customers data object. Drag the
ports to the CustomerID port in the LA_Customers_tgt data object.
Tip: Hold down the Ctrl key to select multiple ports.
The ports of the LA_Customers data object connect to corresponding ports in the LA_Customers_tgt
data object.
1. In the Object Explorer view, locate the LA_Customers_tgt data object in your tutorial project and
double-click the data object.
The data object opens in the editor.
2. Click Window > Show View > Data Viewer.
The Data Viewer view opens.
3. In the Data Viewer view, click Run.
The Data Viewer runs and displays the data.
4. Verify that the FirstName and LastName columns display correctly parsed data.
You learned that you use the Parser transformation to parse data. You also learned that you can create a
profile for a transformation in a mapping to analyze the output from that transformation. Finally, you learned
that you can view mapping output using the Data Viewer.
You created and configured the LA_Customers_tgt data object to contain parsed output. You created a
mapping to parse the data. In this mapping, you configured a Parser transformation with a token set to parse
first names and last names from the FullName column in the Los Angeles customer file. You configured the
mapping to write the parsed data to the Firstname and Lastname columns in the LA_Customers_tgt data
object. You also ran a profile to view the output of the transformation before you ran the mapping. Finally,
you ran the mapping and used the Data Viewer to view the new data columns in the LA_Customers_tgt data
object.
To improve data quality, standardize data that contains the following types of values:
• Incorrect values
• Values with correct information in the wrong format
• Values from which you want to derive new information
Use the Standardizer transformation to search for these values in data. You can choose one of the following
search operation types:
• Text. Search for custom strings that you enter. Remove these strings or replace them with custom text.
• Reference table. Search for strings contained in a reference table that you select. Remove these strings,
or replace them with reference table entries or custom text.
For example, you can configure the Standardizer transformation to standardize address data containing the
custom strings Street and St. using the replacement string ST. The Standardizer transformation replaces
the search terms with the term ST. and writes the result to a new data column.
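The text-search strategy described above amounts to a whole-word, case-insensitive search-and-replace. A minimal sketch follows, using letter-only search terms; punctuated variants such as St. complicate the word-boundary match and are omitted here.

```python
import re

# Search strings and their replacements, as in the Street/ST. example.
REPLACEMENTS = {
    "STREET": "ST.",
    "BOULEVARD": "BLVD.",
    "AVENUE": "AVE.",
    "DRIVE": "DR.",
    "PARK": "PK.",
}

# Match any search term as a whole word, ignoring case.
SEARCH = re.compile(r"\b(" + "|".join(map(re.escape, REPLACEMENTS)) + r")\b",
                    re.IGNORECASE)

def standardize(text):
    """Replace each whole-word search term with its replacement string."""
    return SEARCH.sub(lambda m: REPLACEMENTS[m.group(0).upper()], text)

print(standardize("123 Main Street"))   # 123 Main ST.
```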
Story
HypoStores needs to standardize its customer address data so that all addresses use terms consistently.
The address data in the All_Customers data object contains inconsistently formatted entries for common
terms such as Street, Boulevard, Avenue, Drive, and Park.
Objectives
In this lesson, you complete the following tasks:
Prerequisites
Before you start this lesson, verify the following prerequisite:
Timing
Set aside 15 minutes to complete this lesson.
1. Create a mapping.
2. Add source and target data objects to the mapping.
3. Add a Standardizer transformation to the mapping.
4. Configure the Standardizer transformation to standardize common address terms to consistent formats.
1. In the Object Explorer view, browse to the data objects in your tutorial project.
2. Select the All_Customers data object and drag it to the editor.
The Add Physical Data Object to Mapping window opens.
3. Verify that Read is selected and click OK.
The data object appears in the editor.
4. In the Object Explorer view, browse to the data objects in your tutorial project.
5. Select the All_Customers_Stdz_tgt data object and drag it to the editor.
The Add Physical Data Object to Mapping window opens.
6. Select Write and click OK.
The data object appears in the editor.
7. Select all ports in the All_Customers data object. Drag the ports to the CustomerID port in the
All_Customers_Stdz_tgt data object.
Tip: Hold down the Shift key to select multiple ports. You might need to scroll down the list of ports to
select all of them.
The ports of the All_Customers data object connect to corresponding ports in the
All_Customers_Stdz_tgt data object.
Note: You add an output port to the transformation when you configure a standardization strategy.
Note: You will define five standardization operations in this task. Each operation replaces a string in the input
column with a new string.
String        Replacement
STREET        ST.
BOULEVARD     BLVD.
AVENUE        AVE.
DRIVE         DR.
PARK          PK.
12. Repeat steps 9 through 11 to define standardization operations for all strings in the table.
13. Drag the Address1 output port to the Address1 port in the All_Customers_Stdz_tgt data object.
14. Click File > Save to save the mapping.
1. In the Object Explorer view, locate the All_Customers_Stdz_tgt data object in your tutorial project and
double-click the data object.
The data object opens in the editor.
2. Click Window > Show View > Data Viewer.
The Data Viewer view opens.
3. In the Data Viewer view, click Run.
The Data Viewer displays the mapping output.
4. Verify that the Address1 column displays correctly standardized data. For example, all instances of the
string STREET should be replaced with the string ST.
You learned that you can use a Standardizer transformation to standardize strings in an input column. You
also learned that you can view mapping output using the Data Viewer.
You created and configured the All_Customers_Stdz_tgt data object to contain standardized output. You
created a mapping to standardize the data. In this mapping, you configured a Standardizer transformation to
standardize the Address1 column in the All_Customers data object. You configured the mapping to write the
standardized output to the All_Customers_Stdz_tgt data object. Finally, you ran the mapping and used the
Data Viewer to view the standardized data in the All_Customers_Stdz_tgt data object.
An address is valid when it is deliverable. An address may be well formatted and contain real street, city, and
post code information, but if the data does not result in a deliverable address then the address is not valid.
The Developer tool uses address reference datasets to check the deliverability of input addresses.
Informatica provides address reference datasets.
An address reference dataset contains data that describes all deliverable addresses in a country. The
address validation process searches the reference dataset for the address that most closely resembles the
input address data. If the process finds a close match in the reference dataset, it writes new values for any
incorrect or incomplete data values. The process creates a set of alphanumeric codes that describe the type
of match found between the input address and the reference addresses. It can also restructure the address,
and it can add information that is absent from the input address, such as a four-digit ZIP code suffix for a
United States address.
Use the Address Validator transformation to build address validation processes in the Developer tool. This
multi-group transformation contains a set of predefined input ports and output ports that correspond to all
possible fields in an input address. When you configure an Address Validator transformation, you select the
default reference dataset, and you create an input and output address structure using the transformation
ports. In this lesson, you configure the transformation to validate United States address data.
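The search-and-match idea behind address validation can be illustrated with a toy example. The reference rows, similarity measure, thresholds, and code assignments below are illustrative assumptions only; a real address reference dataset and its matching logic are far more sophisticated.

```python
from difflib import SequenceMatcher

# A toy reference dataset; real address reference data describes every
# deliverable address in a country.
REFERENCE = [
    "100 MAIN ST SPRINGFIELD IL 62701",
    "200 OAK AVE SPRINGFIELD IL 62702",
]

def validate(address):
    """Find the closest reference address and emit a match code."""
    key = address.upper()
    best = max(REFERENCE,
               key=lambda ref: SequenceMatcher(None, key, ref).ratio())
    score = SequenceMatcher(None, key, best).ratio()
    if score == 1.0:
        code = "V4"   # verified: input matched perfectly
    elif score >= 0.8:
        code = "C2"   # corrected against the reference data
    else:
        code = "I3"   # could not be corrected reliably
    return best, round(score, 2), code

print(validate("100 Main St Springfield IL 62701")[2])   # V4
```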
Story
HypoStores needs correct and complete address data to ensure that its direct mail campaigns and other
consumer mail items reach its customers. Correct and complete address data also reduces the cost of
mailing operations for the organization. In addition, HypoStores needs its customer data to include
addresses in a printable format that is flexible enough to include addresses of different lengths.
To meet these business requirements, the HypoStores ICC team creates an address validation mapping in
the Developer tool.
Objectives
In this lesson, you complete the following tasks:
• Create a target data object that will contain the validated address fields and match codes.
• Create a mapping with a source data object, a target data object, and an Address Validator
transformation.
• Configure the Address Validator transformation to validate the address data of your customers.
• Run the mapping to validate the address data, and review the match code outputs to verify the validity of
the address data.
Prerequisites
Before you start this lesson, verify the following prerequisites:
Timing
Set aside 25 minutes to complete this lesson.
To create and configure the target data object, complete the following steps:
The MailabilityScore value describes the deliverability of the input address. The MatchCode value describes
the type of match the transformation makes between the input address and the reference data addresses.
1. In the Object Explorer view, browse to the data objects in your tutorial project.
2. Double-click the All_Customers_av_tgt data object.
The All_Customers_av_tgt data object opens in the editor.
3. Verify that Overview is selected.
4. Select the final port in the port list. This port is named MiscDate.
5. Click New.
A port named MiscDate1 appears.
To create the mapping and add the objects you need, complete the following steps:
All_Customers is the source data object for the mapping. The Address Validator transformation reads data
from this object. All_Customers_av_tgt is the target data object for the mapping. This object receives data
from the Address Validator transformation.
1. In the Object Explorer view, browse to the data objects in your tutorial project.
2. Select the All_Customers data object and drag it to the editor.
The Add Physical Data Object to Mapping window opens.
3. Verify that Read is selected and click OK.
The data object appears in the editor.
4. In the Object Explorer view, browse to the data objects in your tutorial project.
5. Select the All_Customers_av_tgt data object and drag it onto the editor.
The Add Physical Data Object to Mapping window opens.
When this step is complete, you can configure the transformation and connect its ports to the data objects.
Note: The Address Validator transformation contains a series of predefined input and output ports. Select the
ports you need and connect them to the objects in the mapping.
The Address Validator transformation contains several groups of predefined input ports. Select the input
ports that correspond to the fields in your input address and add these ports to the transformation.
Hold the Ctrl key when selecting ports in the steps below to select multiple ports in a single operation.
Delivery Address Line 1: Street address data, such as street name and building number.
Note: Hold the Ctrl key to select multiple ports in a single operation.
5. On the toolbar above the port names list, click Add port to transformation.
This toolbar is visible when you select Templates.
The selected ports appear in the transformation in the mapping editor.
6. Connect the source ports to the Address Validator transformation ports as follows:
Source Port    Transformation Port
ZIP            Postcode 1
State          Province 1
The Address Validator transformation contains several groups of predefined output ports. Select the ports
that define the address structure you require and add these ports to the transformation.
You can also select ports containing information on the type of validation achieved for each address.
Street Complete 1: Street address data, such as street name and building number.
5. Expand the Last Line Elements output port group and select the following ports:
Note: Hold the Ctrl key to select multiple ports in a single operation.
6. Expand the Country output port group and select the following port:
7. Expand the Status Info output port group and select the following ports:
Mailability Score: Score that represents the chance of successful postal delivery.
Match Code: Code that represents the degree of similarity between the input address and the reference data.
8. On the toolbar above the port names list, click Add port to transformation.
This toolbar is visible when you select Templates.
Transformation Port    Target Port
Postcode 1             ZIP
• Connect the unused ports on the data source to the ports with the same names on the data target.
The Match Code value is an alphanumeric code representing the type of validation that the mapping
performed on the address.
The Mailability Score value is a single-digit value that summarizes the deliverability of the address.
1. In the Object Explorer view, find the All_Customers_av_tgt data object in your tutorial project and
double-click the data object.
The data object opens in the editor.
2. Select Window > Show View > Data Viewer.
The Data Viewer opens.
Code Description
A1 Address code lookup found a partial address or a complete address for the input
code.
C2 Corrected, but the delivery status is unclear due to absent reference data.
I4 Data cannot be corrected completely, but there is a single match with an address
in the reference data.
I3 Data cannot be corrected completely, and there are multiple matches with
addresses in the reference data.
N7 Validation error. Address validation did not take place because single-line
validation is not unlocked.
N6 Validation error. Address validation did not take place because single-line
validation is not supported for the destination country.
N5 Validation error. Address validation did not take place because the reference
database is out of date.
N4 Validation error. Address validation did not take place because the reference data
is corrupt or badly formatted.
N3 Validation error. Address validation did not take place because the country data
cannot be unlocked.
N2 Validation error. Address validation did not take place because the required
reference database is not available.
N1 Validation error. Address validation did not take place because the country is not
recognized or not supported.
Q3 Suggestion List mode. Address validation can retrieve one or more complete
addresses from the address reference data that correspond to the input address.
Q2 Suggestion List mode. Address validation can combine the input address elements
and elements from the address reference data to create a complete address.
R7 Country recognized from the country name, but the validation process identified
errors in the country data.
S1 Parse mode. There was a parsing error due to an input format mismatch.
V4 Verified. The input data is correct. Address validation checked all postally relevant
elements, and inputs matched perfectly.
V3 Verified. The input data is correct, but some or all elements were standardized, or
the input contains outdated names or exonyms.
V2 Verified. The input data is correct, but some elements cannot be verified because
of incomplete reference data.
V1 Verified. The input data is correct, but user standardization has negatively
impacted deliverability. For example, the post code length is too short.
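Downstream logic often branches on the code families in the table above. The helper below groups codes by their leading character; the deliverability rule treating verified and corrected addresses as deliverable is an assumption for this sketch, not an Informatica rule.

```python
# Group the match codes from the table above by their leading character.
CODE_FAMILIES = {
    "A": "address code lookup",
    "C": "corrected",
    "I": "incomplete correction",
    "N": "validation error",
    "Q": "suggestion list",
    "R": "country recognized with errors",
    "S": "parse error",
    "V": "verified",
}

def classify_match_code(code):
    """Return the family name for a match code such as V4 or N1."""
    return CODE_FAMILIES.get(code[:1], "unknown")

def is_deliverable(code):
    """Treat verified and corrected addresses as deliverable (an
    assumption for this sketch)."""
    return code[:1] in ("V", "C")

print(classify_match_code("V4"), is_deliverable("N1"))   # verified False
```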
You learned that the address validation process also returns status information on the quality of each
address.
You learned that Administrator tool users run the Data Quality Content Installer to install address reference
data.
You also learned that the Address Validator transformation is a multi-group transformation, and that you
select the input and output ports for the transformation from the port groups. The input ports you select
determine the content of the address that is validated. The output ports determine the content of the final
address record.
Can I use one user account to access the Administrator tool, the Developer tool, and the Analyst tool?
Yes. You can give a user permission to access all three tools. You do not need to create separate user
accounts for each client application.
You can use the Developer tool and the Analyst tool to create and share reference data objects. The
Model repository stores the reference data object metadata. The reference data database stores
reference table data values. Configure the reference data database on the Content Management Service.
You can validate a mapplet as a rule. A rule is business logic that defines conditions applied to source
data, for example when you run a profile. You can validate a mapplet as a rule when the mapplet meets
the following requirements:
I have a Data Engineering product license. Can I use the Developer tool to export objects to PowerCenter?
What is the difference between a source and target in PowerCenter and a physical data object in the Developer tool?
In PowerCenter, you create a source definition to include as a mapping source. You create a target
definition to include as a mapping target. In the Developer tool, you create a physical data object that
you can use as a mapping source or target.
What is the difference between a mapping in the Developer tool and a mapping in PowerCenter?
A PowerCenter mapping specifies how to move data between sources and targets. A Developer tool
mapping specifies how to move data between the mapping input and output.
A PowerCenter mapping must include one or more source definitions, source qualifiers, and target
definitions. A PowerCenter mapping can also include shortcuts, transformations, and mapplets.
A Developer tool mapping must include mapping input and output. A Developer tool mapping can also
include transformations and mapplets. The Developer tool includes the following types of mappings:
• Mapping that moves data between sources and targets. This type of mapping differs from a
PowerCenter mapping only in that it cannot use shortcuts and does not use a source qualifier.
• Logical data object mapping. A mapping in a logical data object model. A logical data object mapping
can contain a logical data object as the mapping input and a data object as the mapping output. Or, it
can contain one or more physical data objects as the mapping input and logical data object as the
mapping output.
• Virtual table mapping. A mapping in an SQL data service. It contains a data object as the mapping
input and a virtual table as the mapping output.
• Virtual stored procedure mapping. Defines a set of business logic in an SQL data service. It contains
an Input Parameter transformation or physical data object as the mapping input and an Output
Parameter transformation or physical data object as the mapping output.
What is the difference between a mapplet in PowerCenter and a mapplet in the Developer tool?
A mapplet in PowerCenter and in the Developer tool is a reusable object that contains a set of
transformations. You can reuse the transformation logic in multiple mappings.
A PowerCenter mapplet can contain source definitions or Input transformations as the mapplet input. It
must contain Output transformations as the mapplet output.
A Developer tool mapplet can contain data objects or Input transformations as the mapplet input. It can
contain data objects or Output transformations as the mapplet output. A mapping in the Developer tool
also includes the following features:
C
creating custom profiles: overview 27
creating data objects: overview 21
creating default profiles: overview 24
creating expression rules: overview 31
creating reference tables from columns: overview 39
creating scorecards: overview 34
P
profiling data: overview 58
R
reference tables: overview 42
S
setting up Analyst tool: overview 18