Creditone Bank
1. Interaction history.
2. Can we stop IH? What are the new features in IH, and which IH features exist in the system?
3. NBA arbitration, adaptive models, business levers.
4. Strategy execution.
5. Campaign Execution process
Deployment: what is rollback, and how do you deploy the decision rules?
How do you handle millions of records? From the activity, we invoke a data flow that reads the data from the files and writes it into a table (data jobs).
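As a rough, non-Pega sketch of that pattern, the following Python reads a large file in fixed-size batches and bulk-inserts each batch into a table; the file name, table schema, and batch size are assumptions for illustration only.

import csv
import sqlite3
from itertools import islice

BATCH_SIZE = 5000  # hypothetical batch size

def ingest_in_batches(csv_path: str, conn: sqlite3.Connection) -> int:
    """Stream rows from a CSV file and bulk-insert them batch by batch."""
    total = 0
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        while True:
            batch = list(islice(reader, BATCH_SIZE))
            if not batch:
                break
            conn.executemany(
                "INSERT INTO customer (customer_id, name, segment) VALUES (?, ?, ?)",
                batch,
            )
            conn.commit()
            total += len(batch)
    return total

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (customer_id TEXT, name TEXT, segment TEXT)")
    # ingest_in_batches("customers.csv", conn)  # file path is hypothetical

Batching keeps memory usage flat regardless of file size, which is what makes the millions-of-records case manageable.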
Pega collections
For example:
1. In the Value field, enter the number of days to store interaction data.
2. Click Save.
The CDH-Common Ruleset and the Data-pxStrategyResult Class are a common layer shared by all applications in a multi-application system. The InteractionFiles and DecisionFiles Data Sets, interaction history properties, summaries, and aggregates must be available in this common layer so that all applications can use them.
Changes to summaries trigger a backfill operation that uses the raw interaction history data from the file repository. The backfill operation recreates the summary data up to the current moment, which might take days to complete depending on the data volume.
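To make the backfill idea concrete, here is a minimal sketch (not the product implementation) that rebuilds a per-customer, per-outcome count summary by re-reading raw interaction history files; the directory layout and field names are assumptions.

import csv
import glob
from collections import Counter
from datetime import datetime

def backfill_summary(raw_ih_dir: str) -> Counter:
    """Rebuild a (customer, outcome) count summary from raw IH files up to now."""
    now = datetime.now()
    summary = Counter()
    for path in glob.glob(f"{raw_ih_dir}/*.csv"):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                # Assumes a naive ISO timestamp column named OutcomeTime.
                if datetime.fromisoformat(row["OutcomeTime"]) <= now:
                    summary[(row["CustomerID"], row["Outcome"])] += 1
    return summary

Because every raw file is re-read, the cost grows with the retained history, which is why a backfill can take days on large data volumes.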
Dataflow issues:
1. Data Flow fails to run from a job scheduler in the System Runtime Context
Pega Customer Decision Hub version 8.8 addresses a common bottleneck in project implementations. In the past, Customer Decision Hub needed a custom process to ingest data. With Customer Profile Designer, you can import data through a standardized and well-defined process by using data jobs. This update significantly reduces development effort in projects and gives complete visibility into the data ingestion process.
It is also possible to use the import and export functions in Customer Profile Designer to manually
import and export data in CSV format.
File detection: This trigger initiates the run when the token files are uploaded to the file repository (see the sketch after these trigger types).
Scheduler: A scheduled time and frequency triggers the data import job.
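A minimal sketch of the file-detection idea, outside Pega: a polling loop that waits for a token file to appear in the repository before the run starts. The directory, token-file pattern, and polling interval are assumptions.

import time
from pathlib import Path

def wait_for_token(repo_dir: str, token_pattern: str = "*.tok",
                   poll_seconds: int = 30) -> Path:
    """Poll the file repository until a token file appears, then return its path."""
    repo = Path(repo_dir)
    while True:
        tokens = sorted(repo.glob(token_pattern))
        if tokens:
            return tokens[0]  # the data job run can start now
        time.sleep(poll_seconds)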
Every time a data import job begins, the system creates a work object (PegaMKT-Work-JobImport). This work object represents the run of the data job. The run has four main stages:
Data Validation: This stage initializes the run, and you can extend it to perform additional custom validations. To perform custom validation, complete the following actions:
Define an activity that uses these attributes to verify whether the file content is correct (pyDataImportCustomPreValidation).
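As a rough illustration (outside Pega, and not the actual pyDataImportCustomPreValidation contract), a custom pre-validation usually comes down to checking the file content against the expected data model, for example the header row:

import csv

EXPECTED_HEADER = ["CustomerID", "Name", "Segment"]  # hypothetical data model

def pre_validate(csv_path: str) -> tuple[bool, str]:
    """Fail fast when the file header does not match the expected data model."""
    with open(csv_path, newline="") as f:
        header = next(csv.reader(f), [])
    if header != EXPECTED_HEADER:
        return False, f"Unexpected header: {header}"
    return True, "OK"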
Data Staging: In this stage, the available records can pass through intermediate storage. By default, the staging data flow has an abstract destination that helps with post-validation. You can customize the corresponding data flow to use an intermediate data store. This stage primarily helps with the following scenarios (sketched after this list):
Performing post-validation of records to ensure that the record counts specified in the manifest file match the actual counts.
Record de-duplication.
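A minimal sketch of those two staging scenarios, assuming a JSON manifest with a RecordCount field and a CustomerID key (both assumptions, not the product format):

import json

def post_validate_count(manifest_path: str, staged_records: list[dict]) -> bool:
    """Check that the staged record count matches the count declared in the manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    return len(staged_records) == manifest["RecordCount"]

def deduplicate(staged_records: list[dict], key: str = "CustomerID") -> list[dict]:
    """Keep the first record for each key and drop later duplicates."""
    seen, unique = set(), []
    for record in staged_records:
        if record[key] not in seen:
            seen.add(record[key])
            unique.append(record)
    return unique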
Data Processing: The actual data ingestion to the customer data source occurs in this stage. You can
customize the ingestion data flow if you have a different ingestion logic.
Data Post-Processing: This stage occurs after data ingestion to, or deletion from, the destination data source. The import run performs the following activities (sketched after this list):
Data archival: The system archives the source files in the ArchivedDataFiles folder, which is relative to the root folder. Each run is placed in the folder that corresponds to its completion date.
Data cleanup: Cleanup occurs in the current source folder once the files are archived successfully. You can manage the retention policy for the archived files in the general Data Import settings in App Studio. The default values are seven days for the files and 30 days for the runs.
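The following sketch mirrors the archival and cleanup behavior described above (dated folders under ArchivedDataFiles, a seven-day default retention), but it is an illustration only, not the product implementation.

import shutil
import time
from datetime import date
from pathlib import Path

def archive_and_cleanup(source_dir: str, root_dir: str, retention_days: int = 7) -> None:
    """Move processed files into a dated archive folder, then purge old archives."""
    archive_root = Path(root_dir) / "ArchivedDataFiles"
    today_folder = archive_root / date.today().isoformat()
    today_folder.mkdir(parents=True, exist_ok=True)
    # Archive: move every file from the source folder into today's archive folder.
    for source_file in Path(source_dir).iterdir():
        shutil.move(str(source_file), str(today_folder / source_file.name))
    # Cleanup: delete archive folders older than the retention window.
    cutoff = time.time() - retention_days * 86400
    for folder in archive_root.iterdir():
        if folder.is_dir() and folder.stat().st_mtime < cutoff:
            shutil.rmtree(folder)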
You manage the flow of data from one source to another through data sets and data flows. Data sets access different types of stored data and insert or update the data in various destinations. Data flows are scalable and resilient data pipelines that ingest, process, and move data from one or more sources to one or more destinations. In a data ingestion case, there are two types of data sets: a file data set accesses the data in the repository, and a database data set stores the data in the corresponding database tables. Customer Profile Designer automatically configures all the data sets and data flows to process the data for you.
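A minimal sketch of that source-to-destination shape, with hypothetical FileDataSet and DatabaseDataSet classes connected by a small data flow function (again, outside Pega; the field and table names are assumptions):

import csv
import sqlite3
from typing import Iterable, Iterator

class FileDataSet:
    """Source: reads records from a CSV file in the file repository."""
    def __init__(self, path: str):
        self.path = path

    def browse(self) -> Iterator[dict]:
        with open(self.path, newline="") as f:
            yield from csv.DictReader(f)

class DatabaseDataSet:
    """Destination: saves records into a database table."""
    def __init__(self, conn: sqlite3.Connection, table: str):
        self.conn, self.table = conn, table

    def save(self, records: Iterable[dict]) -> None:
        rows = [(r["CustomerID"], r["Segment"]) for r in records]
        self.conn.executemany(
            f"INSERT INTO {self.table} (customer_id, segment) VALUES (?, ?)", rows)
        self.conn.commit()

def run_data_flow(source: FileDataSet, destination: DatabaseDataSet) -> None:
    """A minimal data flow: stream records from the source into the destination."""
    destination.save(source.browse())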
First, the data for the file repository is structured externally to match the data model in Customer Decision Hub and then compressed. For file-level and record-level validations, a manifest file is required; it holds additional information about the data file that it represents.
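As an illustration of that packaging step, the following sketch compresses a CSV file and writes a companion manifest; the manifest fields shown (file name, record count, checksum) are assumptions about what such a file could carry, not the exact format Customer Decision Hub expects.

import gzip
import hashlib
import json
from pathlib import Path

def package_for_ingestion(csv_path: str, out_dir: str) -> None:
    """Compress a structured CSV file and write a manifest that describes it."""
    data = Path(csv_path).read_bytes()
    record_count = max(data.count(b"\n") - 1, 0)  # data rows, excluding the header
    gz_path = Path(out_dir) / (Path(csv_path).name + ".gz")
    gz_path.write_bytes(gzip.compress(data))  # the compressed data file
    manifest = {
        "FileName": gz_path.name,
        "RecordCount": record_count,
        "Checksum": hashlib.md5(data).hexdigest(),
    }
    (Path(out_dir) / "manifest.json").write_text(json.dumps(manifest, indent=2))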
The ETL team loads the files from an external environment to a file repository to which Customer
Decision Hub has access.
File detection or a given schedule can trigger the data jobs. When the file detection trigger is selected, the system requires a token file to initiate the data job. File listeners continuously check the file repository for the token file, and when they detect it, the ingestion process begins. For a scheduled data job, the process begins when the defined time arrives.
Case types streamline and automate the flow of files from the repository into their destinations, because the process is visible and provides various error-handling options. Customer Decision Hub includes a preconfigured case type for you.
After the process begins, the system creates a data job work object. During this process, the files are
validated, staged, and moved to their final destination. Finally, the files are archived.