
CREDITONE BANK

What are the issues that you face in real time?

Millions (1M+) of customers, and the following areas:

1. Interaction history.
2. Can we stop new features in IH? What IH features already exist in the system?
3. NBA arbitration, adaptive models, business levers.
4. Strategy execution.
5. Campaign execution process.

Deployment: what is rollback, and how will you deploy the decision rules?

How do you handle millions of records: from the activity, we invoke a data flow that reads the data from the files and writes it into a table (data jobs).
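As an illustration only, the sketch below shows batched file-to-table ingestion in plain Python with sqlite3; it stands in for the Pega data flow (file source to database destination), and the file name, table, and column names are hypothetical.

# Conceptual sketch only: batched file-to-table ingestion, standing in for the
# Pega data flow (file data set -> database data set). Names are hypothetical.
import csv
import sqlite3

BATCH_SIZE = 5000  # write in batches instead of row-by-row for throughput

def ingest_file(csv_path: str, db_path: str = "customers.db") -> int:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customer (customer_id TEXT PRIMARY KEY, name TEXT, email TEXT)"
    )
    total = 0
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        batch = []
        for row in reader:
            batch.append((row["customer_id"], row["name"], row["email"]))
            if len(batch) >= BATCH_SIZE:
                conn.executemany("INSERT OR REPLACE INTO customer VALUES (?, ?, ?)", batch)
                conn.commit()
                total += len(batch)
                batch.clear()
        if batch:  # flush the final partial batch
            conn.executemany("INSERT OR REPLACE INTO customer VALUES (?, ?, ?)", batch)
            conn.commit()
            total += len(batch)
    conn.close()
    return total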

Upgrade: multiple applications on the same layer (data-decisionrequest-customer-cdh).

Pega collections

Interaction history: CDH-Common


1. In the header of Dev Studio, click Create > SysAdmin > Dynamic System Settings.

2. Enter a short description.

For example:

Days to keep exported interactions

3. In the Setting purpose field, enter decision/interaction_files/days_to_keep_data.

4. In the Owning Ruleset field, enter Pega-DecisionEngine.

5. Click Create and open.

6. In the Value field, enter the number of days to store interaction data.

7. Click Save.

The pyInteractionHistoryConfiguration Data Transform that is defined in the Data-Decision-IH-Configuration Class configures interaction history by setting the properties in this Class. In your application Ruleset, you can override the pyInteractionHistoryConfiguration Data Transform to customize interaction history. In Pega Customer Decision Hub™ environments, the override must be in the CDH-Common Ruleset if that Ruleset is present.

Reacting to real-time events:

 The number of predictors that can be added in ADM is 300.

Updating your system configuration to support multiple applications:

The CDH-Common Ruleset and the Data-pxStrategyResult Class are a common layer shared by all applications in a multi-application system. The InteractionFiles and DecisionFiles Data Sets, interaction history properties, summaries, and aggregates must be available in this common layer so that all applications can use them.

 You can verify this by running the Declare_MKTSystemSetting.SharedRuleSetName node-level data page.

Define how long you want to keep interactions in your file repository. By default, the system keeps exported interactions for 465 days. A cleanup activity removes files that are older than the retention period.
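A minimal sketch, assuming a plain file-system repository, of what such a retention cleanup does: remove exported interaction files older than the configured number of days (465 by default per the text). This is not the Pega cleanup activity itself.

# Conceptual sketch only: delete exported interaction files older than the
# retention window. Directory layout is assumed; 465 days mirrors the text.
import os
import time

def cleanup_exported_interactions(repo_dir: str, days_to_keep: int = 465) -> list[str]:
    cutoff = time.time() - days_to_keep * 24 * 60 * 60
    removed = []
    for name in os.listdir(repo_dir):
        path = os.path.join(repo_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)       # older than the retention period
            removed.append(name)
    return removed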

Changes to summaries trigger a backfill operation that uses the raw interaction history data from the file repository. The backfill operation recreates the summary data up to the current moment, which might take days to complete depending on the data volume.

Data flow issues:

1. A Data Flow fails to run from a job scheduler in the System Runtime Context. Resolution: set the pyAccessGroup property in the Data-Decision-DDF-RunOptions class.

2. Data Flows cannot parse JSON records (see the sketch below).
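As a generic illustration (not a Pega feature), the sketch below shows one way to isolate records that cannot be parsed as JSON before they reach a data flow, so bad lines can be inspected instead of failing the run.

# Conceptual sketch only: split newline-delimited JSON input into parsable and
# unparsable records for triage. Generic pre-check, outside Pega.
import json

def split_json_lines(lines):
    good, bad = [], []
    for lineno, line in enumerate(lines, start=1):
        line = line.strip()
        if not line:
            continue
        try:
            good.append(json.loads(line))
        except json.JSONDecodeError as err:
            bad.append((lineno, str(err)))   # keep position and reason for triage
    return good, bad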


Data-Ingestion:

Pega Customer Decision Hub version 8.8 addresses a common bottleneck in project implementations. In the past, Customer Decision Hub needed a custom process to ingest data. With Customer Profile Designer, you can import data through a standardized and well-defined process using data jobs. This update significantly reduces development effort in projects and gives complete visibility into the data ingestion process.

It is also possible to use the import and export functions in Customer Profile Designer to manually
import and export data in CSV format.
 File detection: This trigger initiates the run when the token files are uploaded to the file
repository.

 Scheduler: A scheduled time and frequency triggers the data import job.

Every time a data import job begins, the system creates a work object (PegaMKT-Work-JobImport).
This work object is the run of the Data Job. The run has four main stages:

Data Validation: This stage is the initialization of the run, and you can extend the stage to complete additional custom validations. To perform custom validation, complete the following two actions:

 Provide custom attributes for each file in the manifest file.

 Define an activity that uses these attributes to verify whether the file content is correct (pyDataImportCustomPreValidation); a conceptual sketch follows this list.
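A conceptual sketch, assuming a CSV data file and hypothetical manifest keys (expectedHeader, expectedRecordCount), of the kind of check a custom pre-validation activity such as pyDataImportCustomPreValidation might perform:

# Conceptual sketch only: validate a data file against attributes carried in the
# manifest. The manifest keys and file format are assumptions, not a Pega schema.
import csv

def pre_validate(data_file: str, manifest: dict) -> list[str]:
    errors = []
    with open(data_file, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader, [])
        record_count = sum(1 for _ in reader)
    if manifest.get("expectedHeader") and header != manifest["expectedHeader"]:
        errors.append(f"unexpected header: {header}")
    expected = manifest.get("expectedRecordCount")
    if expected is not None and record_count != expected:
        errors.append(f"record count {record_count} != manifest {expected}")
    return errors   # empty list means the file passes custom validation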
Data Staging: In this stage, the available records can go through intermediate storage. By default, the staging data flow has an abstract destination that helps with post-validation. You can customize the corresponding data flow to use an intermediate data store. This stage primarily helps with the following scenarios:

 Performing post-validation of records to ensure that the record counts specified in the manifest file match the actual counts.

 Record de-duplication (a conceptual sketch follows this list).
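A minimal sketch of record de-duplication on a key during staging; the key field name customer_id is hypothetical and the logic runs outside Pega, purely to illustrate the idea:

# Conceptual sketch only: keep the first occurrence of each key and drop the rest.
def dedupe(records, key: str = "customer_id"):
    seen = set()
    unique = []
    for rec in records:
        k = rec[key]
        if k not in seen:          # first time this key is seen
            seen.add(k)
            unique.append(rec)
    return unique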

Data Processing: The actual data ingestion into the customer data source occurs in this stage. You can customize the ingestion data flow if you have different ingestion logic.

Data Post-Processing: This stage occurs after data is ingested into, or deleted from, the destination data source. The import run performs the following activities:

 Data archival: The system archives the source files in the ArchivedDataFiles folder that is
relative to the root folder. Each run finds its place in the folder that corresponds to its
completion date.

 Data cleanup: Cleanup occurs in the current source folder once the files are archived successfully. You can manage the retention policy for the archived files in the general Data Import settings available in App Studio. The default values are seven days for the files and 30 days for the runs. (A conceptual sketch of the archival and retention logic follows this list.)
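A conceptual sketch of the archival and retention behavior described above: move processed source files into an ArchivedDataFiles folder per completion date and prune archives older than the retention window (seven days for files per the defaults). Paths and folder layout details are assumptions, not the exact Pega implementation.

# Conceptual sketch only: archive source files by completion date, then prune
# archive folders older than the file retention window. Layout is assumed.
import os
import shutil
import time
from datetime import date

def archive_and_cleanup(source_dir: str, root_dir: str, file_retention_days: int = 7):
    archive_root = os.path.join(root_dir, "ArchivedDataFiles")
    archive_dir = os.path.join(archive_root, date.today().isoformat())
    os.makedirs(archive_dir, exist_ok=True)
    for name in os.listdir(source_dir):                  # archive, then clean source
        shutil.move(os.path.join(source_dir, name), os.path.join(archive_dir, name))
    cutoff = time.time() - file_retention_days * 86400
    for run_folder in os.listdir(archive_root):          # prune expired archives
        path = os.path.join(archive_root, run_folder)
        if os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)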

You manage the flow of data from one source to another through data sets and data flows. Data sets access different types of stored data and insert or update the data into various destinations. Data flows are scalable and resilient data pipelines that ingest, process, and move data from one or more sources to one or more destinations. In a data ingestion case, there are two types of data sets: a file data set accesses the data in the repository, and a database data set stores the data in corresponding tables in the database. Customer Profile Designer automatically configures all the data sets and data flows to process the data for you.
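To make this shape concrete, the sketch below mimics the file data set, data flow, and database data set roles with two tiny Python classes; it is an analogy, not the Pega implementation, and the class, table, and column names are hypothetical.

# Conceptual sketch only: a file "data set" feeding a database "data set"
# through a trivial pipeline step, as an analogy for the architecture above.
import csv
import sqlite3

class FileDataSet:
    def __init__(self, path: str):
        self.path = path
    def browse(self):                      # source side: yield records from a file
        with open(self.path, newline="", encoding="utf-8") as f:
            yield from csv.DictReader(f)

class DatabaseDataSet:
    def __init__(self, db_path: str, table: str):
        self.conn = sqlite3.connect(db_path)
        self.table = table
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {table} (customer_id TEXT PRIMARY KEY, name TEXT)"
        )
    def save(self, records):               # destination side: upsert records
        self.conn.executemany(
            f"INSERT OR REPLACE INTO {self.table} VALUES (?, ?)",
            [(r["customer_id"], r["name"]) for r in records],
        )
        self.conn.commit()

def run_data_flow(source: FileDataSet, destination: DatabaseDataSet):
    destination.save(list(source.browse()))    # the "pipeline" step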

In summary, Customer Decision Hub uses data jobs to process data.

First, the data for the file repository is structured externally to match the data model in Customer Decision Hub and then compressed. For file- and record-level validations, a manifest file is necessary; it holds additional information about the data file that it represents.
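A hypothetical manifest, shown as a sketch only; the field names are illustrative and the real schema comes from your implementation.

# Conceptual sketch only: an illustrative manifest describing a compressed data
# file with the details used for file- and record-level validation.
import json

manifest = {
    "fileName": "customer_20240101.csv.gz",   # compressed data file it describes
    "recordCount": 1250000,                   # expected number of records
    "checksum": "sha256:...",                 # integrity value (placeholder)
    "customAttributes": {"region": "NA"},     # extra attributes for custom validation
}
print(json.dumps(manifest, indent=2))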

The ETL team loads the files from an external environment to a file repository to which Customer
Decision Hub has access.

File detection or a given schedule can trigger the data jobs. When the file detection trigger is selected, the system requires a token file to initiate the data job. File listeners continuously poll the file repository for the token file, and when a listener detects it, the ingestion process begins. For a scheduled data job, the process begins when the defined time arrives.
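A conceptual sketch of the token-file pattern: a polling loop that mimics a file listener waiting for a token file before starting ingestion. The interval and token file name are assumptions; in Pega, the file listener handles this for you.

# Conceptual sketch only: wait for a token file to appear in the repository,
# then consume it so the run starts once. Names and interval are hypothetical.
import os
import time

def wait_for_token(repo_dir: str, token_name: str = "ingest.tok", interval_s: int = 30):
    token_path = os.path.join(repo_dir, token_name)
    while not os.path.exists(token_path):   # keep checking the repository
        time.sleep(interval_s)
    os.remove(token_path)                   # consume the token
    return True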

Case types streamline and automate the flow of files from the repository into their destinations, as
the process is visible and provides various error handling options. Customer Decision Hub includes a
preconfigured case type for you.

After the process begins, the system creates a data job work object. During this process, the files are
validated, staged, and moved to their final destination. Finally, the files are archived.
