Profiling & Fine Tuning in Platform (IBPL & LS) - Assessment
Assessment
Congratulations!
Passed
Consider a scenario where only the previous month's capacity data (resource inputs belonging to multiple measure groups) needs to be deleted during every batch run before the data load, and the batch run is scheduled on the 15th of every month. (2.17%)
Which of the following are the best strategies that can be used to improve the
performance of this process? (Select 3)
Delete the entire measure group table and populate the records that do not belong to the past month using parallel file upload
Use the Truncate command filtered for the past month
Parallelize the process of deleting the records with another independent process in the
Integration layer
Use parallelization in a procedure for the IBPL commands in the Live Server
In the above scenario, which of the following steps would you take to improve the performance of the rule? Choose the best strategy that can be implemented.
Use the following ‘If’ condition: Measure.[D WIP PROC Production Intermediate] = if ((Measure.[D ProcLane Supplier Location Association] * Measure.[D ProcLane To Location Association]) > 0) then (Measure.[PO Line Commit Open Quantity TG] * Measure.[D Item Activity 123 Association]);
Consolidate the rule by moving some computations into an intermediate Association measure: Measure.[D Proclane Supplier Location To Location Association] = Measure.[D ProcLane Supplier Location Association] * Measure.[D ProcLane To Location Association]; and use the intermediate Association measure in the computation: Measure.[D WIP PROC Production Intermediate] = (Measure.[PO Line Commit Open Quantity TG] * Measure.[D Item Activity 123 Association]) * Measure.[D Proclane Supplier Location To Location Association];
Convert the rule to evaluate member scope
Implement Partitioning
Use the following ‘If’ condition instead: Measure.[D WIP PROC Production Intermediate] = if (Measure.[D ProcLane Supplier Location Association] > 0 && Measure.[D ProcLane To Location Association] > 0 && Measure.[D Item Activity 123 Association] > 0) then Measure.[PO Line Commit Open Quantity TG];
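The consolidation option above can be laid out more readably as a single IBPL fragment (a non-authoritative sketch built only from the measure names already shown in this question; the surrounding scope statement is omitted and would need to be supplied):

```
Measure.[D Proclane Supplier Location To Location Association] =
    Measure.[D ProcLane Supplier Location Association] *
    Measure.[D ProcLane To Location Association];

Measure.[D WIP PROC Production Intermediate] =
    (Measure.[PO Line Commit Open Quantity TG] *
     Measure.[D Item Activity 123 Association]) *
    Measure.[D Proclane Supplier Location To Location Association];
```

The shared product of the two ProcLane associations is computed once into the intermediate Association measure and then reused, rather than being re-evaluated inside the larger rule.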
Which of the following could be the reasons that delay the ‘Save’ activity during a batch run process in a production environment? (Select 3) (2.17%)
The number of active users on the tenant while the ‘Save’ is running
Active rules can be parameterized and they run for the incremental scope
Rules in a procedure that are independent can be triggered in parallel using parallelization
Active rules are triggered sequentially, and the user defines the dependencies for better
performance
Multiple procedures should be modeled as they cannot be parameterized
Select all the correct statements about LS and IBPL. (Select all that apply) (2.17%)
Active rules that are not required to be computed in real time and have no dependencies can be converted to reporting measures
Reporting measures are computed at report generation time
Rules computed in an active rule are faster than rules computed by reporting measures
Choose any two situations in which modeling a query as an active rule would be more beneficial than modeling it as a reporting measure (in terms of the performance of the model). (2.17%)
The rule to compute the measure has to be triggered for the entire scope
The measure is computed only once and does not undergo much changes on a daily basis
The measure is used in 20 different reports with 3-4 other reporting measures in them
Select the incorrect statements with respect to scenarios. (Select all that apply) (2.17%)
Number of scenarios created by the user does not affect the performance of the model
Select the correct statements with respect to the usage of Cartesian scope. (Select all that apply) (2.17%)
The Cartesian scope can be used to assign a constant and a variable value based on a measure to the LHS measure
The Cartesian scope works with leadoffset
Select the statement that is true with respect to reporting measures. (2.17%)
Reporting measure gets triggered only for the scope of the reports
Reporting Measures are faster compared to active rules when triggered for the same scope
What are the most appropriate factors that can be used to choose a partitioning attribute? (Select any two) (2.17%)
Attribute which gives an equal split of Measure groups that need to be partitioned
Run background save to keep the memory consumption of the model under control
In a particular implementation, the transaction data load process using integration is taking a long time. What steps can be implemented to improve the performance of the process? (Choose any three) (2.17%)
Implement partitioning
Load the transactional data fact files in parallel with other fact files
Load only the updated records to the LS using the net change functionality of integration
Select any two statements that are incorrect with respect to slowness caused due to poor IBPL modeling. (3.26%)
Increased computation time due to large number of invoked records during rule computation
Increased demand plan time due to limited memory available in the graph cube server
Increased computation time due to inconsistent sort order across measure groups
Sequential computation of interdependent procedure rules in a particular process where
parallelization is possible
Increased rule computation time due to multiple interdependencies among active rules
Select the correct statements that represent the ways of removing data or intersections. (Select all that apply) (2.17%)
The truncate command can be used to delete the records for a specific scope
Choose some of the practices that can be implemented to improve the performance of the model. (Select all that apply) (2.17%)
Using active rules for all net change related IBPL queries
Note :
A. These commands are triggered as a part of procedure and are triggered
sequentially.
B. The total run time is 60 min, with each command running for approximately 10 min.
C. All the measure groups used in the delete data command are populated
via file load.
Choose the strategies from the following list that can be implemented to improve
the performance of the model. (Select all that apply)
Null the records that have undergone a change and populate only those using integration
Choose any one rule that would give the most performance benefit.
After (Run Time: 5 min)
scope: ([Version].[Version Name].filter(#.Name in {"{{Version}}"}) * &AllBODLanes * &AllTransModes * Activity3.Activity3 * Item.Item * Location.Location);
Measure.[D Material Production Association] = if (Measure.[D Item Location Association] > 0 && Measure.[D Item Activity3 Association] > 0 && Measure.[D Lane Association] == TRUE && (Activity1.#.relatedmembers([To Location]).element(0).Name == Location.#.Name)) then 1;
Measure.[D Min LotSize] = if (Measure.[D Material Production Association] > 0) then Measure.[D Lane Min Lot Size - For Solve];
Measure.[D Lead Time Intermediate] = if (Measure.[D Material Production Association] > 0) then Measure.[D Lane Lead Time];
end scope;
Which of the following would result in improved performance of the computation?
(Select all that apply)
Usage of Association measures reduces the redundant computation by invoking only for the
updated records
Redundant computations are computed only once and accessed when needed
Computations will be triggered only for the valid non null records
Consider a scenario where forecast data received from a customer has 40,000,000 records, of which more than 40% have 0 units as the forecast. As a result of the forecast data load, certain active rules that use the forecast measure are running for a long time (more than 5 to 6 minutes). Which of the following steps results in the best performance? (2.17%)
Implement partitioning
Using a check in the integration to validate the records before they are loaded to the LS
Convert the active rules to procedures and trigger them when needed
What are some of the best practices while segregating the sort order for a particular measure group? (Select all that apply) (3.26%)
(Consider that the measures of that measure group are not used in any UI and are not linked to any measure used in the UIs)
Consistent sort order across measure groups does not affect the UI load time
Parallelization improves the computation time for a particular procedure by using multiple
threads to compute in parallel
Increasing MaxRowsToSortInMemory in the tenant settings avoids external sorting and decreases the computation time
Partitioning when implemented is independent of the number of records in the measure group
table
Select the correct statements with respect to IBPL and LS. (Select all that apply) (2.17%)
Reports that fetch data from multiple partitions are much faster as they are triggered in parallel
Plugins run faster for partitioned input and output measure groups
scope: (&CWV * [Item].[Item] * &CurrentAndNext12Months.relatedmembers([Day]) * [Sales Domain].[Ship To]);
Measure.[DP Final Forecast at Item ShipTo Day] = if (isnull(Measure.[Forecast at Item ShipTo Day])) then Measure.[Planner Allocation Adjustment] else sum(Measure.[Forecast at Item ShipTo Day], Measure.[Planner Allocation Adjustment]);
end scope;
Run time - 1.9416min
Invocations - 11,969,069
Executions - 8,535,316
Considering the above scenario, which of the following will be the most effective in
improving the performance of the query?
Option A
Option B
Option C
Option D
What steps can be implemented to improve the performance of the following rule: (3.26%)
Implement partitioning
Select statements that are applicable to the usage of release memory. (Select all that apply) (2.17%)
It frees up memory by releasing data from the graph cube server memory
Choose any one strategy that can be implemented to deliver optimum performance.
Convert to a procedure
Analyze the process computation time based on the memory consumption on the graph cube
server
Analyze the solver performance based on the number of invocations and executions
Analyze the performance of the model based on the thread ID and request ID
Analyze the IBPL query computation time based on the number of invoked and updated records
Consider the situation where the BOStoInventory plugin is running for 1 hour. (2.17%)
Note: The plugin is triggered as a part of the procedure
Choose three strategies that can be implemented to improve the
performance of the model.
Implement Slicing
Use appropriate namesets in the exec plugin command to restrict the scope of the plugin run
Consider the following set of rules which are currently modeled as an active rule: (3.26%)
Note:
1. All the LHS measures are computed after the solver run, which runs for the entire scope, and they do not need interactive edits
2. The LHS measures have around 70,000,000 records in their respective Measure Groups.
3. These measures are used for reporting as well as future
computations.
Choose strategies that can be implemented that would give the most performance benefit. (Select any 3)
Implement Partitioning
Load only the records which have undergone a change using the Net Change file upload
Choose from the options given below that will improve the performance of the
model. (Select 2)
Choose any one rule that would deliver optimum performance without affecting the
logic.
Run Time
Thread ID
User ID
Request ID
Consider a scenario where the following rule is running for 55 min: (3.26%)
EvaluateMember scope: ([Supplier].[Supplier Location] * &CWVAndScenarios * [Activity1].[From Location] * [Location].[Location] * [Time].[Day] * [Item].[Item]);
Measure.[MRP Quantity at From and Supplier Location] = if ([Supplier].#.Name == [Activity1].#.Name) then Measure.[MRP Quantity];
end scope;
Following are some of the strategies that can be used to improve the
performance of the model.
A. Implement Partitioning if the number of invocations is still greater than 30M after validation.
B. Run Gather Column Stats for the Measure Group that Measure.[MRP Quantity at From and Supplier Location] belongs to.
C. Validate, from a functional perspective, that the records present in the Measure Group that Measure.[MRP Quantity at From and Supplier Location] belongs to are valid intersections.
D. Use an association measure instead of the EvaluateMember scope.
E. Use appropriate named sets.
F. Increase MaxRowsToSortInMemory to avoid external sorting (currently 50,000,000).
A, B, C
A, B, C, D, E and F
A, B
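Strategy D can be sketched as follows (a hedged illustration only: Measure.[D Supplier From Location Association] is a hypothetical association measure, assumed to be pre-populated wherever the supplier-location and from-location member names match; it replaces the per-member name comparison done by the EvaluateMember scope):

```
scope: ([Supplier].[Supplier Location] * &CWVAndScenarios *
        [Activity1].[From Location] * [Location].[Location] *
        [Time].[Day] * [Item].[Item]);
Measure.[MRP Quantity at From and Supplier Location] =
    if (Measure.[D Supplier From Location Association] > 0)
    then Measure.[MRP Quantity];
end scope;
```

Because the association measure is materialized once and stored at valid intersections only, the rule is invoked only where the association exists instead of evaluating the name comparison member by member.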
Use Background save along with Release Memory after the solver run