
Profiling & Fine Tuning in Platform (IBPL & LS) – Assessment

Congratulations!

You completed this test on 04/04/2024 at 11:52


Score: 95.65%. You ranked in the top 2 of 57 users. The average score is 80.05%.

Result: Passed

Consider a scenario where only the previous month's capacity data (resource inputs
belonging to multiple measure groups) needs to be deleted before the data load during
every batch run, and the batch run is scheduled on the 15th of every month. (2.17%)

Which of the following are the best strategies that can be used to improve the
performance of this process? (Select 3)

Delete the entire measure group table and repopulate the records that do not belong to the past
month using parallel file upload
Use Truncate command filtered for the past month
Parallelize the process of deleting the records with another independent process in the
Integration layer
Use parallelization in a procedure for the IBPL commands in the Live Server

Use the regular scope to null the records

Use Delete Data commands filtered for the past month
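
For reference, a minimal sketch (not part of the original assessment) of what a Delete Data command filtered for the past month could look like, following the DELETE DATA ... WHERE pattern used later in this document. The model name [Example Capacity Inputs], the Time.[Month] grain, and the {{PreviousMonth}} parameter are hypothetical placeholders:

DELETE DATA FOR MODEL [Example Capacity Inputs] WHERE {Time.[Month].filter(#.Name in {[{{PreviousMonth}}]})};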


Consider a scenario where a query currently modeled as an active rule has the
following parameters (3.26%):
A. Rule: Measure A = Coalesce (Measure B, Measure C);
B. Run Time - 6min
C. Number of invocations - 31,000,000 (No external sorting)
D. Number of executions - 500
E. Measure B and Measure C are custom plugin outputs. During
every plugin run, between 400 and 500 records are updated.
F. Measure A belongs to Measure Group XYZ
G. Measure Group XYZ has 31,000,000 records. (Data extracted
using Gather Column Stats)

In the above scenario, which of the following steps would you take to improve the
performance of the rule?

Convert the active rule to a procedure based on the functionality

Remove coalesce and replace it with ‘if’ condition

Use staging measures/shadow measures to restrict the invoked records


Implement partitioning and convert the rule to a parallel procedure using the Partition ID
parameter
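
As context for the 'replace coalesce with an if condition' option above, a hedged sketch of the equivalent if-based form, written with the ~IsNull / if-then-else pattern that appears in later questions of this assessment (Measure A, Measure B, and Measure C are the placeholder names from the question itself):

Measure A = if(~IsNull(Measure B)) then Measure B else Measure C;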
Consider the following rule (4.35%):
scope: ([Item].[Item]*[Location].[Location]*[Supplier].[Supplier Location]*&AllProcLanes*&AllTransModes*&AllProducedItems*&CWVAndScenarios*&DailyPlanningHorizon*&PurchaseOrders);
Measure.[D WIP PROC Production Intermediate] = (Measure.[PO Line Commit Open Quantity TG] * Measure.[D Item Activity 123 Association]) * Measure.[D ProcLane Supplier Location Association] * Measure.[D ProcLane To Location Association];
end scope;

All four RHS measures belong to different measure groups.


Run time - 16.5286 min.
Invocations - 152058.
Executions - 152054.

Choose the best strategy that can be implemented to improve the performance of
the rule.

Use the following ‘If’ condition: Measure.[D WIP PROC Production Intermediate] = If((Measure.[D
ProcLane Supplier Location Association] * Measure.[D ProcLane To Location Association]) > 0)
then (Measure.[PO Line Commit Open Quantity TG] * Measure.[D Item Activity 123 Association]);
Consolidate the rule by moving some computations to another intermediate Association measure:
Measure.[D Proclane Supplier Location To Location Association] = Measure.[D ProcLane
Supplier Location Association] * Measure.[D ProcLane To Location Association]; and use the
intermediate Association measure in the computation: Measure.[D WIP PROC Production
Intermediate] = (Measure.[PO Line Commit Open Quantity TG] * Measure.[D Item Activity 123
Association]) * Measure.[D Proclane Supplier Location To Location Association];
Convert the rule to evaluate member scope

Implement Partitioning
Use the following ‘If’ condition instead: Measure.[D WIP PROC Production Intermediate] = If
(Measure.[D ProcLane Supplier Location Association] > 0 && Measure.[D ProcLane To Location
Association] > 0 && Measure.[D Item Activity 123 Association] > 0) then Measure.[PO Line
Commit Open Quantity TG];
Which of the following could be the reasons that delay the ‘Save’ activity during a
batch run process in a production environment? (Select 3) (2.17%)

Size of the config

The number of scenarios or versions maintained in the tenant

Size of the measure groups loaded on the graph cube server

Size of the dataset maintained in the disk

The number of active users using the tenant while running the ‘Save’

Select the correct statement with respect to active rules. (2.17%)

Active rules can be parameterized and they run for the incremental scope

Rules in a procedure that are independent can be triggered in parallel using parallelization
Active rules are triggered sequentially, and the user defines the dependencies for better
performance
Multiple procedures should be modeled as they cannot be parameterized

The triggering of active rules can be controlled

Select all the correct statements about LS and IBPL. (Select all that apply) (2.17%)

Active rules that are not required to be computed in real time and have no dependencies can be
converted to a reporting measure
Reporting measures are computed at report generation time

Rules computed in an active rule are faster than rules computed by reporting measures

Active rules can be parameterized


What are the benefits of modelling a rule as part of a procedure compared to an
active rule? (Select all that apply) (2.17%)

Can be triggered in parallel threads


Evaluation of member scopes is faster when triggered in a procedure compared to an active
rule
Can be parameterized and the same code can be reused
Queries are comparatively faster when modeled as a procedure than an active rule as the
dependency is already defined
Can be triggered only for the records that have undergone a change

Choose any two situations when modelling a query as an active rule would be more
beneficial than modeling a reporting measure (in terms of performance of the
model). (2.17%)

The rule to compute the measure has too many if conditions

The rule to compute the measure has to be triggered for the entire scope

The measure computed is used as an input for a python plugin

The measure is computed only once and does not undergo much changes on a daily basis

The measure is used in 20 different reports with 3-4 other reporting measures in them

Select the incorrect statements with respect to scenarios. (Select all that apply) (2.17%)

Save time is independent of the number of scenarios

Creating scoped scenarios improves the performance of the model

Restricting the number of scenarios maintained impacts the performance

Number of scenarios created by the user does not affect the performance of the model
Select the correct statements with respect to the usage of cartesian scope. (Select
all that apply) (2.17%)

The cartesian scope can be used to assign a constant and a variable value based on a measure
to the LHS measure
The cartesian scope works with leadoffset

The cartesian scopes must always be used with restricted scope

The cartesian scope can be used to open up new intersections

Select the statement that is true with respect to reporting measures. (2.17%)

A reporting measure gets triggered only for the scope of the reports

An indefinite number of reporting measures can be modeled

Reporting Measures are faster compared to active rules when triggered for the same scope

Reporting measures can be parameterized

What are the most appropriate factors that can be used to choose a partitioning
attribute? (Select any two) (2.17%)

Attribute which has the highest number of records

Attribute by which planners are segregated

Use the most frequently filtered attribute

Attribute which is updated on a regular basis

Attribute which gives an equal split of Measure groups that need to be partitioned

Attribute which has the most child attributes


What are some of the practices that should be followed to maintain the
performance integrity of the model? (Select all that apply) (2.17%)

Delete obsolete attribute members on a regular basis

Run background save to keep the memory consumption of the model under control

Delete obsolete transactional data during every batch run process

Effective segregation of measure groups to control the number of invocations in a computation

In a particular implementation, the transactional data load process using integration is
taking a long time. What steps can be implemented to improve the performance of the
process? (Choose any three) (2.17%)

Implement partitioning

Load the transactional data fact files in parallel with other fact files

Decrease MaxRowsToSortInMemory to avoid external sorting

Load only the updated records to the LS using the net change functionality of integration

Increase MaxRowsToSortInMemory to avoid external sorting

Select any two statements that are incorrect with respect to slowness caused due
to poor IBPL modeling. (3.26%)

Increased computation time due to large number of invoked records during rule computation

Increased demand plan time due to limited memory available in the graph cube server

Increased computation time due to inconsistent sort order across measure groups
Sequential computation of interdependent procedure rules in a particular process where
parallelization is possible
Increased rule computation time due to multiple interdependencies among active rules
Select the correct statements that represent the ways of removing data or
intersections. (Select all that apply) (2.17%)

The truncate command can be used to delete the records for a specific scope

Delete data commands can be used to delete attribute properties


Truncate and delete data commands can be triggered in parallel for different measure groups
using the parallel procedure
Delete data commands can be used for both measures and graphs

Choose some of the practices that can be implemented to improve the performance
of the model. (Select all that apply) (2.17%)

Using blockscope wherever it can be implemented

Implementing parallelization of procedure rules for independent rule blocks

Using active rules for all net change related IBPL queries

Having consistent sort order across the measure groups


Consider the following set of delete data commands (4.35%):

DELETE DATA FOR MODEL [006.001D DIST Network Inputs BODLane Consumption Production Associations] WHERE {Version.[Version Name].filter(#.Name in {[{{Version}}]})};
DELETE DATA FOR MODEL [006.002D DIST Network Inputs BODLane Details] WHERE {Version.[Version Name].filter(#.Name in {[{{Version}}]})};
DELETE DATA FOR MODEL [006.003D DIST Network Inputs BODLane Item Details] WHERE {Version.[Version Name].filter(#.Name in {[{{Version}}]})};
DELETE DATA FOR MODEL [006.004D DIST Network Inputs BODLane TransMode Association] WHERE {Version.[Version Name].filter(#.Name in {[{{Version}}]})};
DELETE DATA FOR MODEL [006.005D DIST Network Inputs Inbound And Outbound Shared Resource Associations] WHERE {Version.[Version Name].filter(#.Name in {[{{Version}}]})};
DELETE DATA FOR MODEL [006.006D DIST Network Inputs Shared Resource Association] WHERE {Version.[Version Name].filter(#.Name in {[{{Version}}]})};

Note:
A. These commands are triggered as part of a procedure and are triggered
sequentially.
B. The total run time is 60 min, with each command running for approximately
10 min.
C. All the measure groups used in the delete data commands are populated
via file load.

Choose the strategies from the following list that can be implemented to improve
the performance of the model. (Select all that apply)

Use Truncate command

Replace with the Null commands


Implement parallelization in the procedure

Null the records that have undergone a change and populate only those using integration
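
As a rough illustration of the 'null the records' idea referenced in the options above (a sketch only, not part of the original assessment): nulling a measure for a given version through a regular scope could look roughly like the following, where [D Example BODLane Measure] and the Item/Location grains are hypothetical placeholders and the version filter mirrors the commands in the question:

scope: ([Version].[Version Name].filter(#.Name in {[{{Version}}]}) * [Item].[Item] * [Location].[Location]);
Measure.[D Example BODLane Measure] = null;
end scope;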

Consider the following rule (3.26%):


scope: ([Version].[Version Name].filter(#.Name in {"CurrentWorkingView"})*&DailyPlanningHorizon*&DailyReplenItems*&DailyReplenLocations);
Measure.[Value of D Total Outflow] = if(~IsNull(Measure.[D ASP Item]) && ~IsNull(Measure.[D Total Demand At Material Node])) then coalesce(Measure.[D ASP Item],0)*coalesce(Measure.[D Total Demand At Material Node],0) else null;
end scope;

Note: Measure.[Value of D Total Outflow] is deleted during every batch run, and a 0 record is equivalent to a null record.

Run Time - 3.9836 min
Invocations - 34,269,582
Executions - 0

Choose any one rule that would give the most performance benefit.

Measure.[Value of D Total Outflow] = if(coalesce(Measure.[D ASP Item] * Measure.[D Total
Demand At Material Node],0) > 0) then Measure.[D ASP Item] * Measure.[D Total Demand At
Material Node] else null;
Measure.[Value of D Total Outflow] = Measure.[D ASP Item] * Measure.[D Total Demand At
Material Node];
Measure.[Value of D Total Outflow] = if(~IsNull(Measure.[D ASP Item]) && ~IsNull(Measure.[D
Total Demand At Material Node])) then Measure.[D ASP Item] * Measure.[D Total Demand At
Material Node] else null;
Measure.[Value of D Total Outflow] = if(coalesce(Measure.[D ASP Item],0) * coalesce(Measure.[D
Total Demand At Material Node],0) > 0) then Measure.[D ASP Item] * Measure.[D Total Demand
At Material Node] else null;
Consider a computation that was running for 30 min; using an association
measure reduced the computation time to 5 min. (5.43%)
Before
(Run time - 30 min)
scope : ( [Version].[Version Name].filter(#.Name in {"{{Version}}"}) *
Activity1.Activity1 * Activity2.Activity2 * Activity3.Activity3 *
Item.Item * Location.Location);
Measure.[D Min LotSize]= if(Measure.[D Item Location Association] >
0 && Measure.[D Item Activity3 Association] > 0 && Measure.[D Lane
Association] ==TRUE && Activity1.#.relatedmembers([To
Location]).element(0).Name== Location.#.Name) then Measure.[D
Lane Min Lot Size - For Solve];
Measure.[D Lead Time Intermediate]=if(Measure.[D Item Location
Association] > 0 && Measure.[D Item Activity3 Association] > 0 &&
Measure.[D Lane Association] ==TRUE &&
(Activity1.#.relatedmembers([To Location]).element(0).Name==
Location.#.Name)) then Measure.[D Lane Lead Time];
End scope;

After
(Run Time - 5min)
scope : ( [Version].[Version Name].filter(#.Name in {"{{Version}}"}) *
&AllBODLanes* &AllTransModes * Activity3.Activity3 * Item.Item *
Location.Location);
Measure.[D Material Production Association]= if( Measure.[D Item
Location Association]> 0 && Measure.[D Item Activity3 Association] >
0 && Measure.[D Lane Association] ==TRUE &&
(Activity1.#.relatedmembers([To Location]).element(0).Name==
Location.#.Name)) then 1;
Measure.[D Min LotSize]= if(Measure.[D Material Production
Association] > 0) then Measure.[D Lane Min Lot Size - For Solve];
Measure.[D Lead Time Intermediate]= if(Measure.[D Material
Production Association] > 0) then Measure.[D Lane Lead Time];
Which of the following would result in improved performance of the computation?
(Select all that apply)

Usage of Association measures reduces the redundant computation by invoking only for the
updated records
Redundant computations are computed only once and accessed when needed

Computations are triggered in parallel threads for the valid intersections

Computations will be triggered only for the valid non null records

Consider a scenario where forecast data received from a customer has 40,000,000
records, of which more than 40% have 0 units as the forecast. As a
result of the forecast data load, certain active rules which use the forecast
measure are running for a longer time (more than 5 to 6 minutes). Which of the
following steps results in the best performance? (2.17%)

Implement partitioning

Using a check in the integration to validate the records before they are loaded to the LS

Parallelize the load of forecast data with other fact files

Convert the active rules to procedures and trigger them when needed

Move the active rule computation to SSIS

What are some of the best practices while setting the sort order for a particular
measure group? (Select all that apply) (3.26%)

(Consider that the measures of that measure group are not used in any UI and are not linked
to any measure used in the UIs)

Consistent sort order across LHS and RHS Measure groups

Should be based on the filtering order that a planner follows


Grains common across measure groups should be sorted in the same order and the extra grains
should be placed at the end
Item grain should always be given the highest priority
Select the correct statements with respect to IBPL & LS. (Select all that apply) (2.17%)

Consistent sort order across measure groups does not affect the UI load time
Parallelization improves the computation time for a particular procedure by using multiple
threads to compute in parallel
Increasing MaxRowsToSortInMemory in the tenant settings avoids external sorting and decreases
the computation time
Partitioning when implemented is independent of the number of records in the measure group
table

Select the correct statements with respect to IBPL and LS. (Select all that apply) (2.17%)

Reports that fetch data from multiple partitions are much faster as they are triggered in parallel

Fact file uploads are faster for a partition-enabled measure group

Plugins run faster for partitioned input and output measure groups

Writes into one partition conflict with reads on another partition


The following query is running for two minutes (3.26%):

scope: (&CWV*[Item].[Item]*&CurrentAndNext12Months.relatedmembers([Day])*[Sales Domain].[Ship To]);
Measure.[DP Final Forecast at Item ShipTo Day] = if(isnull(Measure.[Forecast at Item ShipTo Day])) then Measure.[Planner Allocation Adjustment] else sum(Measure.[Forecast at Item ShipTo Day], Measure.[Planner Allocation Adjustment]);
end scope;
Run time - 1.9416min
Invocations - 11,969,069
Executions - 8,535,316

Measure.[DP Final Forecast at Item ShipTo Day] is not used for dashboarding.

Considering the above scenario, which of the following will be the most effective in
improving the performance of the query?

Option A

Option B

Option C

Option D
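
For context only: a hedged sketch of the null-handling rewrite pattern used elsewhere in this assessment (replacing the isnull branch with a single sum call), applied to this rule. It assumes sum() tolerates null inputs the way the other sum-based rewrites in this document imply, and it is purely illustrative; it does not necessarily correspond to any of options A–D:

Measure.[DP Final Forecast at Item ShipTo Day] = sum(Measure.[Forecast at Item ShipTo Day], Measure.[Planner Allocation Adjustment]);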
What steps can be implemented to improve the performance of the
following rule (3.26%):

SPREAD scope: (&NettingMonthPlanningHorizon_Daily * [Sales Domain].[Customer Group] * &ForecastDemandIDs * &AllForecastDemandTypes * [Item].[Item] * [Location].[Location] * &CWV);
Measure.[D Demand Quantity] = Measure.[Netted Demand Quantity];
end scope;
Invocations : 9,000,000
Executions : 9,000,000
Run time - 15min

Convert it to a block scope

Convert the computation to a python plugin

Implement partitioning

Check and set up an assortment basis measure if not present

Move the computation to SSIS

Select statements that are applicable to the usage of release memory. (Select all
that apply) (2.17%)

It frees up the memory by releasing data from the hard disk

It frees up the memory by releasing data from graph cuber server memory

It can be used during save to handle high memory consumption

It can be used during background save to handle high memory consumption


Consider the following rule which is modelled as an active rule (3.26%):
scope: (([Dependent Demand Graph],&CWVAndScenarios, from
[Activity1].[Activity1]*[Activity2].[Activity2]*[Activity3].[Activity3]*
[Documents].[OrderlineID]*[Item].[Item]*[Location].[Location]*[Time].
[Day],to [Demand Type].[Demand Type]*[Demand].[DemandID]*[Item].
[Item]*[Location].[Location]*[Sales Domain].[Customer Group]*[Time].
[Day]));
Edge.[GHW Planned Supply] = Edge.[002.002D Solver Outputs
Material Consumption Pegging].[D Material Consumption Pegged
Quantity]@(To.[Sales Domain].[Customer Group], To.[Demand].
[DemandID], To.[Demand Type].[Demand Type], To.[Item].[Item], To.
[Location].[Location], To.[Time].[Day], From.[Time].[Day], From.
[Activity1].[Activity1], From.[Activity2].[Activity2], From.[Activity3].
[Activity3], From.[Documents].[OrderlineID], From.[Item].[Item], From.
[Location].[Location]);
end scope;

Run Time - 2 min
Invocations - 4,106,052
Executions - 9,363

Choose any one strategy that can be implemented to deliver optimum performance.

Convert to a procedure

Use appropriate namedsets

Use parallel scope statements

Use association measure

Convert to a reporting measure


Which of the following statements reflect the best usage of QPA for improving the
performance of the model? (2.17%)

Analyze the process computation time based on the memory consumption on the graph cube
server
Analyze the solver performance based on the number of invocations and executions

Analyze the performance of the model based on the thread ID and request ID

Analyze the IBPL query computation time based on the number of invoked and updated records

Consider the situation where the BOStoInventory plugin is running for 1 hour. (2.17%)
Note: The plugin is triggered as a part of a procedure.
Choose three strategies that can be implemented to improve the
performance of the model.

Implement Partitioning and parameterize the procedure based on the partition id

Implement Slicing

Set the MaxConcurrency Argument to 8

Use appropriate namedsets in the exec plugin command to restrict the scope of the plugin run
Consider the following set of rules which are currently modeled as active rules (3.26%):

Note:
1. All the LHS Measures are computed after the solver run, which runs for the
entire scope, and they do not need interactive edits.
2. The LHS Measures have around 70,000,000 records in their respective
Measure Groups.
3. These measures are used for reporting as well as future
computations.

Choose strategies that can be implemented that would give the most
performance benefit. (Select any 3)

Convert them to a procedure

Implement Partitioning

Convert them to Reporting Measures

Implement Parallelization of Procedure

Use Association Measures


Consider a scenario where the Item Dimension Data load takes 20 minutes through
integration and is being loaded in parallel with other Dimension files that take less
than a minute. Which of the following steps can be implemented to improve the
overall performance of the process? (2.17%)

Load them in parallel with other fact files

Parallelize the Dimensional File upload process with Network Generation

Load only the records which have undergone a change using the Net Change file upload

Implement Partitioning on the Item Dimension File


Consider the following rule (3.26%):
scope: (&CBPScenarios * [Activity1].[Activity1] * [Activity2].[Activity2] *
[Item].[Item] * [Location].[Location] * [Time].[Week]);
Measure.[MPS CBP Exceptions in CBP] = if(coalesce(measure.[Material
Consumption Quantity Plan Output (MPS)], 0) < coalesce(measure.[D
Material Consumption Quantity Plan Output],0)) then 1;
end scope;

Run Time - 3 min
Invocations: 11,732,014
Executions: 2,875,965

From the options given below, choose those that will improve the performance of the
model. (Select 2)

Implement Partitioning and parallelize the rule based on the Partition ID


Use the following rule : Measure.[MPS CBP Exceptions in CBP] = if(measure.[Material
Consumption Quantity Plan Output (MPS)] < measure.[D Material Consumption Quantity Plan
Output]) then 1;
Use appropriate namedsets in the scope statements
Use the following rule : Measure.[MPS CBP Exceptions in CBP] = if(sum (measure.[D Material
Consumption Quantity Plan Output], -1 * measure.[Material Consumption Quantity Plan Output
(MPS)] )>0) then 1;
Use Association measure to consolidate the ‘if’ condition

Use a cartesian scope


Consider the following rule (3.26%):
scope:([Item].[Item]*[Location].
[Location]*&CwvAndScenarios*&DailyPlanningHorizon);
Measure.[D SS Violation Quantity] = if (coalesce(Measure.[D Ending On
Hand],0) <= Coalesce(Measure.[D Target Inventory],0)) then
(Coalesce(Measure.[D Target Inventory],0) - Measure.[D Ending On
Hand]);
end scope;

Run Time - 6 min
Invocations - 14,930,027
Executions - 14,930,027

Choose any one rule that would deliver optimum performance without affecting the
logic.

Measure.[D SS Violation Quantity] = if (Measure.[D Ending On Hand] <= Coalesce(Measure.[D
Target Inventory]) then (Measure.[D Target Inventory] - Measure.[D Ending On Hand]);
Measure.[D SS Violation Quantity] = if ( sum (Measure.[D Target Inventory], -1 * Measure.[D
Ending On Hand] )>=0) then sum (Measure.[D Target Inventory], -1 * Measure.[D Ending On
Hand] );
Measure.[D SS Violation Quantity] = sum (Measure.[D Ending On Hand], -1 * Measure.[D Target
Inventory]);
Measure.[D SS Violation Quantity] = if ( (Measure.[D Target Inventory] - Measure.[D Ending On
Hand] )>=0) then Measure.[D Target Inventory] - Measure.[D Ending On Hand];
Which of the following parameters can be used to analyze the performance of a
particular computation? (Select all that apply) (2.17%)

Run Time

Number of Invocations and Executions

Thread ID

User ID

Memory available in the Graph Cube server

Request ID
Consider a scenario where the following rule is running for 55 min (3.26%):

EvaluateMember scope:([Supplier].[Supplier
Location]*&CWVAndScenarios*[Activity1].[From Location]*[Location].
[Location]*[Time].[Day]*[Item].[Item]);
Measure.[MRP Quantity at From and Supplier Location] =
if([Supplier].#.Name == [Activity1].#.Name) then Measure.[MRP
Quantity];
end scope;

Following are some of the strategies that can be used to improve the
performance of the model.
A. Implement Partitioning if the number of invocations is still greater
than 30 M after validation.
B. Run Gather column stats for the Measure Group that the Measure.
[MRP Quantity at From and Supplier Location] belongs to.
C. Validate from a functional perspective that the records present in the
Measure Group that the Measure.[MRP Quantity at From and Supplier
Location] belongs to are valid intersections.
D. Use association measure instead of evaluate member scope.
E. Use appropriate named-sets.
F. Increase MaxRowsToSortInMemory to avoid external sorting. (
Currently 50,000,000)

Please select which of the following strategies can be implemented to
improve the performance of the model.

A, B, C

A, B, C, D, E and F

A, B
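
As background for strategy D, a rough sketch (not from the original assessment) of how the member-name comparison could be folded into an association measure, modeled on the association-measure before/after example earlier in this document. The measure name [D From Supplier Location Association] is a hypothetical placeholder, and whether the #.Name comparison evaluates correctly in a regular scope here would still need validation:

scope: ([Supplier].[Supplier Location]*&CWVAndScenarios*[Activity1].[From Location]*[Location].[Location]*[Time].[Day]*[Item].[Item]);
Measure.[D From Supplier Location Association] = if([Supplier].#.Name == [Activity1].#.Name) then 1;
Measure.[MRP Quantity at From and Supplier Location] = if(Measure.[D From Supplier Location Association] > 0) then Measure.[MRP Quantity];
end scope;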

An implementation has a monthly batch run on the second Sunday of every month
that runs for 10 hours. After the solver run, the memory consumption is
approximately 95%. In order to free the memory for further computations, a ‘save’
activity is run for an hour. (2.17%)

Which of the following processes can be implemented to improve the performance of
the save activity?

Use only Background save after the solver run

Use only Release Memory after the solver run

Use Release Memory before save after the solver run

Use Background save along with Release Memory after the solver run
