The Aggregate Storage Option (ASO) for Essbase has come a long way since its introduction in version 7; so much so, in fact, that ASO should now be considered before BSO as the storage option for Essbase applications. ASO is also now available for Planning applications. However, ASO does present performance challenges: if your cube has dynamic dimensions, KPI formulas, YTD, dynamic currency conversion, or growth formulas, reporting times can become unacceptable. This is the first in a series of posts containing tips to help you get the best performance from ASO cubes.

 

Use MDX Formulae

Keeping derived measures as MDX member formulae means only raw data needs to be stored in the cube, which greatly reduces cube size and data load times. You can then afford to build plenty of aggregations (materialized aggregate views) on the stored data to keep query performance up.
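
As a minimal sketch, a ratio KPI can be kept out of the data load altogether and defined as an MDX formula on a dynamic member. The member names ([Margin %], [Gross Margin], [Sales]) are hypothetical and only illustrate the shape of such a formula:

/* Formula attached to the dynamic measure [Margin %] (hypothetical names) */
CASE
    WHEN IsEmpty([Measures].[Sales]) OR [Measures].[Sales] = 0
        THEN MISSING
    ELSE ([Measures].[Gross Margin] / [Measures].[Sales]) * 100
END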

 

Use ASO custom calculations and allocations

If you’re using EPM 11.1.2.0 or later, you can take advantage of ASO custom calculations and allocations to solve dynamic retrieval performance problems; for example, using a custom calculation to store YTD values makes retrieval far faster than calculating them dynamically in MDX.

Be warned, though, that this can considerably increase the cube’s size; with potential member-set combinations of up to 2^64, it is better suited to calculating a few complex measures than to populating the entire YTD or performing full currency conversion.
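
As a rough sketch of the idea, a custom calculation script is a set of MDX-style assignments whose results are written back into the cube as a new data slice; it is launched with the MaxL execute calculation statement for a region (POV) you define. The member names below ([Amount Reporting], [Amount Local], [FX Rate]) are hypothetical:

/* Hypothetical sketch: store a converted amount for the cells in the
   calculation's region instead of converting dynamically at query time */
([Measures].[Amount Reporting]) :=
    ([Measures].[Amount Local]) * ([Measures].[FX Rate]);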

Avoid UDAs – consider alternate alias tables instead

Referencing UDAs in MDX is relatively slow, so avoid them where possible. If you need to categorize members, use an alternate alias table instead; prefixing each alias with a short code keeps the aliases unique while still giving you something to test against. For example, this use of UDAs…

 

Case
    When IsUda([Measures].CurrentMember, "AVG") Then
        … calculation for normal YTD divided by the number of months …
    When IsUda([Measures].CurrentMember, "LAST") Then
        … return only the month value …
    Else
        … calculation for normal YTD …
End

…can be performed using an alternate alias table:

 

Case Substring(Measures.CurrentMember.MeasureTypes, 1, 3)
    When "TBA" Then
        … calculation for normal YTD divided by the number of months …
    When "TBS" Then
        … return only the month value …
    Else
        … calculation for normal YTD …
End

 

Adjust Essbase.cfg Settings

CALCPARALLEL affects aggregation and report performance. Increase it to use more processors; experiment with the best setting for your server and application, but don’t be afraid to go beyond the number of available processor cores. Other settings to check (see the sample essbase.cfg fragment after this list) are:

 

TARGETASOOPT TARGETAPP TRUE – for transparent partition query optimization, where TARGETAPP is the name of the partition target application (see Partitioning below)

AUTOMERGE – for data slice optimization (see Slices below)
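
As an illustration only, the corresponding essbase.cfg entries might look like the fragment below. MyAsoApp and the thread count of 4 are placeholders to tune for your own environment, TARGETAPP stands for the partition target application, and AUTOMERGE NEVER is the option discussed in the Slices section:

CALCPARALLEL MyAsoApp 4
TARGETASOOPT TARGETAPP TRUE
AUTOMERGE NEVER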

 

Partitioning

If you’re splitting ASO databases, create a simple, empty BSO outline (with no data) as the partition target. It MUST have exactly the same member count as the source databases. Then use the TARGETASOOPT setting in essbase.cfg so that compacted queries are sent across the partition.

You could consider splitting ASO applications by year, or perhaps by type of data, for example balancing accounts in one database and summing accounts in another. This takes the Case statement out of the member formulas and lets the partition do the work instead.
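
As a very rough sketch only (check the create partition grammar in the MaxL reference for your release), a by-year split might be wired together with transparent partitions along these lines; the applications, databases, host, credentials and area specifications are all placeholders:

/* Hypothetical: expose the FY15 ASO source database through the empty
   BSO reporting target described above */
create or replace transparent partition AsoFY15.Sales
    area '@IDESCENDANTS("FY15")'
    to RptTgt.Sales at "essbase-server" as admin identified by 'password'
    area '@IDESCENDANTS("FY15")';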

 

Slices

Most ASO cubes are used for reporting only, but with Planning now able to use ASO, input cubes are also possible. In Essbase 11.1.2.3.500, the AUTOMERGE setting was added to help manage data slice growth.

Essbase creates slices whenever you load data incrementally, submit data, run a logical clear, or perform custom calculations or allocations. Think of these as mini ASO cubes that are added together to give the final number.

Unless the data is large, Essbase automatically merges these mini cubes into a single incremental data slice. As this incremental slice grows, though, there can be lags while new data is merged into it when users submit data or run business rules.

There is also a data issue: if your business rule calculates results from the inputs, the auto-merge may replace submitted zeros with #Missing. A subsequent calculation then skips those #Missing cells and never removes the previously calculated values, so data such as converted currency amounts is left behind instead of being cleared.

To solve these problems, setting AUTOMERGE NEVER tells Essbase to create a new slice for every new input instead of merging automatically.

However, this creates a new problem: as the number of incremental slices grows, report performance slows. Above 1,000 slices the slowdown becomes noticeable, and above 4,000 performance will probably be unacceptable. Merging the incremental slices into a single slice on a schedule takes only a few seconds, though, so scheduling this half-hourly is recommended.

An out-of-hours full merge into the main database slice takes much longer but gives the best query performance, so it is also recommended.
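
Both merges can be run from MaxL and scheduled; MyApp.MyDb is a placeholder, and the remove_zero_cells option is worth considering if zeros are being submitted to clear data:

/* Half-hourly: fold the new slices into the single incremental data slice */
alter database MyApp.MyDb merge incremental data;

/* Out of hours: merge everything into the main database slice */
alter database MyApp.MyDb merge all data remove_zero_cells;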