
Tuesday, December 31, 2013

Optimizing Report Scripts in ASO Applications

Report scripts are one of the most powerful reporting options in Essbase. They are easy to write and give us the flexibility to select exactly the members and format we need.

Performance always plays a key role here. How do we get the best performance from a report script? As in many other areas of Essbase, it is achievable by tweaking and testing various settings.

Let's look at a report script from a different angle.

Based on my understanding, the Essbase engine runs a report in two phases: first, fetching the data and preparing it in the format we need; second, writing the report to an output file. There are different options to improve both phases.

Let's start with the second phase (this is where I get the most performance), where the data is written to an output file.

I always get a dramatic performance improvement when I arrange the dimensions in the ROW command so that the dimension with the fewest members in the report comes leftmost and the dimension with the most members comes rightmost (ascending order).

**** Here is the confusing part: I said the number of members in your report script, not the number of members in your outline. There is a difference between the two.

For example, I have an outline with 5 dimensions:

Account - with 10000 members
Entity - with 5000 members 
Product - with 500 members
Market - with 50 members
Time - with 12 months 

I want to export a report with

100 Accounts
5000 Entities
500 Products 
50 Markets
1 Month

Then, my report looks like this:

<ROW( "Time", "Market","Accounts" ,"Products","Entities")

As I said earlier, I arranged the dimensions in ascending order based on the number of members in the report I am exporting, not the number of members in the outline. This trick drastically improves the second phase of the report, i.e., writing the report to an output file. Some of my reports improved from 100 KB/sec to 5 MB/sec.
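To make this concrete, here is a minimal sketch of a full report script built around that ROW ordering (member names like "Acct_001" are hypothetical placeholders for the 100 accounts you would actually select):

{ SUPMISSINGROWS }
<ROW ("Time", "Market", "Account", "Product", "Entity")
"Jan"
<LEAVES "Market"
"Acct_001" "Acct_002" "Acct_003"
<LEAVES "Product"
<LEAVES "Entity"
!

The point is only the ordering inside <ROW: the single month first, the 5000 entities last.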


Coming to the first phase, cache settings at the application and database level play the main role.
The "pending cache setting" at the application level confused me a lot. As per the Database Administrator's Guide:

"If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be
increased by the square root of the factor. For example, if the input-level data size is 3 GB
(2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and
set the aggregate cache size to the result: 39.04 MB."

But that never worked for me on cubes with more than 5 GB of input-level data. I got good performance when I set the cache much higher than the recommended formula. For example, I got optimal report performance on an ASO cube when I set the pending cache to 512 MB for 10 GB of input-level data, and to 1024 MB for 40 GB. I do not have a specific recommendation for the pending cache; it is better to test with various values and find the sweet spot.
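For reference, the aggregate storage cache can be set from MaxL as well as from EAS. A sketch, assuming a hypothetical application named 'MyASOApp' (the new value is the "pending" setting and takes effect when the application restarts):

alter application 'MyASOApp' set cache_size 512MB;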

"Buffer size" and "Sort buffer size" also helps to speed up the report scripts. Play around with different settings to find out best for you.

Building aggregations on the cube will speed up the report script if you are pulling data at parent levels.
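For example, a default aggregation can be built from MaxL (a sketch; the database name is hypothetical and the stopping size is something you would tune):

execute aggregate process on database 'MyASOApp'.'MyDB' stopping when total_size exceeds 1.5;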

You can also use <LEAVES instead of <DIMBOTTOM, and try to avoid dynamic members in your member selections.
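For example, to select all level-0 members of the Product dimension on an ASO cube, prefer:

<LEAVES "Product"

over <DIMBOTTOM "Product", since <LEAVES is the selection command designed for aggregate storage outlines.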

Monday, June 24, 2013

Data Copy in ASO Applications using calculation script

Data copy in a BSO application (e.g., copying data from one version to another) is straightforward: you write a calculation script and use the DATACOPY command. Easy, peasy, Japaneesy!!!
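For example, a BSO calculation script for a version-to-version copy can be as short as this (the version member names are illustrative):

/* Copy all data from the "Working" version to the "Final" version */
DATACOPY "Working" TO "Final";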

But ASO is a different story.
We usually export the required data from the ASO application and import it back with a rule file, changing the member names in the rule file.

This approach is slow and needs extra automation scripts.

But things changed after Oracle introduced calculation scripts for ASO applications. We can now do things like simple math, minor allocations, and data copies using ASO calculation scripts.

Even though it is not as straightforward as a BSO calculation script, you can achieve it with a little MDX scripting knowledge and patience (patience because it takes some time to write the MDX expressions to make sure you are not "fixing" on derived, parent, or dynamic cells in the POV section).

Let’s see how we can do it.

Take ASOsamp.Sample as an example.

If we want to copy all data from "Prev Year" to "Curr Year", we create a MaxL script like the one below:

execute calculation on database ASOsamp.Sample with local script_file "/hyperion/Test_ASO_Calc.csc"
POV "Crossjoin(Descendants([MTD], [Time].Levels(0)),
     Crossjoin({[Original Price], [Price Paid], [Returns], [Units], [Transactions]},
     Crossjoin(Descendants([Transaction Type], [Transaction Type].Levels(0)),
     Crossjoin(Descendants([Payment Type], [Payment Type].Levels(0)),
     Crossjoin(Descendants([Promotions], [Promotions].Levels(0)),
     Crossjoin(Descendants([Age], [Age].Levels(0)),
     Crossjoin(Descendants([Income Level], [Income Level].Levels(0)),
     Crossjoin(Descendants([Geography], [Geography].Levels(0)),
     Crossjoin(Descendants([Stores], [Stores].Levels(0)),
     Descendants([Products], [Products].Levels(0)))))))))))"
SourceRegion "{[Prev Year], [Curr Year]}";



My custom calculation script, Test_ASO_Calc.csc, looks like this:

[Curr Year] := [Prev Year];


This process runs fast, much like DATACOPY in BSO.

Wednesday, June 19, 2013

Loading Data to an ASO Cube - How the "Override" Option Works


Data load properties in the rule file like "Overwrite existing values" and "Add to existing values" have no impact on data loads for ASO cubes, whereas "Subtract from existing values" works as expected. Instead, we specify the load type while importing data from the load buffer in the MaxL statement (or in the "Load data" window if you are loading from EAS). Below are some examples of how these properties work.


1) Override (Default): By default, ASO uses the "override" property. Data is overwritten only for the intersections we load.

Syntax:
import database 'AppName'.'DBname' data from data_file '/Folder/Newfile.txt' using server rules_file 'Test' to load_buffer with buffer_id 1 on error write to 'Newfile.txt';

import database 'AppName'.'DBname' data from load_buffer with buffer_id 1;
                                                                (or)
import database 'AppName'.'DBname' data from load_buffer with buffer_id 1 override values;


For example,

    Existing Data: Jan -> Sales = 10
                   Feb -> Sales = 20

          Loading: Feb -> Sales = 30
                   Mar -> Sales = 40

       After Load: Jan -> Sales = 10
                   Feb -> Sales = 30
                   Mar -> Sales = 40

2) Override all data: This works differently. Unlike "override values", it replaces all the data existing in the cube with the data file we load. Data cells in the cube are cleared if we do not load to them.

Syntax:
import database 'AppName'.'DBname' data from load_buffer with buffer_id 1 override all data;


For example,

    Existing Data: Jan -> Sales = 25
                   Feb -> Sales = 569

          Loading: Jan -> Sales = 34

       After Load: Jan -> Sales = 34
                   Feb -> Sales = #Mi


3) Override incremental data: Unlike the override options above, "override incremental data" deals with slices of data. When you specify this option, the data is loaded into an incremental slice. When you retrieve data, you get the sum of the primary slice and the incremental slices.

Syntax:
import database 'AppName'.'DBname' data from load_buffer with buffer_id 1 override incremental data;

For example, if we have data like below when loading for the first time:

    Existing Data: Jan -> Sales = 25
                   Feb -> Sales = 569

          Loading: Jan -> Sales = 34
                   Mar -> Sales = 100

       After Load: Jan -> Sales = 59  (25 primary slice + 34 incremental slice)
                   Feb -> Sales = 569 (primary slice)
                   Mar -> Sales = 100 (incremental slice)


When we load the same file with different data a second time, it looks like this:

    Existing Data: Jan -> Sales = 59  (25 primary slice + 34 incremental slice)
                   Feb -> Sales = 569 (primary slice)
                   Mar -> Sales = 100 (incremental slice)

          Loading: Jan -> Sales = 5

       After Load: Jan -> Sales = 30  (25 primary slice + 5 incremental slice)
                   Feb -> Sales = 569
                   Mar -> Sales = #Mi

From the above two examples we can conclude that each load overrides all the data in the incremental slice but does not change the primary slice. Mar -> Sales was set to #Mi because no data was loaded to that cell the second time.
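As a side note, incremental slices can later be folded into the primary slice from MaxL, for example:

alter database 'AppName'.'DBname' merge all data;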


The difference between "add values" and "override incremental data" is that "add values" always adds the data to the primary slice, whereas "override incremental data" overrides the data within the incremental slice.
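For comparison, "add values" is specified on the same import statement as the other options:

import database 'AppName'.'DBname' data from load_buffer with buffer_id 1 add values;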

Author : Deepa.