Note 590370 - Too many uncompressed requests (F table partitions)

Summary
Symptom

Unspecific symptoms on an InfoCube, such as:
    • very long execution times for statistics update or index rebuild
    • ORA-14400 while loading (possibly only intermittently)
    • problems during aggregate buildup
Other terms

Partitions F facttable Aggregates Indexes Statistics long execution compression ORA 14400 Performance E facttable
Reason and Prerequisites

In many cases, the cause of these symptoms has been too many uncompressed requests in the F fact table of the corresponding InfoCube, or a high overall number of partitions on the system.
  • Partitioning of an InfoCube's F fact table

Under Oracle, every uncompressed request corresponds to one partition in the F fact table of the corresponding InfoCube. Each of these partitions also brings along a full set of secondary local bitmap index partitions.

If you have a cube with 1000 uncompressed requests and 10 dimensions (every dimension brings with it a local bitmap index), you have 1000 partitions for the table itself and 10 x 1000 index partitions, which makes a total of 11,000 database objects. This number of objects hurts most when updating statistics or dropping/re-creating secondary indexes. In situations with a high load frequency, you will almost certainly run into problems in this area sooner or later.
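The arithmetic behind this object count can be sketched as follows (a minimal illustration only; the figures of 1000 requests and 10 dimensions come from the example above):

```python
def f_table_objects(uncompressed_requests, dimensions):
    """Estimate the number of partitioned database objects on an F fact table.

    Each uncompressed request adds one table partition, and each local
    bitmap index (one per dimension) adds one index partition per table
    partition.
    """
    table_partitions = uncompressed_requests
    index_partitions = uncompressed_requests * dimensions
    return table_partitions + index_partitions

# The example from the text: 1000 requests, 10 dimensions
print(f_table_objects(1000, 10))  # 11000 database objects
```

Note how the index partitions, not the table partitions, dominate the total: every additional dimension multiplies the cost of each uncompressed request.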

If you run a statistics update, all 11,000 objects receive new statistics. This takes some time and may cause locking problems during execution, or at least serialization when the statistics fields in the database catalog tables are changed.

If you drop/re-create secondary indexes, all these objects must be removed from or created in the database catalog. This is also a serial operation and may again cause locking situations and/or long runtimes. In addition, there is considerable database cost for returning the allocated disk space.

You will not experience trouble from this direction in the beginning, while the number of partitions is low. But you will run into problems, randomly at first, as the number of partitions increases, and then permanently, once the number of partitions has exceeded some threshold (which cannot be quantified exactly, as it is system- and context-dependent).

In especially bad situations, the flags for statistics update and/or index deletion/re-creation are set, so that each and every load triggers the time-consuming steps described above.

Furthermore, if you find one cube affected by this "partition disaster", there are likely to be others as well. This adds to the problems and brings forward the point in time at which you will experience them.

A maximum of 20-30 partitions per F table is recommended (this figure comes from a "one load a day, 30 days a month" scenario).

Partitions on the E fact table may be more numerous: because the E fact table is partitioned by a time criterion (0CALMONTH or 0FISCPER), the time restrictions usually present in queries reduce the number of partitions accessed (partition pruning).

On the F table, every partition must be accessed for every query, because there is no effective restriction on the package dimension, which is the partitioning key of the F fact table, and, for load performance reasons, the indexes on the F table are all local. This causes unacceptable query response times if there are too many partitions.

The typical measure taken in such situations is to build aggregates on these cubes (instead of compressing). This does not help; it only postpones the disaster to a later point in time. At some point, compression is no longer possible in a reasonable amount of time, because deleting a partition takes so long that the whole workflow on that cube (and on others further up the chain) no longer fits into the allocated time frames.
  • Partitioning of an InfoCube's E fact table

Numerous questions about this note made it necessary to say a little about E tables here as well, since E fact tables may also be partitioned. However, they may only be partitioned by the time characteristics 0FISCPER and 0CALMONTH, and only when they are activated. Any attempt to switch on partitioning after activation has taken place and the cube contains data (regardless of whether in the F or E table) is not supported at all. Instead, you have to create a second cube, partition it, and then load it from the first via an export DataSource. (Customers trying any other tricks do so at their own risk. They should definitely test such tricks before applying them to production, check whether they can still see their data afterwards, and should also have practiced recovering a cube from their last backup.)

Since there is a tendency to partition for a longer period here (I think it is reasonable to plan for 5-10 years with 12 partitions each, for a total of 60-120 partitions on the E fact table), we generally see more partitions on an E table than on an F table. In general this is not a problem, since index drop/re-create situations occur on the F fact table, not the E fact table.

BUT the E partitions nevertheless fill up the database catalog as well. And since the E table partitioning is passed on to all aggregates (at least to those that contain the corresponding partitioning time characteristic), the number of partitions due to E fact tables must be multiplied by the number of aggregates of that cube, and may add up substantially in certain scenarios.
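As a rough illustration of how E partitions multiply across aggregates (the aggregate count below is an invented example figure; the 12-partitions-per-year assumption corresponds to monthly partitioning by 0CALMONTH):

```python
def e_partition_total(years, aggregates_with_time_char):
    """Rough total of E-table partitions for a cube and its aggregates.

    Assumes monthly partitioning (12 partitions per year) and that the
    partitioning scheme is inherited by every aggregate that contains
    the partitioning time characteristic.
    """
    per_table = years * 12
    # the cube's own E table, plus one partitioned E table per such aggregate
    return per_table * (1 + aggregates_with_time_char)

# 10 years of monthly partitions, 5 aggregates containing 0CALMONTH
print(e_partition_total(10, 5))  # 720 catalog partitions from E tables alone
```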

At some point your E table partitioning will become outdated (you have partitioned up to month M in year Y, and now month M+1 in year Y+1 is approaching) and you will want to extend the E table's partitions. Please check OSS note 895539, which contains the program SAP_PART_EXTEND_ETABLE; it allows you to add partitions to partitioned E tables. You run it, enter the name of the cube, enter the number of partitions to add (1-12, to keep you from accidentally adding a higher number), check the do_it flag, and there you are. The aggregates are not adapted automatically, but the next time you rebuild them (or a change run rebuilds them) they will reflect the new partitioning as well.

No tool yet exists that reduces the number of partitions. Two situations could call for such a tool. First, when you archive "old" data by time, you may face a situation where the partitions at the "historic" end are emptied of data. Second, if you started out with some skepticism and partitioned your cubes far into the future (e.g. up to December 2099), you may have empty partitions at the "future" end of the E table.

Such situations can currently only be managed by the suggestion above: create a copy with improved partitioning and reload it.
In both cases, however, the next BW release will offer an easier approach.
  • The future (the next release)

The final tool for these E table partition related questions will be delivered with the next release of BW, as part of NetWeaver 2004s. There will be a repartitioning tool allowing you to dynamically (well, you still shouldn't do this every day ;-) change the partitioning range of a cube. It will also allow you to switch on partitioning for cubes that were unpartitioned before, without going through the reload scenario that is necessary in BW 3.X (where X = 0, 1 or 5).

Since that tool is based on major rewrites of existing functionality as well as the creation of new functionality, it cannot be downported to BW 3.X.

Solution

Compress, compress, compress!

Compress the InfoCube on a regular basis.

There are tools built into BW that allow for regular compression of an InfoCube.
  • You can specify the number of requests or the number of days you want to keep uncompressed on the "Collapse" tab strip of the InfoCube administration in the Administrator Workbench. Just press the "Calculated Request ID" button, flag the desired option, and enter the number of days or requests.
  • You can switch on automatic compression after the requests have been rolled up, under the "Environment" -> "Automatic request processing" menu in the cube administration.
  • You can set up your own event, or a process chain (3.X only), to compress according to your business/workflow needs.

Please do not leave InfoCubes uncompressed. There are tools to help you organize a structured approach to compression.

Sometimes compression is neglected not just on one single cube, but on many or even all cubes of a BW system. This will of course produce the above problems much earlier. Here are some hints on how to analyze the situation:
  • Use the report SAP_DROP_EMPTY_FPARTITIONS on an InfoCube to see how many partitions there are on a cube (this works for aggregates as well) and how much data they contain.
  • Have a look at the InfoCube's P dimension table (/BI[C|0]/D<cubename>P). The number of entries (minus 1 or 2) reflects the number of uncompressed partitions.
  • Use the report RSORAVDV to look at the partitioned tables in the Oracle database catalog. Choose the view DBA_PART_TABLES, display TABLE_NAME and PARTITION_COUNT, and check whether there are tables with 10 or more partitions (PARTITION_COUNT > 9); you can also restrict TABLE_NAME LIKE '/BI%/F%' to see only the F fact tables.
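If you prefer to query the catalog directly rather than through RSORAVDV, the check from the last hint corresponds to a filter like the following. This is a sketch only: the SQL is shown as a string for reference, and the sample catalog rows are invented for illustration.

```python
# The catalog query behind the check (run e.g. in SQL*Plus as a DBA user):
QUERY = """
SELECT table_name, partition_count
  FROM dba_part_tables
 WHERE table_name LIKE '/BI%/F%'
   AND partition_count > 9
"""

def flag_suspect_f_tables(catalog_rows, threshold=9):
    """Apply the same filter to rows of (table_name, partition_count)."""
    return [(name, count) for name, count in catalog_rows
            if name.startswith('/BI') and '/F' in name and count > threshold]

# Invented sample catalog content for illustration
rows = [('/BIC/FSALES', 240), ('/BIC/ESALES', 60), ('/BIC/FCOSTS', 4)]
print(flag_suspect_f_tables(rows))  # [('/BIC/FSALES', 240)]
```

Any F table this flags is a candidate for the compression measures described above.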


Please take the above measures first, and then check whether your initial problems persist.


Header Data
Release Status:Released for Customer
Released on:28.02.2006  15:46:19
Master Language:English
Priority:Recommendations/additional info
Category:Consulting
Primary Component:BW-SYS-DB-ORA BW ORACLE
Secondary Components:BW-WHM-DBA-ICUB InfoCubes
BW-BEX-OT-DBIF-CON Condensor
Affected Releases
Release-Independent
Related Notes

 
1681396 - Query Performance
1599602 - FAQ: BW system performance
1548125 - Interesting facts about Inventory Cubes
1500459 - Analysis of facttable partitions on basis Infocube (ORA DB)
1461926 - FAQ: BW report SAP_INFOCUBE_DESIGNS
1388570 - BW Change Run
1335666 - Selective deletion on ORA F table with too many partitions
1273778 - Currency conversion from SSK into Euro in SAP BW
1178655 - E2E Diagnostics - BI InfoCube compression - information
1013912 - FAQ: Oracle BW performance
903886 - Hierarchy and attribute change run
790223 - DBIF_RSQL_INVALID_RSQL when filling aggregates
779123 - Memory-related terminations at runtime
778709 - Rollup of aggregates and indexes (2)
640464 - Termn rlgmnt run: RSDU_TABLE_COPY_INI/DDIF_TABL_ACTIVAT
588668 - FAQ: Database statistics
407260 - FAQs: Compression of InfoCubes
385163 - Partitioning on ORACLE since BW 2.0
314719 - BW2.0 ORACLE FEATURES
