1597364 - FAQ: BW-BCT: Extraction performance in source system

Symptom
Poor performance while extracting data from a BW source system; performance problems in LO, FI, HR, and CO extraction.

Other Terms
ROIDOCPRMS, SMQS, ALE, IDoC, qRFC and tRFC, RMBWV3nn.

Reason and Prerequisites
Extraction performance problems due to incorrect parameter settings and other factors related to DataSource-specific settings across various application components.

Solution

1. Package size settings for BW data loads

The general formula is:
Package size = MAXSIZE * 1000 / size of the transfer structure,
but not more than MAXLINES.

Look up the transfer structure (extract structure) in table ROOSOURCE in the active version of the DataSource, and determine its size using SE11 (ABAP Dictionary) -> Extras -> Table Width -> Length of data division under ABAP.
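
Instead of browsing the table, the lookup can also be sketched as a query along the following lines (an illustrative sketch only; the field EXSTRUCT holding the extract structure name and the example DataSource 2LIS_11_VAITM are assumptions):

  SELECT OLTPSOURCE, EXSTRUCT
    FROM ROOSOURCE
   WHERE OLTPSOURCE = '2LIS_11_VAITM'   -- hypothetical example DataSource
     AND OBJVERS    = 'A';              -- 'A' = active version, as referenced above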

To ensure the performance and stability of the upload process, it is very important that table ROIDOCPRMS is set up correctly on both the source system side and the BI side (for DataMart-related loads). This table contains the control parameters for data transfer. To maintain it in your R/3 system, go to transaction SBIW -> General settings -> Maintain control parameters for data transfer.
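
To review the current values, you can display table ROIDOCPRMS in SE16 or query it directly (a minimal sketch; SLOGSYS is assumed to be the key field holding the logical system name):

  SELECT SLOGSYS, STATFRQU, MAXPROCS, MAXSIZE, MAXLINES
    FROM ROIDOCPRMS;   -- one row per source logical system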

Although there is no specific recommendation for these settings, standard values are as follows:
  STATFRQU     10
  MAXPROCS     03
  MAXSIZE      value between 20,000 and 50,000   (default is 10,000)
  MAXLINES     value between 50,000 and 100,000  (default is 100,000)
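
As a worked example (hypothetical numbers): with MAXSIZE = 50,000 and a transfer structure that is 500 bytes wide,

  Package size = 50,000 * 1000 / 500 = 100,000 records,

which is then capped at MAXLINES (here 100,000), so each data package carries 100,000 records.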

Recommendation:
  • In a resource-constrained system, reduce the data package size.
  • In larger systems (with adequate CPU and memory), increase the package size to speed up collection. However, take care not to impact the communication process or unnecessarily tie up the work processes.

For more details, check Note 409641.



2. Using SMQS

If there is a large volume of data or parallel requests for different DataSources, performance problems may occur when you transfer data from the source system to BW.

As of BW Service API 7.0, the transfer of data was switched to the default tRFC scheduler, represented in transaction SMQS; a BI 7.X system is served by the RFC scheduler SMQS by default. Make sure that the BW destination is registered in SMQS. You must set Max. Conn. and Max. Runtime specifically for your system; the Basis documentation describes how these parameters take effect. The defaults are Max. Conn. 10 and Max. Runtime 60.

ROIDOCPRMS and SMQS:
  • ROIDOCPRMS parameters (data package size, and so on) are still used in the data transfer; however, the work process parameter (MAXPROCS) is not used if the 7.X indicator is selected in SMQS.
  • For more information, check Note 1163359.



3. qRFC and tRFC performance

If the tables ARFCSSTATE, ARFCSDATA, ARFCRSTATE, TRFCQDATA, TRFCQIN, TRFCQOUT, or TRFCQSTATE contain a large number of entries, this leads to poor performance during tRFC and qRFC processing.
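
A quick way to gauge the problem is to count the entries, for example (a sketch; on very large tables, the table statistics in DB02 are cheaper than a live count):

  SELECT COUNT(*) FROM ARFCSSTATE;   -- state records of outgoing tRFC calls
  SELECT COUNT(*) FROM ARFCSDATA;    -- tRFC payload data
  SELECT COUNT(*) FROM TRFCQOUT;     -- outbound qRFC queue entries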

If there is a large number of entries in the tRFC and qRFC tables, see Note 375566 and carry out the manual steps described there.

If tRFC or qRFC processing causes a high database load or long runtimes, check SAP Note 371068, which deals with measures for better performance.

Performance can be poor when accessing the tRFC and qRFC tables even though the index fits the executed statement optimally and is already being used in the newest version of the tRFC/qRFC coding.
See SAP Note 407125 for poor performance of qRFC and tRFC on Oracle.
Check the storage quality as outlined in Note 332677.
For resource management for tRFC and aRFC, see Note 74141.



4. IDoc processing and Performance

The "Trigger Immediately" \ "Transfer IDoc Immed." options should be always used.


Please see note 1333417 for more information.

For the number of dialog processes used in data transfer, see Note 916706.
If an IDoc hangs, check Note 1280898.



5. Collective Job performance

Performance problem in collective job RMBWV3nn

Maintain the control parameters for the extraction of the MCEX(xx) queue as follows:
Transaction LBWR -> Queue name (MCEX queue) -> Customizing -> No. of Documents = 1000 to 3000. Check that this is reflected in field UPD_HEAD_COUNT of table TMCEXUPD.

The adjustment of TMCEXUPD-UPD_HEAD_COUNT needs to be tested for each application, as setting too large a value can result in a memory dump.



6. Basis table size in BW source system

There are many administrative Basis tables that can grow very large, and thereby cause problems, if their entries are not regularly deleted or archived, or if the configuration is incorrect.

a) tRFC and qRFC tables:
  • ARFCSSTATE, ARFCSDATA, ARFCRSTATE, TRFCQDATA, TRFCQIN, TRFCQOUT or TRFCQSTATE
  • Check the proposals from Note 375566.

b) Tables with ALE change pointers: BDCP, BDCPS, BDCP2
  • See Note 513454 and regularly schedule report RBDCPCLR to delete change pointers that are no longer required.

c) Update tables: VBDATA, VBMOD, VBHDR, VBERROR
  • Check proposals from Note 67014.



7. Setup table performance and downtime

Review Note 753654; remember to start report RMCENEUA when the system load is low, and run it in parallel in small ranges.

Check these notes also.
437672     LBWE: Performance for setup of extract structures
436393     Performance improvement for filling the setup table



8. Extraction Performance
  • Ensure that you do not create more than 1000 data packages for each request.
  • Ensure that you do not load more than 10 million records for each request.

For more information, check Note 892513.
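
As a rough worked example using the standard values from section 1: with MAXLINES = 50,000, a request of 10 million records is split into 10,000,000 / 50,000 = 200 data packages, which stays well within both limits above.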

Check whether there is any customer enhancement and whether performance is affected by the user exit: deactivate the code and run the extraction again. If the problem lies in the custom code, optimize the code.

Setup table
  • You can schedule the jobs for the setup in parallel (in applications that support this function; do not use this for application 5 / setup transaction OLIQBW). See Note 436393 for more information.

Regarding the package size for SD, see Note 423118.



9. MM Extraction

See note 1474938 for general OLI3BW performance.

OLI3BW has a performance issue when extracting purchase orders for services.
Apply Note 1404736 if you have many purchase orders involving services to process; you can check this in your tables (EKPO, and so on) in transaction SE16N. If you do not use services, you do not need to apply this note.



10. FI extraction

In the case of poor performance of the FI extractors, refer to Notes 641977, 760973, 798536, 787713, 815638 and 856148. Also check whether the indexes used are the standard ones delivered by SAP; if not, this may also be a reason for poor performance, and the standard indexes should be activated. Because standard indexes are delivered inactive, make sure that the indexes mentioned in the notes above are actually present by checking manually, even if the corrections in the notes have been implemented; the index status must be checked manually for all releases.

Check whether there is any custom coding for the FI extractors in customer exits, and test the extractor without the exit to see whether it makes any difference to performance.

See note 1392817 for delta procedures and performance on FI-GL extractors.

0FI_GL_10 Performance
Create the following additional secondary indexes for your NewGL totals table (for example FAGLFLEXT):
   a) Index using the fields
        RCLNT
        RLDNR
        TIMESTAMP
        RBUKRS
(Also see Note 1531175.)


   b) Index using the fields
        RACCT
        PRCTR (if the profit center scenario is used)
      followed by the two to four most selective fields of your totals table.

See note 1056245 for more details.
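
In an SAP system these indexes are normally defined as secondary indexes in the ABAP Dictionary (SE11) and activated there; purely as an illustration of index a), the database-level equivalent could look roughly like this (a sketch; the index name Z01 is made up):

  CREATE INDEX "FAGLFLEXT~Z01"
    ON FAGLFLEXT (RCLNT, RLDNR, "TIMESTAMP", RBUKRS);   -- field order as in a); "TIMESTAMP" quoted because it is an SQL keyword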



11. CO Extraction performance

Extracting CO data takes a long time.

Make sure that you have activated index ~4 (the delta index) of table COEP at the database level.
  • Use transaction SE16 to create an entry (if it does not yet exist) in table BWOM_SETTINGS with the following key fields:
             OLTPSOURCE = <name of the affected DataSource>
             PARAM_NAME = 'NOBLOCKING'
  • Use transaction SE16 to set the value in the PARAM_VALUE column of this BWOM_SETTINGS entry to 'X' (a verification query follows below).
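
To verify the entry afterwards, a query along these lines can be used (a sketch; the field names are taken from the steps above):

  SELECT OLTPSOURCE, PARAM_NAME, PARAM_VALUE
    FROM BWOM_SETTINGS
   WHERE PARAM_NAME = 'NOBLOCKING';   -- PARAM_VALUE should show 'X'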

See SAP note 549552 for more details.

For more details, check SAP Notes 382329, 578417, 416265, 836740, and 1424403.

It takes a very long time to extract the CO tables COSP and COSS on Oracle database systems when there are no qualified index selections. Affected DataSources:
0CO_OM_CCA_1, 0CO_OM_CCA_2, 0CO_OM_OPA_1, 0CO_OM_NAE_1, 0CO_OM_NTW_1,
0CO_OM_NWA_1, 0CO_OM_WBS_1, 0CO_OM_ABC_1

See note 1312060 for more details.



12. HR Performance

The extraction of time data in delta mode requires long runtimes in the OLTP source system (DataSources 0HR_PT_1, 0HR_PT_2, 0HR_PT_3).

If you have not already done so, create and activate the following non-unique index on each of these DB tables:

a)  PTDW_PWS_DB
b)  PTDW_TIMES_DB
c)  PTDW_QUOT_DB

Index fields: MANDT (client), PERNR (personnel number), KMONTH (calendar month).
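
These indexes would normally be created as secondary indexes in SE11 and activated there; purely as an illustration, a database-level sketch for one of the tables (the index name Z01 is made up):

  CREATE INDEX "PTDW_TIMES_DB~Z01"
    ON PTDW_TIMES_DB (MANDT, PERNR, KMONTH);   -- non-unique, field order as above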

Check the timeframe in table T569R.
To keep the runtime short, choose the Earliest and Latest Transfer Date in table T569R so that only the necessary data is transferred. Maintain the entries in table T569R as described in Notes 353177 and 696838, or make sure that this has already been done.

Refer to Note 354584 for more information on the HR extractors.


13. CO-PA extraction

If there is a performance issue, the join method used requires an additional index on the line item tables. Create an additional secondary index with the fields MANDT and PAOBJNR (in this sequence) for the tables CE1xxxx and CE2xxxx (xxxx = operating concern); the index is not unique. You can also create the secondary indexes COEP~4 and COEJ~4. See Note 0210219 for more details.
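
As above, the index would normally be defined in the ABAP Dictionary; as a database-level sketch for a hypothetical operating concern 0010 (table CE10010, index name Z01 made up):

  CREATE INDEX "CE10010~Z01"
    ON CE10010 (MANDT, PAOBJNR);   -- non-unique; MANDT before PAOBJNR, as required above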

Set the flag NO_GROUP_BY = 'X' in table TKEBWL, and check the extraction again. See Note 1432022 for more details on the technical settings for the CO-PA extractors.



14. Performance problems with ALE change pointers

See Note 513454 and regularly schedule report RBDCPCLR to delete change pointers that are no longer required.



15. TCT and BI Admin cockpit

Performance problems in ST03N when calling the BI workload.

The tables RSDDSTAT_OLAP and RSDDSTAT_DM store query runtime statistics and keep growing, depending on the activity in the system. If these tables become too big and, in addition, the TCT_KEEP_OLAP_DM_DATA_N_DAYS parameter in table RSADMIN is not maintained, performance problems result.

   a) Automatic deletion during data load
  • By default, with each delta load for query runtime statistics (DataSources 0TCT_DS01 - 0TCT_DS03), statistics data older than 14 days (30 days as of SP16, SAP Note 1095411) is deleted from the BI statistics tables. This time frame can be customized using the TCT_KEEP_OLAP_DM_DATA_N_DAYS parameter in the RSADMIN table; a check query follows below.
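
To check whether the parameter is maintained, you can display table RSADMIN in SE16 or query it directly (a minimal sketch; RSADMIN stores parameters as OBJECT/VALUE pairs):

  SELECT OBJECT, VALUE
    FROM RSADMIN
   WHERE OBJECT = 'TCT_KEEP_OLAP_DM_DATA_N_DAYS';   -- no row means the default retention applies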


   b) Manual deletion of statistics data
  • Use transaction RSDDSTAT; the dates to be deleted can be selected using program RSDDSTAT_DATA_DELETE.



16. Trace and performance

If there is a performance issue in extraction despite all of the parameter settings given above, a trace should be run while the extraction is running. There are various traces (ST05, SE30, and so on), but you can combine everything in an ST12 trace.
  • ST12: combines the ABAP and performance (SQL) traces into one transaction, with major functional enhancements, especially for the ABAP trace part. ST12 allows you to activate the ABAP trace for another user, and can also be used to trace a work process, a program/transaction, or a user. At the click of a button you can view the SQL and ABAP trace information. See Note 755977 for more information.

  • STAD: used to check the response time of programs/transactions; provides various levels of detail that can be tailored.


Poor performance when accessing Oracle DBA views – the story

Introduction

Hello SCN community,
I received a Go-Live check report from SAP which stated that expensive SQL statements existed for a particular SAP system, so I decided to check whether I could find SQL statements that could be improved.

Identifying the expensive SQL statement(s)

The starting point is transaction DB02, also referred to as the DBA Cockpit in recent SAP product versions.
[Picture 1.1: Performance overview in DB02]
To find expensive SQL statements you can check the Shared Cursor Cache which you can find under Performance -> SQL Statement Analysis -> Shared Cursor Cache (see picture 1.1).
Double-click on it to go to the selection screen for the shared cursor cache.
[Picture 1.2: Buffer gets option]
Here I usually start by clicking the Buffer gets option (see picture 1.2) under List Sort, as the most expensive statements usually also have the most buffer gets. The most important sort options are Disk Reads, Buffer gets and Elapsed time.
Now click Yes to get the overview.
[Picture 1.3: Shared cursor cache overview]
After checking the top SQL statements by sorting on buffer gets, I sorted the list on Elapsed Time and found that the top 3 remained the same.
You can see that the top 3 are executed 412 times (first row) with buffer gets ranging from 56 million to 114 million.
[Picture 1.4: SQL statement detail]
Let's take a look at the first SQL statement. To view the full statement, click on the SQL statement column in the result (picture 1.3).
You have some options here (picture 1.4): at the top you can request DDIC information; next to it, an explain plan (to check how the data is accessed: index, full table scan, and so on); and a button to jump to the call point in ABAP (which makes it easy to identify which program or function the statement comes from). If you click the ABAP call point button and get the error DB6_DIAG_VIEW_SOURCE: "program source not found", SAP Note 1309309 ("Application Info" is missing) could help you resolve it.
Looking at the statement itself, you can see that it selects data from the dba_segments view and the dba_tablespaces view. Since a lot of customers have had expensive SQL statements investigated by SAP, a bucketload of SAP Notes exists on SQL statements.

Information to improve the SQL statement

[Picture 1.5: SAP Notes search]
Let us take a look at some SAP Notes we can find. Using the right keywords on the right source is very important when you search for information online; if you are interested in finding valuable information on the internet, you are welcome to read one of my previous blogs on this topic: How to find valuable information on the internet. To search for SAP Notes I used the keywords "expensive SQL dba_segments" (see picture 1.5).
[Picture 1.6: SAP Note search result]
As a result of the search I find only 6 SAP Notes (see picture 1.6). That is great, isn't it? You can already see I checked SAP Note 871455, since it seems to be the most relevant for my situation: I'm looking at "possible" poor performance of the dba_segments view.

Following SAP Note instructions

Next I check the content of SAP Note 871455; since it is an SAP system residing on Oracle 10g, I scroll down to point 4, Oracle 10g.
The first remark is to make sure that dictionary statistics and fixed object statistics are in place as described in SAP Note 838725. I have those, and by the way, you can check one of my previous blogs on this topic (how to calculate dictionary and system statistics along with the regular statistics run): Improving Oracle performance by maintaining exception table DBSTATC.
Alright, moving on to the next point: "For problems relating to DBA_SEGMENTS, note the following points" (extract from SAP Note 871455). Great, this looks like the right section.
SAP Note 871455 states that you have to check that there are no remainders from earlier optimization measures, and that you can find those under the paragraph "Oracle 9i (9.2.0.5 or higher)". Let us take a look.
There I can find the following information:
Ensure that only BW systems create statistics for the Oracle DDIC. If the following query returns tables such as TAB$ and IND$ in a non-BW system, delete the statistics as described in SAP Note 558746 (in the lower part of the note) (extract from SAP Note 871455):

SELECT TABLE_NAME FROM DBA_TABLES WHERE OWNER = 'SYS' AND LAST_ANALYZED IS NOT NULL;

Where is the where?
Now maybe it's just me, but why is there no condition on the table name in this statement? The text says to check whether TAB$ and IND$ are returned.
Imagine you go to a car dealer and you want to buy an Audi R8 (it's imaginary, so it's OK). The dealer has around 200 Audi R8s on the parking lot and a map with an index showing where each specific Audi R8 is parked. If he says "check each car on the parking lot, and if you find that specific Audi R8, let me know", I doubt he would sell much.
In the same logical sense, let's rewrite the above statement to only look for table name TAB$ or IND$ (since that is what we are really looking for).
SELECT TABLE_NAME FROM DBA_TABLES WHERE OWNER = 'SYS' AND LAST_ANALYZED IS NOT NULL AND TABLE_NAME='TAB$' OR LAST_ANALYZED IS NOT NULL AND TABLE_NAME='IND$';

The result
The result:
TABLE_NAME
----------
IND$
TAB$
So we do actually find these entries. As such, following the instructions quoted above, we should delete the statistics as described in SAP Note 558746 - Better Oracle Data Dictionary BW Performance.
SAP Note 558746 states that the solution is only valid for BW 2.x and BW 3.x SAP systems. I will ignore this as long as the instructions make sense.
Cleaning up
Let's perform the action.
[Picture 1.7: Delete statistics script]
In the lower part of SAP Note 558746 you find SQL statements to delete the Oracle data dictionary statistics. Copy the lines, paste them into SQL*Plus, and execute them.
Back to the main SAP Note
Once that is done, we return to the Oracle 10g section in SAP Note 871455. There is another bullet point for dba_segments; an extract from the content of the SAP Note:
Therefore, as described in the "Oracle 9i (9.2.0.5 or higher)" section, check whether problematic segments exist and, if necessary, run the script relating to DBMS_SPACE_ADMIN.TABLESPACE_FIX_SEGMENT_EXTBLKS again.

Alright, back to the Oracle 9i section to check for content on problematic segments. When I search for DBMS_SPACE_ADMIN.TABLESPACE_FIX_SEGMENT_EXTBLKS in SAP Note 871455, I can only find the following in the Oracle 9i section:
If you use 9.2.0.5 on HP-IA64, the definition for the tablespace_fix_segment_extblks procedure is missing. Therefore the above script terminates at runtime with the following error: PLS-00302: component 'TABLESPACE_FIX_SEGMENT_EXTBLKS' must be declared.

Checking the DBA segments
Now this is somewhat confusing: instead of using proper numbering in the note (or something similar) and referring to that numbering, the Oracle 10g section just refers to TABLESPACE_FIX_SEGMENT_EXTBLKS. It says "if you use 9.2.0.5"; no, I'm on Oracle 10g, so I read on and notice the relevant passage. Alright, so I should look further up, in SAP Note 871455 that is, where I find the following:
Execute the script Segments_DBA_SEGMENTSPrecalculation.txt of Note 1438410. This generates a list of DBMS_SPACE_ADMIN commands if required.

Now we are getting somewhere again. Let us take a look at SAP Note 1438410 - SQL: Script collection for Oracle, which I happen to know already (useful stuff in it, you should check it out). This note provides a zip file with SQL scripts that you can execute in DBACOCKPIT (or directly in Oracle).
[Picture 1.8: DBA segments script]
Among all the scripts I can find Segments_DBA_SEGMENTSPrecalculation.txt.
[Picture 1.9: SQL Command Editor]
Let's run this and take a look at the results. To run this I will use the SQL Command Editor in the local SAP system (transaction DB02).
The result
[Picture 2.0: Script result]
The result is nothing. In SAP Note 871455 the following is stated on the result of the script:
If all segments are converted cleanly, the script Segments_DBA_SEGMENTSPrecalculation.txt may no longer return lines.

Done?
Good news: this means we are done. The instructions in the Oracle 10g section for DBA_SEGMENTS are satisfied. But wait, does all this make sense? When you go through the actions performed in this blog, you come to the conclusion that you removed the Oracle data dictionary statistics that were available for this SAP system.

The dark side

Before someone asks: yes, I'm a fan of Star Wars (I guess I give the stereotypical administrator geek some credit). If I could choose my role, I would be a Jedi SAP Administrator :)
This is a great example of what the dark side can do. Information on what I consider to be the dark side can be found in one of my other blogs: To read or not to read the manual, that's the question.
Of course, if you delete statistics, you should recreate them to get correct ones, so I performed a system statistics and dictionary statistics run. When you then perform the initial statement again, to check whether the old statistics are present, you get the following result:
SELECT TABLE_NAME FROM DBA_TABLES WHERE OWNER = 'SYS' AND LAST_ANALYZED IS NOT NULL AND TABLE_NAME='TAB$' OR LAST_ANALYZED IS NOT NULL AND TABLE_NAME='IND$';

The result, huh?
The result:
TABLE_NAME
_________
IND$
TAB$
This makes it look like nothing changed after all the instructions (which take about six pages in Word in blog format) were followed.
The check in the note, that you should perform the actions if you find entries like IND$ or TAB$, seems to be useless, since you get the same result after performing all of the instructions.

A customer message is called for

I decided to create a customer message and handed my six-page-long Word blog to SAP to see what they could make of it. I also raised the point that the instructions didn't state that new statistics runs should be performed, which seemed odd to me, as you delete the statistics.
The first feedback I received offered me a rewritten version of the SQL statement into which I had inserted the additional conditions:
SELECT TABLE_NAME FROM DBA_TABLES WHERE OWNER = 'SYS' AND LAST_ANALYZED IS NOT NULL AND (TABLE_NAME='TAB$' OR TABLE_NAME='IND$');

The rewritten statement looks nicer, but it doesn't change the result.
Confusion on both sides
After some conversation between both parties, I received the feedback to follow the instructions and recalculate the statistics (been there, done that), but there was still no proper way to check whether anything had changed.
Confusion was caused by following instructions for Oracle 10g in the Oracle 9i section, jumping from SAP Note to SAP Note, performing actions from SAP Notes which clearly state they are not valid for Oracle 10g, missing instructions, and so on. My request to rewrite some parts, to avoid other customers getting confused, was returned with a request to "forget" the confusion: rewriting the instructions would be too complicated.

A challenge arises

Of course I like a challenge, so this sounded like a small one I was willing to take up. "Too complicated" doesn't exist in my dictionary.
This calls for a call
My first step in finding an alternative was to call one of my colleagues who knows a lot about Oracle. I asked him if he could point me to some available views that I could use. Most of the time Google is my friend, but Google isn't my only friend.
One of the views he mentioned is DBA_TAB_STATS_HISTORY. I immediately found it interesting and I checked the available columns to see if I could come up with a proper statement.
The statement
So I whipped up a statement which could provide valuable information:
SELECT TABLE_NAME,STATS_UPDATE_TIME FROM DBA_TAB_STATS_HISTORY where (table_name = 'IND$' or table_name = 'TAB$');

I decided to execute the statement on an Oracle database which I suspected had been upgraded from Oracle 9 to Oracle 10 in the past.
The result before following the instructions
The result:
no rows returned
Now this looks interesting, if I get a result after performing the instructions, that is. I also needed to test on newly installed Oracle databases (Oracle 10g, Oracle 11g) to get some confirmation of the result by performing different test cases.
On all the correct or corrected SAP systems, I do get a result back.
The result after following the instructions
The result:
TABLE_NAME STATS_UPDATE_TIME
__________ _________________
IND$ 10-FEB-11 02.17.56.787577 PM +01:00
TAB$ 10-FEB-11 02.18.10.412705 PM +01:00
I provided this information to SAP support, but they insisted on forgetting the confusion; since I was the first to bring it up, apparently no one else found the instructions confusing or noticed that the SQL statement is useless. My response was that other customers should know the correct statement, so they can check whether a given Oracle database still has old statistics. Even if the instructions themselves are valid (although they don't state that you should recalculate the statistics, and I can imagine some people forget to do so and, as a result, run SAP on Oracle without those statistics), the initial check is wrong. As an effect, you could perform the operation on Oracle databases where it is not needed and lose valuable time.
The response was that it was too complicated to change. Because I didn't want to get involved in an endless discussion, I closed the customer message, stating that I would contribute this information on SCN, as I consider that my duty as an active SCN contributor.

The correct instructions

What I found to be the correct procedure for the cleanup of old statistics is to use the following statement to check the state of the statistics:
SELECT TABLE_NAME,STATS_UPDATE_TIME FROM DBA_TAB_STATS_HISTORY where (table_name = 'IND$' or table_name = 'TAB$');

If the result is "no rows returned", I follow the instructions from SAP Note 558746 to delete the statistics on the Oracle data dictionary.
Once the statistics are deleted, I perform a brconnect run to calculate the system statistics and, once that is done, a brconnect run to calculate the dictionary statistics again:
brconnect -u / -c -f stats -t system_stats
brconnect -u / -c -f stats -t oradict_stats
After performing those actions I run the SQL statement again:
SELECT TABLE_NAME,STATS_UPDATE_TIME FROM DBA_TAB_STATS_HISTORY where (table_name = 'IND$' or table_name = 'TAB$');

The result then shows entries for IND$ and TAB$:
TABLE_NAME STATS_UPDATE_TIME
__________ _________________
IND$ 10-FEB-11 02.17.56.787577 PM +01:00
TAB$ 10-FEB-11 02.18.10.412705 PM +01:00

Conclusion

I waited for SAP support's view on the situation. I do have a lot of respect for SAP support services and the people who handle customer messages.
However, when you come across such information, you should do something with it. Stating that it's too complicated isn't a very good answer, and requesting to forget the confusion isn't either. If you receive a question and you understand the customer's concern, act on it. If you are not the correct person to pick up the issue, pass it on to the correct person. There is no problem in requesting assistance from others if it is needed.
This story could have been very different: I could have written a blog about the great interaction between me and SAP support, how the instructions were rewritten and the confusion wiped away, and how I was given the assurance that other customers would be following correct instructions. But that's not how the story ended.
The bottom line is that you should still think about what you are performing. I don't say you have to distrust every single piece of information that is handed to you, but use common sense; don't put it aside.