Summary
Symptom
This note contains additions to the guidelines.
Other terms
Partitioned tables, bitmap indexes, clustered indexes, encoded vector index (EVI), UMG_POOL_TABLE, performance during activation of transfer rule (control indicator L_PSAVER)
Reason and Prerequisites
You want to copy a NetWeaver system homogeneously or heterogeneously using R3load.
A heterogeneous database migration to INFORMIX is not supported.
Solution
Contents
------
- 1. General prerequisites for SAP kernel WEB AS 7.0
- 2. Tasks before the export
- 3. Tasks after the export
- 4. Tasks after the import
- 5. Special notes for Database Platform DB2
- 6. Special notes for Database Platform DB4
- 7. Special notes for Database Platform DB6
- 8. Special notes for Database Platform MSSQL
- 9. Special notes for Database Platform MAXDB
- 10. Special notes for Database Platform ORACLE
- 11. Special notes for Database Platform Sybase ASE
- 12. Special notes for the SAP HANA Database Platform
- 1. General prerequisites for SAP kernel WEB AS 7.0
----------------------------------------------------
Prerequisite for the homogeneous or heterogeneous system copy, with or without changing the database platform: Ensure that your system meets the following requirements: R3load, R3szchk, BW 7.0, BASIS 7.0 patch level 7. Ensure that you use the new tools (R3load, R3szchk) both in the source system when you export the data and in the target system when you import the data.
Irrespective of these minimal requirements, SAP recommends that you import the current Support Packages since the development is updated constantly.
- 2. Tasks before the export
-----------------------
Before you export data from the source system, start the program SMIGR_CREATE_DDL. You can use this program to copy database objects that do not correspond to the SAP standard. These objects include partitioned (fragmented) tables and bitmap indexes. For these objects, specific files of type <TABART>.SQL are generated. These files contain native DDL (CREATE) statements and are evaluated by R3load. Proceed as follows:
- Log on to the productive BW client as a user with SAP_ALL rights.
- Call program SMIGR_CREATE_DDL in transaction SE38.
- Select the database platform and the database version of your target system.
- Set the 'Unicode migration' indicator if your target system is an SAP Unicode system.
- Specify a directory for the '<TABART>.SQL' files to be generated. (If this directory is blank after the program has run, there are no tables in the system that must be treated as BW-specific. This is the case if BW is not operated actively in the system. This note is then no longer relevant and you can proceed in accordance with the standard guidelines.)
- Execute program 'SMIGR_CREATE_DDL'. The system now generates the DDL statements and writes them to the specified directory.
- You must ensure that no further changes (such as, activations, data loads to cubes or field changes) are carried out in the BW system after you call report SMIGR_CREATE_DDL and before you export the data. The best way to do this is to call this report directly before you export the data while the system is locked for the user.
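The directory check described above (an empty directory means there are no BW-specific tables) can be sketched from a shell. This is a hedged example; the directory path is a placeholder for whatever you specified in SMIGR_CREATE_DDL:

```shell
# Hedged sketch: check whether SMIGR_CREATE_DDL wrote any <TABART>.SQL files.
# DDL_DIR is a placeholder; substitute the directory you specified in the report.
DDL_DIR="${DDL_DIR:-/tmp/smigr_ddl}"
mkdir -p "$DDL_DIR"
if ls "$DDL_DIR"/*.SQL >/dev/null 2>&1; then
    echo "SQL files generated - BW-specific objects exist"
else
    echo "directory empty - no BW-specific tables; standard guidelines apply"
fi
```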
- 3. Tasks after the export
------------------------
- 4. Tasks after the import
------------------------
See Notes 1077644, 948780 and 1533143, before you run report RS_BW_POST_MIGRATION.
Before you run report RS_BW_POST_MIGRATION, connect all source systems to the migrated BW system. This is required to ensure that new versions of all PSA (Persistent Staging Area) tables are migrated. Furthermore, you must ensure that all connected SAP systems are active while program RS_BW_POST_MIGRATION is running. Depending on whether or not you have changed the database platform, you must run report RS_BW_POST_MIGRATION with a different variant in the background. The program must run in the background so that the spool log remains available. In this context, DB2 on z/OS, DB2 on AS/400, and DB2 on UNIX/Windows are treated as three different database platforms.
If you have changed the database platform, use variant &POSTMGRDB. If you have NOT changed the database platform, use variant &POSTMGR. If the SAP HANA database is the target database, the variant SAP&POSTMGRHDB must be used in both cases. Program RS_BW_POST_MIGRATION carries out all necessary adjustments after the system copy. This includes adjusting the database-specific ABAP Dictionary entries (modify, delete, enhance), invalidating database-specific (generated) programs, deleting temporary objects from specific LRU buffers, migrating more recent versions for the PSA tables and adjusting table DBDIFF. The runtime of the program may be a few hours.
Note 1149665 adds new parameters (described in detail below) to optimize the performance during the "PSA Tables" processing step (control indicator L_PSAVER):
- L_TI_LT (time stamp format: yyyymmddhhmmss
yyyy = year
mm = month
dd = day
hh = hours
mm = minutes
ss = seconds
For example, 20080325150813)
If this parameter is set, only transfer rules that were last activated before the time stamp are taken into account. This parameter is very useful if the L_PSAVER processing step has terminated. When the time stamp L_TI_GT is chosen properly, the processing step can be restarted from the place where it terminated.
You can use table RSTS to determine the time stamp: identify the relevant entry by the user (field TSTPNM) or the last activation time (field TIMESTMP), and transfer the value of field TIMESTMP to the parameter L_TI_LT.
- L_TI_GT (time stamp format: yyyymmddhhmmss)
Call transaction SE38 for the program RSTMPL81. Choose the "Utilities" menu, then "Versions" -> "Version management". The date and time of the active version is the required time stamp.
- L_SRCSYS (Select option for source systems)
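For orientation, the 14-digit time stamp expected by L_TI_LT and L_TI_GT can be reproduced in a shell. This is a hedged sketch assuming GNU date; the sample date corresponds to the example value above:

```shell
# Build a yyyymmddhhmmss time stamp as used by L_TI_LT / L_TI_GT.
# GNU date is assumed; TZ=UTC makes the result reproducible.
TZ=UTC date -d '2008-03-25 15:08:13' +%Y%m%d%H%M%S   # prints 20080325150813
```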
- 5. Special Database Platform DB2
--------------------------------
- 6. Special Database Platform DB4
--------------------------------
- Also use the "Installation Guide" specific to the platform and product. You can find the Installation Guide on the SAP Service Marketplace.
- For the migration, you can use any installation master DVD for Release 7.00 or 7.10 on IBM i.
- Refer to Note 744735, which provides an overview of special features of the migration with target platform IBM DB2 for i5/OS.
- Note 782993 describes how, when you execute a homogenous migration of an SAP product to DB2/400, Basis Release 6.20 and higher, you can change the storage parameters of database objects (such as tables and indexes) without causing any additional downtime.
- Heterogeneous system copy to target DB2/400 or platform-specific actions after the import:
- For performance reasons, when you are later using the BW system, we strongly recommend that you set up the EVI stage 2 support as outlined in Note 501572. However, do not start programs 'SAP_INFOCUBE_INDEXES_CHG_DB4' and 'SAP_INFOCUBE_INDEXES_REPAIR' that are described in the note.
- The next step is to set up the entire secondary database indexes of the BW fact tables. This procedure can run in parallel if you use Symmetric Multi Processing (SMP). SMP is an additional product for DB2/400. If you operate SMP you must implement Note 601110 in your system, so that the BW indexes can be set up in parallel.
- You can now use the program SAP_SANITY_CHECK_DB4 to check the settings for the EVI stage 2 support and to check the usage of SMP. See Note 541508 for more detailed information.
- If you decide to activate the SMP, you must set up the missing indexes. To do this, start program 'SAP_INFOCUBE_INDEXES_REPAIR' in the background.
- 7. Special Database Platform DB2 for LUW (DB6)
----------------------------------------------
- If you want to migrate to DB6, the system should run on at least SAP_BW Support Package 18 and SAP_BASIS Support Package 17 or higher. It is important that Notes 1273355 and 1335017 are implemented in your system. If Note 1335017 is not implemented, you must set RSADMIN parameter DB6_REMOVE_REDUNDANT to 'No'.
- Note 1281430 contains a report that checks whether the prerequisites for a migration to DB6 are met and lists the important notes that should be implemented before the migration. We recommend that you execute the report and implement the proposed notes.
- Possible data loss when loading F fact tables with incorrect primary indexes:
When you implement the correction from Note 1376507, R3load terminates when duplicate data records exist. Note 1376507 also describes how you can recognize whether data records have disappeared.
- If you migrate to a DB6 database with several DPF partitions and you want to compress tables directly while loading the data, you must refer to Note 1156514 (DB6: Possible data loss during system copy of SAP BI).
- Using the compression
- If you want to create fact tables, Persistent Staging Area (PSA) tables, and DataStore object tables as compressed in the DB6 target database, use Note 1450612. If you have not yet implemented the note, in the source system set the RSADMIN parameter DB6_ROW_COMPRESSION=YES before you call SMIGR_CREATE_DDL.
- Use of multidimensional clustering (MDC)
SAP NetWeaver BW 7.00 supports DB2 multidimensional clustering (MDC) for InfoCubes, DataStore objects, and PSA tables. Take the following points into account when migrating to DB6:
- We recommend MDC if you use DB2 9.7. In this way, you can greatly accelerate the deletion of InfoPackages (among other things). These functions are available as of DB2 V9.5. However, in DB2 V9.5 and older DB6 releases, there is the restriction that MDC tables can only be reorganized offline to return free space to the table space.
- If you use DB2 Version 9.1 up to and including FixPak 4, you cannot use DB2 LOAD to load MDC tables while using the row compression feature. In this case, data may be corrupted as described in DB2 APAR IZ21943. FixPak 5 solves the problem. We recommend that you use INSERT instead of LOAD.
- See Notes 1023410 and 1450612 for converting range partitioning of InfoCubes in the source system to MDC on the DB6 database.
- Use Note 1450612 to generate PSA tables and DataStore object tables with MDC. If you have not yet implemented the note, in the source system set the RSADMIN parameter DB6_MDC_FOR_PSA=YES before you call SMIGR_CREATE_DDL.
- Possible performance problem when loading data to MDC tables with the LOAD utility:
Under certain circumstances, loading data to tables that use MDC may take longer than a "LOAD" to non-MDC tables.
This problem occurs under the following conditions:
- You have chosen "LOAD" as the method for loading data to the MDC table.
- The input data for the table is not sorted according to MDC columns, for example, if the data is sorted according to the primary key, but the MDC columns are not the first columns in the primary key.
Solution: To improve the LOAD performance of unsorted data in MDC tables, increase the value of the database configuration parameter "UTIL_HEAP_SZ" to approximately 250,000 4 KB pages.
Also set the value of the "DATA BUFFER" parameter for the "LOAD Utility" to at least 200,000 4 KB pages. Note that you cannot start multiple R3load processes with these settings because the available utility heap is already completely used for the "DATA BUFFER" of the first process.
If you use the R3load utility (which calls a CLI version of the "LOAD"), you can set the shell environment variable DB6LOAD_DATA_BUFFER_SIZE to the required "DATA BUFFER" value (approx. 200,000 pages). R3load will then transfer the value from the environment variable to the "LOAD Utility". See Note 454173 for more information about the environment variable DB6LOAD_DATA_BUFFER_SIZE.
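The tuning steps above can be sketched in a shell as follows. This is a hedged example: the db2 configuration command is shown as a comment because it requires a DB2 instance environment, and the database name BWP is a placeholder:

```shell
# Hedged sketch of the LOAD tuning described in this note.
# On the database server (as the DB2 instance owner; BWP is a placeholder SID):
#   db2 "UPDATE DB CFG FOR BWP USING UTIL_HEAP_SZ 250000"   # ~250,000 4 KB pages
# In the shell that starts R3load, request ~200,000 pages of DATA BUFFER:
export DB6LOAD_DATA_BUFFER_SIZE=200000
echo "DB6LOAD_DATA_BUFFER_SIZE=$DB6LOAD_DATA_BUFFER_SIZE"
```

Because the utility heap is consumed by the first process's DATA BUFFER, start only one R3load process with these settings, as noted above.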
The performance problem mentioned above does not occur if the input data for an MDC table is sorted according to the MDC columns or the target table for loading data does not have MDC.
For more information about MDC, refer to the manual "SAP NetWeaver Business Warehouse 7.0 and higher - Administration Tasks: IBM DB2 for Linux, UNIX, and Windows" on SAP Service Marketplace under:
http://service.sap.com/instguidesnw70 -> Operations -> Database-Specific Guides
- More important corrections and enhancements:
- Note 1399649 prevents a termination of SMIGR_CREATE_DDL if the data classes for DB6 are not maintained in the source system.
- Note 1397192 repairs indexes on DataStore object (DSO) tables in the ABAP Dictionary that are created differently on the DB6 database than on the source database. This applies to write-optimized DSOs in particular.
- Note 1309055 solves the problem that CREATE TABLE commands with the incorrect syntax are created under certain circumstances.
- With Note 1376813, the DB6 database is checked for consistency (existence of correct distribution keys for DPF and the correct indexes) in the report RS_BW_POST_MIGRATION. Possible errors are logged in the log of RS_BW_POST_MIGRATION.
- With Note 1450612, options for compression and MDC for DDL generation are proposed in the report SMIGR_CREATE_DDL. You can get the list of options from the F4 help for the "Database Version" field. The selected options are also written as comments in the generated .SQL files. The new options replace the manual setting of RSADMIN parameters before executing the report.
- 8. Special Database Platform MSSQL
----------------------------------
- Detaching the database from the source system and attaching it to the target system
Refer to Note 151603 for more information about this procedure.
Note that this type of system copy cannot be used if
- you want to downgrade to a lower SQL Server version,
- you also want to carry out a Unicode migration during the course of the system copy, or
- you want to use SQL Server table partitioning (see Note 869407) in the target system. Existing tables are not converted with this "Detach/Attach" method; this means that they are not automatically partitioned in the target system.
For the specified cases, a system copy via R3Load, as described below, is required.
- R3Load-based system copy along with the export of the structure and data from the source system, and the subsequent import to the target system.
- First, note the general recommendations for a migration or system copy on the MSSQL server platform in Note 1054852.
- In your source system, ensure that you have already implemented the notes listed below, including the corrections for your BASIS or BW components, or that the system is at the corresponding Support Package level. These advance steps are required to guarantee that the program SMIGR_CREATE_DDL generates a correct export of your database structure.
SAP_BASIS 6.20:
- 965695 for Support Package 59 or lower
SAP_BW 30B:
- 960504 for Support Package 31 or lower
SAP_BASIS 6.40:
- 1010854 for Support Package 19 or lower
- 996263 for Support Package 19 or lower
- 962124 for Support Package 17 or lower
- 965695 for Support Package 18 or lower
SAP_BW 310:
- 960504 for Support Package 25 or lower
SAP_BW 350:
- 960504 for Support Package 17 or lower
SAP_BASIS 7.00:
- 1157904 for Support Package 15 or lower
- 1010854 for Support Package 11 or lower
- 1460372 for Support Package 21 or lower
- 1444413 for Support Package 22 or lower
- 996263 for Support Package 11 or lower
- 962124 for Support Package 9 or lower
- 965695 for Support Package 9 or lower
- 1581700 for Support Package 24 or lower
SAP_BW 7.00:
- 1364683 for Support Package 21 or lower
SAP_BASIS 7.01:
- 1460372 for Support Package 9 or lower
- 1444413 for Support Package 7 or lower
- 1581700 for Support Package 9 or lower
SAP_BW 7.01:
- 1364683 for Support Package 4 or lower
SAP_BASIS 7.02:
- 1460372 for Support Package 4 or lower
- 1444413 for Support Package 4 or lower
- 1581700 for Support Package 08 or lower
SAP_BW 7.02:
- 1364683 for Support Package 1 or lower
SAP_BASIS 7.10:
- 1460372 for Support Package 11 or lower
- 1444413 for Support Package 10 or lower
- 1581700 for Support Package 12 or lower
SAP_BASIS 7.11:
- 1460372 for Support Package 5 or lower
- 1444413 for Support Package 6 or lower
- 1581700 for Support Package 7 or lower
SAP_BW 7.11:
- 1364683 for Support Package 3 or lower
SAP_BASIS 7.30:
- 1581700 for Support Package 02
Starting with the Support Package Stacks below, a completely revised version of SMIGR_CREATE_DDL is available for the target platform MSSQL. This involves a reimplementation of the back end; the user interface does not change.
- SAP NetWeaver 7.00: SPS 27 (SAP_BASIS SP 27 / SAP_BW SP 29)
- SAP NetWeaver 7.01: SPS 12 (SAP_BASIS SP 12 / SAP_BW SP 12)
- SAP NetWeaver 7.02: SPS 12 (SAP_BASIS SP 12 / SAP_BW SP 12)
- SAP NetWeaver 7.11: SPS 10 (SAP_BASIS SP 10 / SAP_BW SP 10)
- SAP NetWeaver 7.30: SPS 08 (SAP_BASIS SP 08 / SAP_BW SP 08)
- SAP NetWeaver 7.30: SPS 04 (SAP_BASIS SP 04 / SAP_BW 730 SP 04)
- SAP NetWeaver 7.31/7.03: SPS 01 (SAP_BASIS SP 01 / SAP_BW 731 SP 01)
For more details about the new implementation of SMIGR_CREATE_DDL, read SAP FAQ Note 1593998.
- Proceed with subitem a) or b) depending on which describes your current situation.
Note the following: If you do not explicitly select an SQL Server release as your target system on the input screen of SMIGR_CREATE_DDL, the program uses SQL Server 2000 as the target release. Therefore, select the target release explicitly.
a) System copy from an SQL server source system to an SQL server target system using R3Load
In keeping with the instructions in Note 961591, raise the Support Package level of the source system accordingly to be able to start with the R3load system copy. Install the Support Package that corresponds to the release for your BASIS components.
SAP_BASIS 6.20 Support Package 60
SAP_BASIS 6.40 Support Package 18
SAP_BASIS 7.00 Support Package 09
b) System copy from a non-SQL server source system to an SQL server target system using R3Load
Table partitioning in SAP BW (relevant for system copies from ORACLE to SQL server 2005 or higher)
As of May 2010, SQL Server supports a maximum of 1,000 partitions per table. Since ORACLE supports the creation of more than 1,000 partitions, ensure that the BW cubes and their aggregates contain no more than 800 non-compressed loading processes. For ORACLE, you can check this with the following SQL*Plus command:
SELECT table_name FROM user_part_tables WHERE partition_count >= 800 AND table_name LIKE '/%';
If the cubes and aggregates contain more than 1000 partitions, you must compress as many loading processes as necessary so that a maximum of 800 non-compressed loading processes for each cube and aggregate remain. Further details about partitioning on the SQL server are in Note 869407.
Starting with SQL Server 2008 Service Pack 2, SQL Server also supports more than 1000 partitions. Read Note 1494789 for more information. This may mean that cubes and aggregates do not have to be compressed.
- Once all MSSQL-specific advance steps from the sections above are complete, continue with section 2 of the general part of this note and execute the program SMIGR_CREATE_DDL accordingly.
- 9. Special Database Platform MAXDB
----------------------------------
- If you perform a system copy or migration using the Unicode conversion on MaxDB-based systems, implement Note 1002250 during the export preparation.
- If problems occur during the import while report UMG_POOL_TABLE is running, see Note 1002250.
- A correction in Basis Support Package 6 (SAPKB70006) prevents incorrect SQL commands for creating indexes from being generated in SMIGR_CREATE_DDL. See Note 896023.
- To support MaxDB clustering, implement Notes 1020006 and 1095135 in the source system before SMIGR_CREATE_DDL is executed.
- Note 1014782 comprises general questions and answers about MaxDB/liveCache system copy.
- Refer to Note 1529810 if you want to use the bridge/foundation solution.
- 10. Special Database Platform ORACLE
-----------------------------------
Program SMIGR_CREATE_DDL
You do not have to specify a database version.
When you make a homogeneous or heterogeneous system copy without changing the database platform, the necessary information for generating DDL statements is read directly from the system catalog. To do this, the system checks all database objects to determine whether their storage parameters differ from those of the standard system. This applies, for example, to partitioned tables and bitmap indexes. Note 519407 describes how you can accelerate processing during the DDL generation. Ensure that the cubes are correctly condensed (see Notes 407260 and 5900370) so that your system performance is optimal.
See Note 813562 if you use ASSM on your Oracle database.
- 11. Special Database Platform Sybase ASE
---------------------------------------
1605169: "SYB: SAP BW7.02 Correction Collection"
1608417: "SYB: SAP BW7.30 Correction Collection"
1616726: "SYB: SAP BW7.31 Correction Collection"
Program SMIGR_CREATE_DDL
You do not have to specify a database version.
Only a Unicode migration is supported with the target Sybase ASE.
You can use the program RS_BW_PRE_MIGRATION to identify potential problems before the migration or before the generation of the DDL statements in the source system of the migration (see SAP Note 1690674).
In addition, you should refer to SAP Note 1691300 "SYB: Unpartitioned F fact tables for InfoCubes". It describes how you can migrate F fact tables with a very large number of requests without table partitioning if required.
After the successful migration, check immediately in the new system that the threshold values for splitting complex BW queries do not exceed the database limit for Sybase ASE. To do this, call transaction RSCUSTV25 as described in SAP Note 514907 "Processing complex queries (data mart, and so on)". Neither threshold value may exceed 50; otherwise, terminations occur. The default values "Query-SQL-Split" = 50 and "Datamart-Query-Split" = 20 have proven to be the best starting values.
- 12. Special SAP Notes for the SAP HANA database platform
---------------------------------------------------
Refer to the following SAP Notes. We strongly recommend that you implement the correction instructions in the source system using SNOTE before the migration:
1644396: SMIGR: No data export for aggregate tables
1634222: BW corrections for SMIGR_CREATE_DDL
1640703: This SAP Note concerns SAP HANA DB-specific enhancements in RS_BW_POST_MIGRATION.
1657995: Corrections for RS_BW_POSTMIGRATION
1658360: Heterogeneous system copy: "column list already indexed"
Note the following with regard to a homogeneous system copy; that is, with SAP HANA database as the source and target database:
The data of the change logs of SAP HANA DB-optimized DSOs is not exported; this means that the change logs of SAP HANA DB-optimized DSOs are empty after a homogeneous system copy. The administrative and status data of the corresponding change logs must be corrected in the post-migration phase. For more information, see SAP Note 1697865.
Header Data
Release Status: Released for Customer
Released on: 06.09.2013 14:22:33
Master Language: German
Priority: Correction with medium priority
Category: Installation information
Primary Component: BW-SYS Basis System and Installation
Secondary Components:
- BW-SYS-DB-ORA BW ORACLE
- BW-SYS-DB-MSS BW Microsoft SQL Server
- BW-SYS-DB-DB2 BW DB2 for z/OS
- BW-SYS-DB-DB4 BW DB2 for AS/400
- BW-SYS-DB-DB6 BW DB2 Universal Database
- BC-INS Installation Tools
- BC-DB-HDB SAP HANA database
- BW-SYS-DB-HDB BW HDB
- BW-SYS-DB-SYB BW on Sybase ASE Database Platform