Extraction from CO and PA using SAP BW Continued

Update Model: Overview

To speed up access to data in SAP, CO-PA can also maintain summarized copies of segment tables and segment levels, called "summarization levels". Each summarization level contains all of the value fields and a subset of the characteristics. To ensure that a correct aggregate is formed, some detailed characteristics are excluded from each summarization level. If a characteristic X (e.g. the customer number) is contained in a summarization level, all of the characteristics derived from X (e.g. the customer group) should also be contained in the same summarization level. Summarization levels are updated from time to time in a mass update run, not each time a line item is posted. The update procedure is controlled by time stamp information stored in each line item and in the summarization level catalog. You can use the time stamp to determine at any time whether a certain line item has already been entered in a specific summarization level.
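The time stamp mechanism described above can be sketched in a few lines. This is a minimal Python illustration, not the actual CO-PA implementation; the class and field names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class LineItem:
    doc_no: str
    timestamp: int          # posting time stamp stored in the line item

@dataclass
class SummarizationLevel:
    name: str
    last_update_ts: int     # time stamp of the last mass update run (catalog entry)

def is_replicated(item: LineItem, level: SummarizationLevel) -> bool:
    """A line item is already contained in a summarization level exactly
    when its time stamp is not newer than the level's last update run."""
    return item.timestamp <= level.last_update_ts

def pending_items(items, level):
    """Line items the next mass update run still has to roll into the level."""
    return [i for i in items if not is_replicated(i, level)]
```

Comparing the two time stamps is all that is needed to decide, at any time, whether a line item is already reflected in a given summarization level.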

In BW, you define an Info Package for the initial upload and further Info Packages for delta updates from a CO-PA Info Source. When an Info Package is executed, the SAP BW Staging Engine calls the CO-PA transaction data interface. Once you have generated an Info Source, you must first perform the initial load; after that you can execute delta updates. Note that the CO-PA extraction program for SAP BW uses the same replication method as the update program for CO-PA summarization levels. On the BW side, only data that is "at least 30 minutes old" is received. Section 9 of note 21773 contains further details on this so-called "safety delta".
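The safety delta can be pictured as an upper bound on the selection interval. A hedged Python sketch; the 30-minute constant comes from the text, the function itself is illustrative:

```python
from datetime import datetime, timedelta

SAFETY_DELTA = timedelta(minutes=30)   # "at least 30 minutes old", see note 21773

def delta_selection(posting_times, last_extraction, now):
    """Select postings newer than the last extraction but at least 30 minutes
    old, so that postings still in flight are not missed by the delta run."""
    upper_bound = now - SAFETY_DELTA
    return [t for t in posting_times if last_extraction < t <= upper_bound]
```

A posting made 15 minutes before the extraction run therefore stays in the source system and is picked up by the next delta request.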

Data Staging: Overview

To provide Data Sources for BW, all CO-PA Data Sources (and the operating concern itself) must be generated. A CO-PA Data Source is defined at the operating concern and client level. The Data Source itself comprises the following information:

Name of the operating concern
Data source for full updates (summarization level, segment level, line item)
Subset of the characteristics for defining the degree of detail for replication purposes
Subset of the value fields
Update status (Info Source defined, full update running, delta update running, update completed, invalid)
Time stamp, which can be used to determine which data has already been replicated to BW.
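The bullet points above amount to a small record structure. A Python sketch of such a Data Source definition; the field names are invented for illustration, not the actual ABAP structures:

```python
from dataclasses import dataclass

@dataclass
class CopaDataSource:
    operating_concern: str       # name of the operating concern
    client: str
    source: str                  # 'summarization level', 'segment level' or 'line item'
    characteristics: list        # subset defining the degree of detail for replication
    value_fields: list           # subset of the value fields
    status: str = "Info Source defined"
    replication_ts: int = 0      # time stamp up to which data was replicated to BW
```

The status field and the replication time stamp are what allow delta requests to resume exactly where the previous extraction stopped.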

Overview: BW-IMG in the R/3 System

CO-PA in the BW IMG of the R/3 System (transaction SBIW -> Customer-defined Data Sources -> Profitability and Market Sector Analysis):
Procedure for setting up the replication model
This chapter provides the information needed at the start of the installation scenario.
Assigning Key Figures
In this step, you assign elements of an accounting schema to the predefined key figures. You can then include these fields in Data Sources.
Creating Data Sources

Create a Data Source to extract transaction data from costing-based or account-based CO-PA.
Tools - Displaying detailed information on the Data Source
Displaying details on the Data Source, such as header information, information on the delta process
and the field list.
Tools - Simulating the initialization of the delta process
An advanced tool for testing existing CO-PA installations that already contain high volumes of data.
Tools - Activating debugging support
This transaction is used exclusively by the SAP support team.

Creating Data Sources

Since an Info Source is always defined at the source system, operating concern and client level, a standard name such as 1_CO_PA_<%SY>_<%CL>_<%ERK> is always generated. You can change this name if necessary. The prefix 1_CO_PA, however, is mandatory. SAP recommends that you only extend the standard name.

Example: in an R/3 System with the ID P31 and client 100, the standard name of the Data Source for the IDES operating concern is 1_CO_PA_P31_100_IDES. Modify the name of the Data Source, if necessary, to distinguish it from other Data Sources. You can create a new Data Source with this name using the relevant button on the initial screen.
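The naming rule can be expressed as a one-line helper. A sketch; only the mandatory prefix and the system/client/concern pattern come from the text:

```python
def copa_datasource_name(system_id: str, client: str,
                         operating_concern: str, suffix: str = "") -> str:
    """Build the generated standard name 1_CO_PA_<sys>_<client>_<concern>.
    The prefix 1_CO_PA is mandatory; SAP recommends only extending the name."""
    return f"1_CO_PA_{system_id}_{client}_{operating_concern}" + suffix
```

For system P31, client 100 and operating concern IDES this yields the standard name from the example above.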

Data Source: Header Information

Some fields of the operating concern are not replicated 1:1 in BW.

The field PALEDGER (internal/external values 01/B0, 02/10, 03/B2, 04/12) is divided into currency type (field CURTYPE in the extract structure, InfoObject 0CURTYPE, values B0, 10) and reporting view (VALUTYP, 0VALUATION, values 0, 2).

The plan/actual indicator PLAID (values 0, 1) is converted into the value type for the reporting system (VALTP, 0VTYPE, values 010, 020).

The week ALTPERIO (internal/external values YYYYWWW/WWW.YYYY) is converted into the calendar week (CALWEEK, 0CALWEEK, internal/external values YYYYWW/WW.YYYY).
The fiscal year variant PERIV is added to the fiscal year FISYR to describe the period PERIO.
With account-based DataSources, the chart of accounts ACCPL is added to cost accounting COACC to describe the cost type CSTTYP.
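The PALEDGER split can be written as a small lookup table. The pairing of the four values with (CURTYPE, VALUTYP) below is inferred from the external values quoted above (B0/10 for the currency type, 0/2 for the valuation view) and should be treated as illustrative:

```python
# Inferred mapping: PALEDGER -> (CURTYPE, VALUTYP)
PALEDGER_SPLIT = {
    "01": ("B0", "0"),
    "02": ("10", "0"),
    "03": ("B0", "2"),
    "04": ("10", "2"),
}

def split_paledger(paledger: str):
    """Divide PALEDGER into currency type and valuation view, as done
    in the extract structure (fields CURTYPE and VALUTYP)."""
    return PALEDGER_SPLIT[paledger]
```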

The name of the DataSource is maintained only in the logon language.
The field name for partitioning can be released.

Data Source: Characteristics

The characteristics of the object table are the characteristics that are maintained in transaction KEQ3 (characteristics of the results objects) and that form the results objects. By default, all characteristics are selected, since this is based on the scenario "CO-PA Reporting in SAP BW", that is, "KE30 via BEx".

Exceptional cases with characteristics:

Customer hierarchy levels have KNA1 as their master data.
There is a special procedure for compound characteristics: you must create the InfoObjects, master data and text extractors.
The company code (COMCO) is not a compulsory field, but you must select it if the system contains an item with PALEDGER ≠ 01, or if the DataSource is account-based.
You must also specify a controlling area for account-based Data Sources.
The WBS (PSP) element is shown in its external format (24 characters) in SAP BW, since there are no 8-character WBS elements in SAP BW.


Extraction from CO and PA using SAP BW

The business scenario for extracting the data from SAP BW is as explained below. The ABC Company has implemented CO-PA and wants to use this data in the SAP Business Information Warehouse to compile reports for the management. The management requires analyses of the gross and net sales figures from the individual sales organizations. You are responsible for extracting the data from the R/3 System and importing it into BW. To do so, you have to create the necessary components in the R/3 and BW Systems.

Position of the CO-PA Application

CO-PA collects all of the OLTP data for calculating contribution margins (sales, cost of sales, overhead costs)
CO-PA also features powerful reporting tools and planning functions
The CO-PA reporting facility, however, is limited in two respects:
The integrated cross-application reporting concept is not as differentiated as in SAP BW.
The OLTP System is optimized for transaction processing, and a high reporting load would impact the overall performance of the system.
Using BW as a reporting solution for CO-PA eliminates these problems.

Flow of Actual Values

During billing in SD, revenues and discounts are transferred to profitability segments in Profitability Analysis. At the same time, sales quantities are valuated using the standard cost of goods manufactured, as specified in the cost component split from CO-PC. In Overhead Cost Controlling, primary postings are made to objects in Overhead Cost Controlling and assigned to the relevant cost object on a source-related basis. The actual cost of goods manufactured is also assigned to the cost object. At the same time, the performing cost centers are credited. From the point of view of profitability analysis, this leads to under- or over-absorption of the performing cost centers and to production variances for the cost objects involved (such as production orders).

The production variances calculated for the cost objects (in this case, production orders), i.e. the difference between the actual cost of goods manufactured and the standard costs, are divided into variance categories and settled to profitability segments. The overhead costs remaining for the objects in Overhead Cost Controlling are assigned to the source profitability segments.

What are the top products and customers in our different divisions? This is just one of the typical questions that can be answered with the Profitability Analysis (CO-PA) module.

The wide variety of analysis and planning functions in CO-PA allow you to plan, monitor, and control the success of your company in terms of product-oriented, customer-oriented and organizational dimensions of multidimensional profitability segments. 

Basic Concepts

Characteristics: Characteristics are levels on which information is required.
Example: divisions, regions, customer groups
Characteristic values: Characteristic values are values that a characteristic can assume.
Example for the characteristic 'region': eastern region, northern region
Profitability segment: A profitability segment is a combination of existing characteristic values.
Example: computer division, eastern region

Characteristics are the fields in an operating concern according to which the data can be differentiated in Profitability Analysis. Each characteristic in an operating concern has a series of valid characteristic values. A profitability segment is a fixed combination of valid characteristic values.
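A profitability segment as a fixed combination of valid characteristic values can be modelled directly. A minimal sketch with invented characteristic names:

```python
# Characteristics of a (toy) operating concern
CHARACTERISTICS = {"division", "region", "customer_group"}

def make_segment(**values):
    """A profitability segment is a fixed combination of characteristic
    values; reject fields that are not characteristics of the concern."""
    unknown = set(values) - CHARACTERISTICS
    if unknown:
        raise ValueError(f"not characteristics of the operating concern: {unknown}")
    return frozenset(values.items())

# Example from the text: computer division, eastern region
segment = make_segment(division="computer", region="eastern")
```

Using a frozenset makes the segment hashable, so equal combinations compare equal regardless of the order in which the values were supplied.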


Some characteristics are predefined in each operating concern. These include customer, material, company code, and others. You can call up a full list of the fixed characteristics by displaying the data structures in the operating concern. In addition to these fixed characteristics, you can define up to 50 of your own characteristics. In most cases, you will be able to satisfy your profitability analysis requirements with between 10 and 20 characteristics.

Descriptive attributes (customer, material, time) are stored as characteristics so that the data can be analyzed according to several dimensions. In addition to the independent characteristics which can be found on a sales document (customer, material, fiscal period, sales area, etc.), several characteristics can be derived from these (customer group, material hierarchy, sales hierarchy). When costs are posted from CO-OM, the most detailed characteristics are usually initial values (blank or zero), since the costs can only be properly assigned to objects that are less detailed. Marketing costs, for example, could be correctly assigned to the customer group and the article.

Value Fields

Key figures (revenue, cost of goods sold, overhead costs) are stored in value fields to make the contribution margin more transparent. Depending on the data source, some value fields are equal to zero, while others are not. In a sales document, for example, the sales quantity, revenue, rebate, cost of goods sold (calculated from the product costs in CO-PC) and any accruals are not equal to zero. When costs are posted from CO-OM, however, all of these value fields are equal to zero. Other fields that are used to record different cost elements (e.g. marketing costs) are not equal to zero in these data records.

Organizational Structure

The value fields and characteristics that are required to conduct detailed analyses of the contribution margin vary considerably both from industry to industry and between individual customers. In CO-PA, therefore, you can configure the structure of one or more operating concerns in each individual installation. An operating concern is an organizational structure that groups controlling areas together in the same way as controlling areas group companies together. Each installation usually comprises only one operating concern.

Since value fields and characteristics can be defined individually in each customer installation, it is not possible to ship all of the required data structures (and the programs for accessing these structures) with the R/3 installation CD. Instead, these structures have to be generated when CO-PA is configured (similar to an Info Cube in BW).

Database Structures in CO-PA

Line items are stored in separate tables: CE1xxxx (where xxxx is the name of the operating concern) contains the actual line items and CE2xxxx the plan line items. Note that these tables contain the value fields and characteristics and are therefore generated tables.
Line items contain some information at document level (CO-PA document number, sales document number, posting date) that, in most cases, is too detailed for analysis purposes. CO-PA therefore maintains an initial summarization of the data, which is used by all CO-PA functions (reporting, planning, assessments, settlements, realignments, etc.).
The characteristics that describe the market are first separated from the rest of the line items. Each combination of characteristic values is coded in a profitability segment number. The link between the profitability segment number and characteristic values is maintained in a separate table - the segment table CE4xxxx. Certain characteristics can be excluded from this process. These are then stored in the line items only and not in the segment table. These characteristics can only be analyzed to a limited extent. Characteristics that are differentiated to a large extent (such as the customer order number) are usually excluded to reduce the volume of data.
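The role of the segment table CE4xxxx, encoding each combination of characteristic values as a profitability segment number, can be sketched as a bidirectional lookup. This is illustrative, not the real table layout:

```python
class SegmentTable:
    """Sketch of CE4xxxx: assigns each combination of characteristic
    values a profitability segment number and allows the reverse lookup."""
    def __init__(self):
        self._by_combo = {}
        self._by_number = {}
        self._next_number = 1

    def segment_number(self, combo: dict) -> int:
        """Return the segment number for a combination, creating one
        the first time a new combination is seen."""
        key = frozenset(combo.items())
        if key not in self._by_combo:
            self._by_combo[key] = self._next_number
            self._by_number[self._next_number] = dict(combo)
            self._next_number += 1
        return self._by_combo[key]

    def characteristics(self, number: int) -> dict:
        """Reverse lookup: segment number back to characteristic values."""
        return self._by_number[number]
```

Storing the compact number instead of the full combination in every record is what reduces the data volume in the segment level.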

The value fields are summarized at the profitability segment and period levels (as well as other characteristics at the business transaction level: plan/actual indicator, record type, and plan version) and stored together with these fields in a second table known as the segment level CE3xxxx. This table contains the total values of the period for each profitability segment number. 
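The segment level CE3xxxx is then an aggregation of the line items over this key. A Python sketch with a single value field and invented field names:

```python
from collections import defaultdict

def build_segment_level(line_items):
    """Summarize a value field per (segment number, period, plan/actual
    indicator, record type, plan version): a sketch of CE3xxxx."""
    totals = defaultdict(float)
    for item in line_items:
        key = (item["segment"], item["period"], item["plan_actual"],
               item["record_type"], item["version"])
        totals[key] += item["revenue"]
    return dict(totals)
```

Each entry of the result holds the period total of the value field for one profitability segment number and business-transaction key.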

You can compare an operating concern (which is represented by the associated segment table and segment level) with an Info Cube. The Info Cube comprises a dimension table (the segment table) and a fact table (the segment level). Unlike the fact table of an Info Cube, the segment level key contains other keys (e.g. the record type) in addition to the key field from the segment table.

Characteristics in CO-PA correspond to the characteristics (or attributes) in Info Cubes; the value fields can be regarded as key figures with an additional summarization in each characteristic. Summarization levels for an operating concern have the same function as aggregates for an Info Cube. The only difference is that aggregates for Info Cubes are managed with the Info Cube itself (i.e. all aggregates always contain the same numbers as the Info Cube they are based on), while summarization levels are updated at regular intervals (usually every day).

Line items in CO-PA can be compared with the line items in the Operational Data Store (ODS). These are also comparable with line items in the communication structure directly before they are posted to an Info Cube.


SAP BW LO Data Extraction Part Three

Generally speaking, the volume of transaction data is considerable. A full update here is usually only justified the first time data is transferred, or if the statistics data has to be rebuilt following a system failure. Delta update mechanisms that restrict the volume of data to within realistic limits are therefore required to implement a performance-oriented, periodic data transfer strategy.

Therefore, when sales figures are updated every week in the Business Information Warehouse, only the sales document information that has been added or changed in the last week should be extracted. With the LO extraction method, this functionality is provided by the central delta management. Depending on how the V3 update is set up, document data (customer orders and so on) is processed when the update is started: the relevant (active) communication structures are filled and forwarded to the central delta management, which then 'parks' the data in the update table ARFCSDATA until it is requested by the BW System. The data remains in the table until the last request for it has been successfully processed.
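The 'parking' behaviour of the update table can be pictured as a queue that only deletes records once the last BW request has succeeded. A simplified sketch, not the actual qRFC mechanics:

```python
from collections import deque

class DeltaQueue:
    """Records are parked until the BW system confirms a successful request."""
    def __init__(self):
        self._parked = deque()
        self._in_flight = []

    def park(self, record):
        """Called when the V3 update forwards a filled communication structure."""
        self._parked.append(record)

    def request(self):
        """BW requests the delta: hand over all parked records, but keep
        them until the request is confirmed as successfully processed."""
        self._in_flight = list(self._parked)
        return self._in_flight

    def confirm(self):
        """Last request processed successfully: the records may be removed."""
        for record in self._in_flight:
            self._parked.remove(record)
        self._in_flight = []
```

Because records are only removed on confirmation, a failed request can simply be repeated without losing document changes.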

Compared to the transfer and information structure extraction, the new LO extractor has the following advantages:

Improved performance and reduced volumes of data
Simple handling
A single solution for all logistics applications
No use of LIS functionality

Advantages I: Performance and Data Volumes

Detailed extraction: You can deactivate the extraction of, for example, scheduling data ('thin extractors' lead to small upload volumes)
Document changes: only BW-relevant data changes are updated (smaller upload volumes)
LIS tables are not updated: Reduced data volumes due to removed redundant data storage
Update with batch processes: No overload of 'everyday work'

Advantages II: Single Handling

No LIS functionality: No knowledge of LIS Customizing, update settings, etc., is required
Function enhancement: Creating simply and without modifications
BW Customizing Cockpit: Central, single, maintenance tool for Logistics applications

(Design) Problems with transfer structures:
Technical restrictions (OSS, the Online Service System)
Delta tables (SnnnBIW1/-2): duplicated update, duplicated data storage

LO Extraction: Conversion Scenario

LO data extraction can be used to completely remove the extraction with transfer structures (for example, S260 - S264 in SD) and extraction using 'traditional' information structures in the LIS (for example, S001 - S006 in VIS), without missing out on any information. This means that more information with more detail can be loaded into the SAP BW System with LO extraction.

To go from the Info Sources that you want to replace to the new Data/Info Sources, you must perform
the steps as described below.

Steps 1-5:
Transfer new Content objects from Content (in OLTP (Data Sources) and in BW (Info Sources, update rules, transfer rules, ...))
Find time frames in which there are no updates for the info structures you want to replace.
Steps 6-8:
IMPORTANT: Make sure that no R/3 data is updated for the DataSources in question while you carry out these steps.

Switch off the update for LIS extraction and switch on the update for LO extraction (transaction SBIW). Load the delta queue of the old Info Source into BW so that all delta records of the old source arrive in BW and are posted there (important so that no changes or new records are lost and the data stays consistent).

Do not refresh the statistics for the new Data Sources, or else records will be posted twice in the Info Cube.

Steps 9-11: Simulate an initial load for the new Info Sources' delta upload (option in the Info Package). This simulation run must 'pass green'. The update can now take place. When the V3 update has been scheduled for the new Data Sources, the delta queue is filled and the data can be loaded for the new Data Sources.

Related Posts

SAP BW LO Data Extraction Continued

Setup Extraction

When you extract transaction data with the LO method, you need a so-called extraction setup, similar in its usage to the LIS method for refreshing statistical data. The extraction setup reads the data set you want to edit (for example, customer orders with the tables VBAK, VBAP and so on) and fills the relevant communication structure. The data is stored in cluster tables (_SETUP), from where it is read when initialization is run. You can also carry out a full update into BW with the setup tables.

Creating Info Packages

Data requests sent to the source system are managed with the Scheduler. These requests are formulated for each Info Source and source system and are divided into master and transaction data. An Info Source in a source system can have several different extraction jobs with different selection criteria. These are referred to as Info Packages.

Info Packages or groups of Info Packages can be distributed to background processes using the R/3 job planning facility. This means that you can fully automate the data transfer process and decide when and at what intervals data is to be extracted. Or, for example, you can use a factory calendar to specify that data is not to be extracted on public holidays, and so on. The term 'transaction' in the Business Information Warehouse covers data retrieval in the OLTP and the successful updating of this data to the relevant Info Cubes. You can monitor the transfer and processing of the data closely using the Monitor function in the Administrator Workbench. You can compile lists containing information on Info Package processing for each user, source system, day, etc.


Initializing the Delta Process (Scheduler)

If you want to transfer data in the delta process, the process must be initialized with the first data request. In this step, the selection conditions for the subsequent delta uploads are set, and the complete data set is loaded into BW from the R/3 System. To do this, select the check box 'Initialize delta process' for the Info Package under the update parameters of the Scheduler.

Monitoring the Upload Procedure

The data transfer procedure is handled by a series of IDocs that facilitate communication and data transfer across system boundaries. These IDocs are divided into info IDocs and data IDocs. The task of an individual info IDoc is indicated by its status:
Status 0: data request received by the source system; the source system starts to select data.
Status 1: a data IDoc that contains further data-processing information.
Status 5: an error has occurred in the source system.
Status 9: data selection in the OLTP System has been successfully completed.
With data IDocs, you can display a list of the processing steps carried out to date. These steps are described by call numbers, such as:
50: start of update
51: determine the InfoCubes to be updated
52: InfoCube successfully updated
70: end of IDoc processing
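The status values and call numbers above are plain lookup data; a small table makes them easy to decode in monitoring scripts. The descriptions are taken from the list above; the function name is illustrative:

```python
INFO_IDOC_STATUS = {
    0: "data request received by source system; data selection starts",
    1: "data IDoc containing further data-processing information",
    5: "an error has occurred in the source system",
    9: "data selection in the OLTP System successfully completed",
}

DATA_IDOC_CALLS = {
    50: "start of update",
    51: "determine InfoCubes to be updated",
    52: "InfoCube successfully updated",
    70: "end of IDoc processing",
}

def describe_status(status: int) -> str:
    """Decode an info IDoc status into the description from the text."""
    return INFO_IDOC_STATUS.get(status, "unknown status")
```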


The traffic light changes color depending on the context.
Note that a red light does not necessarily mean that an error has occurred.

Delta Update (Scheduler)

Generally speaking, the volume of transaction data is considerably higher. Delta update mechanisms that restrict the volume of data to within realistic limits are therefore required to implement a performance-oriented, periodic data transfer strategy. Therefore, when sales figures are updated every week in the Business Information Warehouse, only the sales document information that has been added or changed in the last week should be extracted. In this case, you have to activate the 'Delta Update' check box for the Info Package in the updating parameters of the Scheduler.
