SAP BI INTERVIEW QUESTIONS
Q1: HOW IS LO EXTRACTION SET UP?
Go to Transaction LBWE (LO Customizing Cockpit)
1). Select the logistics application, e.g. SD Sales BW, and expand its Extract Structures
2). Select the desired Extract Structure and deactivate it first.
3). Give the Transport Request number and continue
4). Click on 'Maintenance' to maintain the extract structure
Select the fields of your choice and continue
Maintain DataSource if needed
5). Activate the extract structure
6). Give the Transport Request number and continue
The next step is to delete the setup tables
7). Go to T-Code SBIW
8). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Delete the content of Setup tables (T-Code LBWG)
vi. Select the application (01 – Sales & Distribution) and Execute
Now, fill the setup tables
9). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Filling the Setup tables
vi. Application-Specific Setup of statistical data
vii. SD Sales Orders – Perform Setup (T-Code OLI7BW)
Specify a run name, date, and time (use a future date/time so the setup run does not terminate prematurely)
Execute
Check the data in the setup tables with the extractor checker (RSA3)
Replicate the DataSource in BW
Use of setup tables:
You fill the setup tables in the R/3 system (the setup is done in SBIW) and extract the data to BW; after initializing the extractor (delta init), you can run delta extractions.
Full and init loads are always taken from the setup tables.
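Besides checking with RSA3, you can sanity-check a setup run by counting the records in the setup table directly. A minimal sketch, assuming the sales document header extract structure MC11VA0HDR (setup tables follow the naming pattern <extract structure>SETUP):

    REPORT z_check_setup_table.

    DATA lv_count TYPE i.

    * Setup tables are named <extract structure>SETUP, e.g. the
    * extract structure MC11VA0HDR fills table MC11VA0HDRSETUP
    * (assumption: application 11, DataSource 2LIS_11_VAHDR).
    SELECT COUNT( * ) FROM mc11va0hdrsetup INTO lv_count.

    WRITE: / 'Records in setup table:', lv_count.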
Q2: HOW DOES DELTA WORK FOR LO EXTRACTION, AND WHAT ARE THE UPDATE METHODS?
Type 1: Direct Delta
Each document posting is transferred directly into the BW delta queue. Each document posting with delta extraction leads to exactly one LUW in the respective BW delta queues.
Type 2: Queued Delta
Extraction data is collected for the affected application in an extraction queue. A collective run, as usual, transfers the data into the BW delta queue.
Type 3: Un-serialized V3 Update
Extraction data is written, as before, into the update tables with a V3 update module. A V3 collective run transfers the data to the BW delta queue. In contrast to the serialized V3 update, the collective run reads the data from the update tables without regard to sequence.
Q3: HOW TO CREATE A GENERIC EXTRACTOR?
1. Select the DataSource type and give it a technical name.
2. Choose Create.
The screen for creating a generic DataSource appears.
3. Choose an application component to which the DataSource is to be assigned.
4. Enter the descriptive texts. You can choose any text.
5. Choose from which datasets the generic DataSource is to be filled.
Choose Extraction from View if you want to extract data from a transparent table or a database view.
Choose Extraction from Query if you want to use an SAP Query InfoSet as the data source. Select the required InfoSet from the InfoSet catalog.
Choose Extraction using FM if you want to extract data using a function module. Enter the function module and extract structure.
With texts, you also have the option of extraction from domain fixed values.
6. Maintain the settings for delta transfer where appropriate.
7. Choose Save.
When extracting with SAP Query, see the section SAP Query: Assigning to a User Group.
Note when extracting from a transparent table or view:
If the extract structure contains a key figure field that references a unit of measure or currency field, this unit field must appear in the same extract structure as the key figure field.
A screen appears in which you can edit the fields of the extract structure.
8. Choose DataSource → Generate.
The DataSource is now saved in the source system.
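For the Extraction using FM option, the function module must implement the generic extraction interface. Below is a minimal sketch modeled on the SAP template function module RSAX_BIW_GET_DATA_SIMPLE; the function module name, the table ZSALES, and the extract structure ZOXSALES are hypothetical, and the translation of the I_T_SELECT selections into a WHERE clause is omitted:

    FUNCTION z_bw_get_sales_data.
    *"  Interface modeled on RSAX_BIW_GET_DATA_SIMPLE:
    *"  IMPORTING I_DSOURCE, I_MAXSIZE, I_INITFLAG ...
    *"  TABLES    I_T_SELECT, I_T_FIELDS, E_T_DATA STRUCTURE ZOXSALES
    *"  EXCEPTIONS NO_MORE_DATA, ERROR_PASSED_TO_MESS_HANDLER

      STATICS: s_cursor TYPE cursor,
               s_opened TYPE abap_bool.

      IF i_initflag = 'X'.
        " Initialization call: check the parameters, nothing is read yet.
        RETURN.
      ENDIF.

      IF s_opened IS INITIAL.
        " First data call: open a cursor. Real code would translate
        " the I_T_SELECT selections into a WHERE clause here.
        OPEN CURSOR WITH HOLD s_cursor FOR
          SELECT * FROM zsales.
        s_opened = 'X'.
      ENDIF.

      " Each call hands over one package of at most I_MAXSIZE records.
      FETCH NEXT CURSOR s_cursor
        APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
        PACKAGE SIZE i_maxsize.
      IF sy-subrc <> 0.
        CLOSE CURSOR s_cursor.
        RAISE no_more_data.   " tells the service API that extraction is done
      ENDIF.

    ENDFUNCTION.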
Q4: HOW TO ENHANCE A DATASOURCE?
Step 1: Go to T-Code CMOD and choose the project you are working on.
Step 2: Choose the exit which is called when the data is extracted.
Step 3: There are two options:
- Normal approach: the enhancement logic is written directly in the CMOD code
- Function module approach: the CMOD code only delegates to a function module
Step 4: In this step we create a function module for each DataSource: a new FM (function module) in SE37.
Data Extractor Enhancement - Best Practice/Benefits:
This is the best practice for DataSource enhancement and has the following benefits:
- No more locking of the CMOD code by one developer, blocking others from enhancing other extractors.
- Testing an extractor becomes independent of the others.
- A faster and more robust approach (see the sketch below).
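A minimal sketch of the function module approach in the user exit for transaction data (EXIT_SAPLRSAP_001, whose customer include is ZXRSAU01); the per-DataSource function module Z_BW_ENH_2LIS_11_VAHDR is a hypothetical name:

    * Include ZXRSAU01, called from EXIT_SAPLRSAP_001 (transaction data).
    * I_DATASOURCE and C_T_DATA are parameters of the exit.
    CASE i_datasource.
      WHEN '2LIS_11_VAHDR'.
        " One function module per DataSource: developers no longer lock
        " each other in CMOD, and each FM can be tested in isolation.
        CALL FUNCTION 'Z_BW_ENH_2LIS_11_VAHDR'   " hypothetical FM
          TABLES
            c_t_data = c_t_data.
      WHEN OTHERS.
        " Other DataSources are left untouched.
    ENDCASE.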
Q5: WHAT ARE THE VARIOUS WAYS TO MAKE A GENERIC EXTRACTOR DELTA-ENABLED?
The delta-relevant field from the extraction structure of the DataSource must meet one of the following criteria:
1. The field is a time stamp. New records to be loaded into BW using a delta upload have a higher entry in this field than the time stamp of the last extraction.
2. The field is a calendar day. The same criterion applies to new records as for the time stamp field.
3. The field has another type. This case is only supported for SAP Content DataSources. Here, the maximum value to be read must be determined by a DataSource-specific exit at the start of data extraction.
For the time stamp case, see the conceptual sketch below.
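Conceptually, a time stamp based generic delta restricts each extraction to records whose time stamp lies above the pointer stored by the last delta. A minimal sketch of this selection logic, assuming a hypothetical custom table ZSALES_HDR with a change time stamp field AEDTM:

    REPORT z_generic_delta_concept.

    DATA: lt_data      TYPE STANDARD TABLE OF zsales_hdr, " hypothetical table
          lv_last_read TYPE timestamp,
          lv_upper     TYPE timestamp.

    lv_last_read = '20240101120000'.   " pointer stored from the last delta
    GET TIME STAMP FIELD lv_upper.     " current upper limit

    * Only records changed after the pointer and up to the upper limit
    * qualify for this delta; after a successful load the pointer is
    * moved forward to lv_upper.
    SELECT * FROM zsales_hdr
      INTO TABLE lt_data
      WHERE aedtm >  lv_last_read
        AND aedtm <= lv_upper.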
Q6: WHAT ARE SAFETY INTERVALS?
This field is used by DataSources that determine their delta generically using a monotonically increasing field in the extract structure. The field contains the discrepancy between the current maximum value at the time of the delta or delta-init extraction and the value up to which data has actually been read. Leaving the value blank increases the risk that the system fails to extract records that are created while the extraction is running.
Example:
A time stamp is used to determine the delta. The time stamp that was last read is 12:00:00. The next delta extraction begins at 12:30:00, so the selection interval is 12:00:00 to 12:30:00. At the end of the extraction, the pointer is set to 12:30:00.
A record (for example, a document) is created at 12:25 but not saved to the database until 12:35. It is not contained in the extracted data, and because its time stamp falls within the interval that has already been read, it is not extracted by the next delta either. An upper-limit safety interval of, say, five minutes closes this gap: the extraction then only reads up to 12:25:00 and sets the pointer there, so the late-saved record is picked up by the next delta.
Q7: HOW IS COPA DATASOURCE SET UP?
R/3 System
1. Run KEB0
2. Select Datasource 1_CO_PA_CCA
3. Select the field name for partitioning (e.g., company code)
4. Initialize
5. Select characteristics & Value Fields & Key Figures
6. Select Development Class/Local Object
7. Workbench Request
8. Edit your Data Source to Select/Hide Fields
9. Check the extractor with the extractor checker (RSA3) and extract
BW System
1. Replicate the DataSource
2. Assign an InfoSource
3. Transfer all DataSource elements to the InfoSource
4. Activate the InfoSource
5. Create a cube as InfoProvider (copy the structure from the InfoSource)
6. Go to dimensions: create, define, and assign them
7. Check and activate
8. Create update rules
9. Insert/modify key figures and write routines (constant, formula, ABAP)
10. Activate
11. Create an InfoPackage for initialization
12. Maintain the InfoPackage
13. On the Update tab of the InfoPackage, select Initialize Delta
14. Schedule and monitor
15. Create another InfoPackage for delta
16. Check the DELTA option
17. Ready for delta loads
Q8: WHAT ARE VARIOUS WAYS TO TRACK DELTA RECORDS?
Ans:
RSA7, LBWQ, IDocs, and SMQ1.
Q1: WHAT ARE START ROUTINES, TRANSFORMATION ROUTINES, END ROUTINES, EXPERT ROUTINES AND RULE GROUPS?
Start Routine
The start routine is run for each data package at the start of the transformation. The start routine has a table in the format of the source structure as input and output parameters. It is used to perform preliminary calculations and store these in a global data structure or in a table. This structure or table can be accessed from other routines. You can modify or delete data in the data package.
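A minimal sketch of a start routine body, as it would sit inside the method frame that the system generates in transformation maintenance; the fields /BIC/ZSTATUS and CALDAY, the value 'D', and the global variable g_max_date are hypothetical:

    * Body of the generated start routine method. SOURCE_PACKAGE holds
    * the current data package in the format of the source structure.

    * Drop records before the transformation runs at all,
    * e.g. logically deleted documents (hypothetical field/value).
    DELETE SOURCE_PACKAGE WHERE /bic/zstatus = 'D'.

    * Store a preliminary calculation in a global variable (assumed to
    * be declared in the global part) so later routines can use it.
    FIELD-SYMBOLS <ls_src> LIKE LINE OF SOURCE_PACKAGE.
    LOOP AT SOURCE_PACKAGE ASSIGNING <ls_src>.
      IF <ls_src>-calday > g_max_date.
        g_max_date = <ls_src>-calday.
      ENDIF.
    ENDLOOP.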
Routine for Key Figures or Characteristics
This routine is available as a rule type; you can define the routine as a transformation rule for a key figure or a characteristic. The input and output values depend on the selected field in the transformation rule.
End Routine
An end routine is a routine with a table in the target structure format as input and output parameters. You can use an end routine to postprocess data after transformation on a package-by-package basis. For example, you can delete records that are not to be updated, or perform data checks.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE). You have to use a dummy rule to override this.
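A minimal sketch of an end routine body in the same generated frame; RESULT_PACKAGE is typed with the target structure, and the fields /BIC/ZAMOUNT and /BIC/ZCHECKED are hypothetical:

    * Body of the generated end routine method. RESULT_PACKAGE holds the
    * transformed data package in the format of the target structure.

    * Post-processing check: drop records that fail validation.
    DELETE RESULT_PACKAGE WHERE /bic/zamount < 0.   " hypothetical field

    * Derive a flag on the remaining records (hypothetical field).
    FIELD-SYMBOLS <ls_res> LIKE LINE OF RESULT_PACKAGE.
    LOOP AT RESULT_PACKAGE ASSIGNING <ls_res>.
      <ls_res>-/bic/zchecked = 'X'.
    ENDLOOP.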
Expert Routine
This type of routine is only intended for use in special cases. You can use the expert routine if there are not sufficient functions to perform a transformation. The expert routine should be used as an interim solution until the necessary functions are available in the standard routine.
You can use this to program the transformation yourself without using the available rule types. You must implement the message transfer to the monitor yourself.
If you have already created transformation rules, the system deletes them once you have created an expert routine.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE).
Rule Group
A rule group is a group of transformation rules. It contains one transformation rule for each key field of the target. A transformation can contain multiple rule groups.
Rule groups allow you to combine various rules. This means that for a characteristic, you can create different rules for different key figures.
Q2: WHAT ARE THE DIFFERENT TYPES OF DSOs AND THEIR USAGE?
Ans: See this table:

Type | Structure | Data Supply | SID Generation | Example
Standard DataStore Object | Consists of three tables: activation queue, table of active data, change log | From data transfer process | Yes | Operational scenario for standard DataStore objects
Write-Optimized DataStore Object | Consists of the table of active data only | From data transfer process | No | Operational scenario for write-optimized DataStore objects
DataStore Object for Direct Update | Consists of the table of active data only | From APIs | No | Operational scenario for DataStore objects for direct update
Q3: WHAT IS COMPOUNDING?
You sometimes need to compound InfoObjects in order to map the data model. Some InfoObjects cannot be defined uniquely without compounding.
For example, if storage location A for plant B is not the same as storage location A for plant C, you can only evaluate the characteristic Storage Location in connection with Plant. In this case, compound the characteristic Storage Location with Plant so that the characteristic is unique.
Using compounded InfoObjects extensively, particularly if you include a lot of InfoObjects in the compounding, can affect performance. Do not try to model hierarchical links through compounding; use hierarchies instead.
A maximum of 13 characteristics can be compounded for an InfoObject. Note that the concatenated value, that is, the total length of the characteristics in the compounding plus the length of the characteristic itself, can have a maximum of 60 characters.
Q4: WHAT IS LINE ITEM DIMENSION AND CARDINALITY?
1. Line item:
This means the dimension contains precisely one characteristic, so the system does not create a dimension table. Instead, the SID table of the characteristic takes on the role of the dimension table. Removing the dimension table has the following advantages:
- When loading transaction data, no IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in the case where a degenerated dimension is involved.
- A table with a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler, and in many cases the database optimizer can choose better execution plans.
Nevertheless, it also has a disadvantage: a dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
It is recommended that you use DataStore objects instead of InfoCubes for line items where possible.
2. High cardinality:
This means that the dimension has a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform; for example, different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the number of fact table entries. If you are unsure, do not select high cardinality for the dimension.
Q5: WHAT IS REMODELING?
You want to modify an InfoCube into which data has already been loaded. You use remodeling to change the structure of the object without losing data.
If you want to change an InfoCube into which no data has been loaded yet, you can simply change it in InfoCube maintenance.
You may want to change an InfoProvider that has already been filled with data for the following reasons:
- You want to replace an InfoObject in an InfoProvider with another, similar InfoObject: for example, you have created an InfoObject yourself but want to replace it with a BI Content InfoObject.
- The structure of your company has changed, and the changes to your organization make a different compounding of InfoObjects necessary.
Q6: HOW IS ERROR HANDLING DONE IN DTP?
At runtime, erroneous data records are written to an error stack if error handling for the data transfer process is activated. You use the error stack to update the data to the target destination once the errors are resolved.
With an error DTP, you can update the data records to the target manually or by means of a process chain. Once the data records have been successfully updated, they are deleted from the error stack. If there are any erroneous data records, they are written to the error stack again in a new error DTP request.
- On the Extraction tab page under Semantic Groups, define the key fields for the error stack.
- On the Update tab page, specify how you want the system to respond to data records with errors.
- Specify the maximum number of incorrect data records allowed before the system terminates the transfer process.
- Make the settings for the temporary storage by choosing Goto → Settings for DTP Temporary Storage.
- Once the data transfer process has been activated, create an error DTP on the Update tab page and include it in a process chain. If errors occur, start it manually to update the corrected data to the target.
Q7: WHAT IS THE DIFFERENCE IN TEMPLATE/REFERENCE?
If you choose a template InfoObject, you copy its properties and use them for the new characteristic. You can edit the properties as required.
Several InfoObjects can use the same reference InfoObject. InfoObjects of this type automatically have the same technical properties and master data.
Q1: WHAT IS THE USE OF CONSTANT SELECTION?
In the Query Designer, you use selections to determine the data you want to display at the report runtime. You can alter the selections at runtime using navigation and filters. This allows you to further restrict the selections.
The Constant Selection function allows you to mark a selection in the Query Designer as constant. This means that navigation and filtering have no effect on the selection at runtime. This allows you to easily select reference values that do not change at runtime.
E.g.: In the InfoCube, actual values exist for each period, while plan values exist only for the entire year and are posted in period 12. To compare the PLAN and ACTUAL values, you define a PLAN and an ACTUAL column in the query, restrict PLAN to period 12, and mark this selection as a constant selection. This means that you always see the plan values, whichever period you are navigating in.
Q2: WHAT IS CELL DEFINITION?
When you define selection criteria and formulas for structural components and there are two structural components in a query, generic cell definitions are created at the intersections of the structural components; these cell definitions determine the values presented in the cells.
Cell-specific definitions allow you to define explicit formulas and selection conditions for cells as well as implicit cell definitions. This means that you can override implicitly created cell values. This function allows you to design much more detailed queries.
In addition, you can define cells that have no direct relationship to the structural components. These cells are not displayed and serve as containers for help selections or help formulas.
Q3: HOW DO YOU CALCULATE THE DIFFERENCE BETWEEN TWO DATES IN A QUERY?
1. In the new formula window, right-click Formula Variable and choose New Variable.
2. Enter the variable name and description, and select Replacement Path in the Processing By field. Click Next.
3. On the Characteristic screen, select the date characteristic that represents the first date to use in the calculation.
4. On the Replacement Path screen, select Key in the Replace Variable With field. Leave all the other options as they are (the offset values are set automatically).
5. On the Currencies and Units screen, select Date as the Dimension ID.
Repeat the same steps to create a formula variable for the second date, and use the two variables in the calculation; subtracting one variable from the other returns the difference in days.