Sunday, November 18, 2012

SAP BI INTERVIEW QUESTIONS

Q1:  WHAT ARE THE STEPS INVOLVED IN LO EXTRACTION?
Go to transaction LBWE (LO Customizing Cockpit)
          1). Select the logistics application,
               e.g. SD Sales BW
                      Extract Structures
          2). Select the desired extract structure and deactivate it first.
          3). Enter the transport request number and continue.
          4). Click on 'Maintenance' to maintain the extract structure:
              select the fields of your choice and continue.
               Maintain the DataSource if needed.
          5). Activate the extract structure.
          6). Enter the transport request number and continue.
               The next step is to delete the contents of the setup tables.
          7). Go to T-code SBIW.
          8). Select Business Information Warehouse
                    i. Settings for Application-Specific DataSources
                    ii. Logistics
                    iii. Managing Extract Structures
                    iv. Initialization
                    v. Delete the Contents of the Setup Tables (T-code LBWG)
                    vi. Select the application (01 – Sales & Distribution) and execute.
                 Now fill the setup tables:
          9). Select Business Information Warehouse
                    i. Settings for Application-Specific DataSources
                    ii. Logistics
                    iii. Managing Extract Structures
                    iv. Initialization
                    v. Filling in the Setup Tables
                    vi. Application-Specific Setup of Statistical Data
                    vii. SD Sales Orders – Perform Setup (T-code OLI7BW)

                  Specify a run name and a termination date/time (set a date and time in the future).

                  Execute.

                  Check the data in the setup tables with the extractor checker (RSA3).
                  Replicate the DataSource in BW.

Use of setup tables:

          You fill the setup tables in the R/3 system (via SBIW) and extract that data to BW; after initializing the extractor (delta init), subsequent delta extractions are taken from the delta queue.
          Full loads and delta initializations are always read from the setup tables.



Q2: HOW DELTA WORKS FOR LO EXTRACTION AND WHAT ARE UPDATE METHODS?           
Type 1: Direct Delta
Each document posting is transferred directly into the BW delta queue.
Each document posting with delta extraction leads to exactly one LUW in the respective BW delta queue.
Type 2: Queued Delta
Extraction data is collected for the affected application in an extraction queue.
A collective run then transfers the data into the BW delta queue, as usual.
Type 3: Un-serialized V3 Update
Extraction data is written, as before, into the update tables with a V3 update module.
A V3 collective run transfers the data into the BW delta queue.
In contrast to the serialized V3 update, the collective run reads the data from the update tables without regard to sequence.

Q3: HOW TO CREATE GENERIC EXTRACTOR?
1.  Select the DataSource type (transaction data, master data attributes, or texts) and give it a technical name.
2.  Choose Create.
      The screen for creating a generic DataSource appears.
3.  Choose the application component to which the DataSource is to be assigned.
4.  Enter the descriptive texts. You can choose any text.
5.  Choose the dataset from which the generic DataSource is to be filled:
     Choose Extraction from View if you want to extract data from a transparent table or a database view.
     Choose Extraction from Query if you want to use an SAP Query InfoSet as the data source. Select the required InfoSet from the InfoSet catalog.
     Choose Extraction using FM if you want to extract data using a function module. Enter the function module and the extract structure.
     For texts, you also have the option of extraction from domain fixed values.
6.  Maintain the settings for delta transfer where appropriate.
7.  Choose Save.
     When extracting with an InfoSet, note SAP Query: Assigning to a User Group.

 
Note: when extracting from a transparent table or view:

    If the extract structure contains a key figure field that references a unit of measure or currency field, this unit field must appear in the same extract structure as the key figure field.

A screen appears in which you can edit the fields of the extract structure.

8. Choose DataSource -> Generate.

   The DataSource is now saved in the source system.






Q4: HOW TO ENHANCE A DATASOURCE?

Step 1: Go to T-code CMOD and choose the project you are working on.
Step 2: Choose the exit that is called when the data is extracted.
Step 3: There are two options:
            Normal approach: all enhancement logic is written directly in the CMOD code.
            Function module approach: the CMOD code only dispatches to a function module.
Step 4: In the function module approach, we create one function module (in SE37) per DataSource and call it from the exit, as in the sketch below.
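
A minimal sketch of the function module approach, assuming the transaction-data exit include ZXRSAU01 (enhancement RSAP0001, exit EXIT_SAPLRSAP_001); the custom function module name is hypothetical:

CASE i_datasource.
  WHEN '2LIS_11_VAITM'.
    " Delegate to a DataSource-specific function module so that one
    " developer's change does not lock the shared CMOD include
    CALL FUNCTION 'ZBW_ENHANCE_2LIS_11_VAITM'   " hypothetical custom FM
      TABLES
        c_t_data = c_t_data.
  WHEN OTHERS.
    " Other DataSources are left untouched
ENDCASE.

Each DataSource gets its own Z-function module with a TABLES parameter typed to its extract structure, which keeps the exit itself trivial.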

Data Extractor Enhancement - Best Practice/Benefits:

This is considered the best practice for DataSource enhancement. It has the following benefits:

  • No locking of the shared CMOD include by one developer, which would stop others from enhancing other extractors.
  • Each extractor can be tested independently of the others.
  • A faster and more robust approach.

Q5: WHAT ARE VARIOUS WAYS TO MAKE GENERIC EXTRACTOR DELTA ENABLED?  
A generic DataSource can be made delta-enabled using a field from its extraction structure that meets one of the following criteria:
1. The field is of type time stamp. New records to be loaded into BW with a delta upload have a higher value in this field than the time stamp of the last extraction.
2. The field is of type calendar day. The same criterion applies to new records as for the time stamp field.
3. The field has another type. This case is only supported for SAP Content DataSources; the maximum value to be read must then be determined by a DataSource-specific exit at the start of the data extraction.
Q6: WHAT ARE SAFETY INTERVALS?
Safety intervals are used by DataSources that determine their delta generically using a monotonically increasing field in the extract structure.

The safety interval defines a margin between the current maximum value of this field at the time of the delta (or delta init) extraction and the value actually used as the selection limit, so that records created while the extraction is running are not lost.

If you leave the value blank, you increase the risk that records arising during the extraction are not extracted.

Example:

A time stamp is used to determine the delta. The time stamp that was last read is 12:00:00. The next delta extraction begins at 12:30:00, so the selection interval is 12:00:00 to 12:30:00. At the end of the extraction, the pointer is set to 12:30:00.
A record (for example, a document) is created at 12:25 but not saved to the database until 12:35. It is not contained in the extracted data, and because its time stamp lies below the new pointer, it is not extracted in the next delta either. With a safety interval lower limit of, say, 10 minutes, the next delta would select from 12:20:00 onwards and pick the record up; targets that overwrite, such as DataStore objects, absorb the resulting duplicates.


Q7: HOW IS COPA DATASOURCE SET UP?
R/3 System

1. Run KEB0
2. Select the DataSource (e.g. 1_CO_PA_CCA)
3. Select the field name for partitioning (e.g. company code)
4. Initialize
5. Select characteristics, value fields and key figures
6. Select development class / local object
7. Workbench request
8. Edit your DataSource to select/hide fields
9. Check the extractor in RSA3 and extract

BW System

1. Replicate the DataSource
2. Assign an InfoSource
3. Transfer all DataSource fields to the InfoSource
4. Activate the InfoSource
5. Create the InfoCube (copy the structure from the InfoSource)
6. Go to dimensions: create, define and assign them
7. Check and activate
8. Create update rules
9. Insert/modify key figures and write routines (constant, formula, ABAP)
10. Activate
11. Create an InfoPackage for initialization
12. Maintain the InfoPackage
13. On the Update tab, select "Initialize delta process"
14. Schedule and monitor
15. Create another InfoPackage for delta
16. Select the Delta option
17. Ready for delta loads

Q8: WHAT ARE VARIOUS WAYS TO TRACK DELTA RECORDS?
Ans:

RSA7 (delta queue), LBWQ (extraction queue), SMQ1 (outbound queue) and IDocs.
BW Data Modeling












Q1: WHAT ARE START ROUTINES, TRANSFORMATION ROUTINES, END ROUTINES, EXPERT ROUTINE AND RULE GROUP? 

Start Routine

The start routine is run for each data package at the start of the transformation. The start routine has a table in the format of the source structure as input and output parameters. It is used to perform preliminary calculations and store these in a global data structure or in a table. This structure or table can be accessed from other routines. You can modify or delete data in the data package.
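
As a sketch, the body of a start routine that drops irrelevant records and buffers lookup data for later rules; SOURCE_PACKAGE is the generated table parameter, while DOC_STATUS, DOC_NUMBER, ZBW_LOOKUP and GT_LOOKUP are hypothetical names used only for illustration:

" Runs once per data package, before the individual field rules
DELETE source_package WHERE doc_status = 'D'.

IF source_package IS NOT INITIAL.
  " Buffer lookup data once per package; declare GT_LOOKUP in the
  " global part of the routine so that later rules can read it
  SELECT * FROM zbw_lookup
    INTO TABLE gt_lookup
    FOR ALL ENTRIES IN source_package
    WHERE doc_number = source_package-doc_number.
ENDIF.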

Routine for Key Figures or Characteristics

This routine is available as a rule type; you can define the routine as a transformation rule for a key figure or a characteristic. The input and output values depend on the selected field in the transformation rule.

End Routine

An end routine is a routine with a table in the target structure format as input and output parameters. You can use an end routine to postprocess data after transformation on a package-by-package basis. For example, you can delete records that are not to be updated, or perform data checks.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE). You have to use a dummy rule to override this.
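
A minimal end routine body as a sketch; RESULT_PACKAGE is the generated table parameter, and /BIC/ZSTATUS, AMOUNT and /BIC/ZFLAG are hypothetical target fields:

" Runs once per data package, after all field rules
FIELD-SYMBOLS <result> LIKE LINE OF result_package.

" Delete records that must not be updated to the target
DELETE result_package WHERE /bic/zstatus = 'X'.

" Post-process the remaining records, e.g. derive a flag
LOOP AT result_package ASSIGNING <result>.
  IF <result>-amount < 0.
    <result>-/bic/zflag = 'N'.
  ENDIF.
ENDLOOP.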

Expert Routine

This type of routine is only intended for use in special cases. You can use the expert routine if there are not sufficient functions to perform a transformation. The expert routine should be used as an interim solution until the necessary functions are available in the standard routine.
You can use this to program the transformation yourself without using the available rule types. You must implement the message transfer to the monitor yourself.
If you have already created transformation rules, the system deletes them once you have created an expert routine.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE).

Rule Group 

A rule group is a group of transformation rules. It contains one transformation rule for each key field of the target. A transformation can contain multiple rule groups.
Rule groups allow you to combine various rules. This means that for a characteristic, you can create different rules for different key figures.
          



Q2: WHAT ARE DIFFERENT TYPES OF DSO'S AND THEIR USAGE

Ans:

Standard DataStore Object
- Structure: three tables (activation queue, table of active data, change log)
- Data supply: from a data transfer process
- SID generation: yes
- Usage: the standard staging and reporting object; the change log makes it a delta source for further data targets.

Write-Optimized DataStore Object
- Structure: table of active data only
- Data supply: from a data transfer process
- SID generation: no
- Usage: fast inbound/staging layer; data is available immediately after loading, without an activation step.

DataStore Object for Direct Update
- Structure: table of active data only
- Data supply: from APIs
- SID generation: no
- Usage: filled directly via APIs (for example by the Analysis Process Designer) instead of by a data transfer process.

Q3: WHAT IS COMPOUNDING?

    You sometimes need to compound InfoObjects in order to map the data model. Some InfoObjects cannot be defined uniquely without compounding.

For example, if storage location A for plant B is not the same as storage location A for plant C, you can only evaluate the characteristic Storage Location in connection with Plant. In this case, you compound the characteristic Storage Location to Plant so that the characteristic is unique.

Using compounded InfoObjects extensively, particularly if you include a lot of InfoObjects in the compounding, can affect performance. Do not try to model hierarchical links through compounding; use hierarchies instead.

A maximum of 13 characteristics can be compounded for an InfoObject. Note that characteristic values can have a maximum of 60 characters; this limit applies to the concatenated value, that is, the total length of the characteristics in the compounding plus the length of the characteristic itself.




Q4: WHAT IS LINE ITEM DIMENSION AND CARDINALITY?

1. Line item:

This means the dimension contains precisely one characteristic. This means that the system does not create a dimension table. Instead, the SID table of the characteristic takes on the role of dimension table. Removing the dimension table has the following advantages:

  • When loading transaction data, no IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in the case where a degenerated dimension is involved.

  • A table having a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler and, in many cases, the database optimizer can choose better execution plans.

Nevertheless, it also has a disadvantage: a dimension marked as a line item cannot subsequently be given additional characteristics. This is only possible with normal dimensions.

It is recommended that you use DataStore objects, where possible, instead of InfoCubes for line items.

2. High cardinality:

This means that the dimension has a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform; for example, different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the number of fact table entries (for example, a fact table with 10 million rows and a dimension with 2 million or more entries). If you are unsure, do not select the high cardinality flag for the dimension.

Q5: WHAT IS REMODELING?
You want to modify an InfoCube that data has already been loaded into. You use remodeling to change the structure of the object without losing data.

If you want to change an InfoCube that no data has been loaded into yet, you can change it in InfoCube maintenance.

You may want to change an InfoProvider that has already been filled with data for the following reasons:

  • You want to replace an InfoObject in an InfoProvider with another, similar InfoObject: for example, you created an InfoObject yourself but now want to replace it with a BI Content InfoObject.

  • The structure of your company has changed. The changes to your organization make different compounding of InfoObjects necessary.

Q6: HOW IS ERROR HANDLING DONE IN DTP?

At runtime, erroneous data records are written to an error stack if the error handling for the data transfer process is activated. You use the error stack to update the data to the target destination once the error is resolved.

With an error DTP, you can update the data records to the target manually or by means of a process chain. Once the data records have been successfully updated, they are deleted from the error stack. If there are any erroneous data records, they are written to the error stack again in a new error DTP request.

  1. On the Extraction tab page, under Semantic Groups, define the key fields for the error stack.
  2. On the Update tab page, specify how you want the system to respond to data records with errors.
  3. Specify the maximum number of incorrect data records allowed before the system terminates the transfer process.
  4. Make the settings for the temporary storage by choosing Goto -> Settings for DTP Temporary Storage.
  5. Once the data transfer process has been activated, create an error DTP on the Update tab page and include it in a process chain. If errors occur, start it manually to update the corrected data to the target.

Q7: WHAT IS THE DIFFERENCE IN TEMPLATE/REFERENCE?

If you choose a template InfoObject, you copy its properties and use them for the new characteristic; you can then edit the properties as required, and the new InfoObject is independent of the template.

Several InfoObjects can use the same reference InfoObject. InfoObjects of this type automatically share the same technical properties and master data (SID, master data and text tables) with the reference InfoObject.
BW Reporting (BEx)         

Q1: WHAT IS THE USE OF CONSTANT SELECTION?

In the Query Designer, you use selections to determine the data you want to display at the report runtime. You can alter the selections at runtime using navigation and filters. This allows you to further restrict the selections.

The Constant Selection function allows you to mark a selection in the Query Designer as constant. This means that navigation and filtering have no effect on the selection at runtime. This allows you to easily select reference sizes that do not change at runtime.

Example: In the InfoCube, actual values exist for each period, while plan values exist only for the entire year and are posted in period 12. To compare the PLAN and ACTUAL values, you define a PLAN and an ACTUAL column in the query, restrict PLAN to period 12, and mark this selection as a constant selection. This means that you always see the plan values, whichever period you are navigating in.


Q2: WHAT IS THE USE OF EXCEPTION CELLS?

When you define selection criteria and formulas for structural components and there are two structural components of a query, generic cell definitions are created at the intersection of the structural components that determine the values to be presented in the cell.

Cell-specific definitions allow you to define explicit formulas and selection conditions for cells as well as implicit cell definitions. This means that you can override implicitly created cell values. This function allows you to design much more detailed queries.

In addition, you can define cells that have no direct relationship to the structural components. These cells are not displayed and serve as containers for help selections or help formulas.

Q3: HOW TO TAKE DIFFERENCE IN DATES AT REPORT LEVEL?

1. In the new formula window, right-click on Formula Variable and choose New Variable.
2. Enter the variable name and description and select Replacement Path in the "Processing by" field. Click Next.
3. On the Characteristic screen, select the date characteristic that represents the first date to use in the calculation.
4. On the Replacement Path screen, select Key in the "Replace Variable with" field. Leave all the other options as they are (the offset values are set automatically).
5. On the Currencies and Units screen, select Date as the Dimension ID.
Repeat the same steps to create a formula variable for the second date and use both variables in the calculation.
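
As an illustration (the variable names are hypothetical), if ZVAR_DATE2 and ZVAR_DATE1 are the two replacement-path variables, the difference in days is simply the formula

    'ZVAR_DATE2' - 'ZVAR_DATE1'

Because both variables are replaced by the key of a date characteristic and have the dimension Date, the subtraction returns the number of days between the two dates.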


continued.....

Performance Tuning

InfoCube Partitioning
           Partitioning can be used for InfoCubes that contain the time characteristics 0CALMONTH or 0FISCPER; only these two predefined time characteristics support partitioning.
           When data is selected by date, reading only the relevant partitions is much faster than scanning the complete fact table of the InfoCube.

Navigation: if an InfoCube contains one of the time characteristics above (0CALMONTH or 0FISCPER), save and activate the cube.
After activation -> Extras menu -> DB Performance -> Partitioning -> select the
0CALMONTH / 0FISCPER radio button -> Continue -> enter the From value (e.g. 2000.01) and the To value (e.g. 2012.12) -> specify the maximum number of partitions (e.g. 4) -> Continue -> save and activate again.

-> Indexing: deleting the indexes before loading improves load performance,
and recreating them after loading improves reporting performance.

Navigation:
 Right-click on the InfoCube -> "Manage" -> "Performance" tab -> "Create Index" or "Delete Index", depending on whether you are about to report or to load -> "Start" -> refresh until finished -> Close.
DB Statistics
    DB statistics are another performance tuning option: the database collects statistical information about the InfoCube tables (number of records, value distribution of the columns, and so on), which the database optimizer uses to choose better access paths, so that queries run faster.

 Right-click on the InfoCube -> "Manage" -> "Performance" tab -> "Create Database Statistics" -> "Start" -> refresh until finished -> close the screen.

InfoCube Compression
Compression is another performance tuning option: it moves the data of loaded requests from the F fact table into the E fact table, collapsing the request dimension and, with the zero-elimination option, removing records whose key figures are all zero. Because the F fact table shrinks, loading and data retrieval become faster.

 Right-click on the InfoCube -> "Manage" -> "Collapse" tab -> select the "With zero elimination" checkbox -> "Release" -> keep refreshing until the compression has finished.


Note: the indexes created or deleted here are secondary indexes on the InfoCube fact table in the underlying database.












Types of InfoCubes


BI GENERIC DATA EXTRACTION INTERVIEW QUESTIONS & ANSWERS FROM TOP MNCS

1. How many types of extraction are there?
ANS: Two types
- Application Specific
- Cross Application

2. What is meant by Extraction?
ANS: Extraction is the process of reading data from a source system so that it can be loaded into the BW system.

3. What is the T-CODE for creating GENERIC DATASOURCE?
ANS: RSO2

4. Why do you go for GENERIC DATASOURCE?
ANS: There are mainly two reasons for going for generic extraction:
- When no business content DataSource is readily available.
- Even if a business content DataSource is available but is already in use, and we want a similar DataSource of our own, we go for generic extraction.

5. How many ways did you create Generic datasource?
ANS: Three ways.
- Transaction data
- Master data
- Texts

6. How many ways did you extract generic data source?
ANS: Four ways:
- Transparent table or database view
- SAP Query InfoSet
- Function module
- Domain fixed values (for text DataSources only)

7. What is the difference between creating GENERIC DATASOURCE using VIEW and FUNCTION MODULE?
ANS: With a VIEW the extract structure is created automatically, but with a FUNCTION MODULE we must create the extract structure explicitly.

8. When do you go for creating VIEW and FUNCTION MODULE?
ANS: If the tables have a 1:1 relationship, create a view; if the relationship is 1:N, or more complex logic is needed, create a function module.

9. What is the importance of SELECTION, HIDE, INVERSION and FIELD ONLY check boxes?
ANS:
- SELECTION -> Fields flagged for selection appear on the Data Selection tab of the InfoPackage, where they can be used to restrict the load.
- HIDE -> Fields flagged as hidden are not transferred to the BW side.
- INVERSION -> Available for key figures; the sign of the value is inverted (multiplied by -1) in the case of reverse postings/cancellations.
- FIELD ONLY (field known only in exit) -> The field is not filled by the extractor itself; it is populated only in the customer exit, which is why it is flagged for enhanced fields.

10. Did you create Generic Extraction? If yes tell me the scenario?
ANS: 1. FUNCTION MODULE
To extract open deliveries: the DataSource 2LIS_12_VAITM is available as part of the business content and extracts all delivery item level information. To extract only open deliveries, we built a generic DataSource with a function module. We used three conditions to identify an open delivery:
• A delivery is considered open if it has no PGI (post goods issue) document.
• A delivery is considered open if it has PGI documents, but the PGI document has no billing document.
• A delivery is considered open if it has PGI documents and a billing document, but the billing document has not yet been posted.

The database table LIPS contains all delivery item level information. We read the records from LIPS and check for each delivery whether it is open or not; if it is open, we insert the record into E_T_DATA (the extractor's output internal table), as sketched below.
LIPS – delivery item level information
VBFA – sales document flow
VBRP – billing document item data
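
A minimal sketch of such an extraction function module, modelled on the SAP template RSAX_BIW_GET_DATA_SIMPLE; the extract structure ZBW_OPEN_DELIV (assumed to contain VBELN, POSNR, MATNR and LFIMG), the selection logic and the helper ZBW_DELIVERY_IS_OPEN are assumptions for illustration only:

FUNCTION z_bw_get_open_deliveries.
* Interface as generated from the template RSAX_BIW_GET_DATA_SIMPLE:
*   IMPORTING  i_requnr, i_dsource, i_maxsize, i_initflag
*   TABLES     i_t_select, i_t_fields, e_t_data STRUCTURE zbw_open_deliv
*   EXCEPTIONS no_more_data, error_passed_to_mess_handler

  STATICS s_cursor TYPE cursor.
  DATA lv_open TYPE c LENGTH 1.

  IF i_initflag = 'X'.
    " Initialization call: the template stores I_T_SELECT / I_T_FIELDS here
  ELSE.
    IF s_cursor IS INITIAL.
      " First data call: open a database cursor on the delivery items
      OPEN CURSOR WITH HOLD s_cursor FOR
        SELECT vbeln posnr matnr lfimg FROM lips.
    ENDIF.

    " Fetch the next data package, at most I_MAXSIZE rows
    FETCH NEXT CURSOR s_cursor
      APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
      PACKAGE SIZE i_maxsize.
    IF sy-subrc <> 0.
      CLOSE CURSOR s_cursor.
      RAISE no_more_data.   " all packages have been delivered
    ENDIF.

    " Keep only open deliveries: ZBW_DELIVERY_IS_OPEN is a hypothetical
    " helper implementing the three PGI / billing checks (VBFA, VBRP) above
    LOOP AT e_t_data.
      CALL FUNCTION 'ZBW_DELIVERY_IS_OPEN'
        EXPORTING
          i_vbeln = e_t_data-vbeln
          i_posnr = e_t_data-posnr
        IMPORTING
          e_open  = lv_open.
      IF lv_open IS INITIAL.
        DELETE e_t_data.
      ENDIF.
    ENDLOOP.
  ENDIF.
ENDFUNCTION.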
2. VIEW
In SAP R/3 3.1v there is no business content data source to extract FI_AR information so we went about building the generic data source using the view built on two tables BSID and BSAD.
BSID – Accounting: Secondary index for customers
BSAD - Accounting: Secondary index for customers (cleared items)
Some common fields in these two tables are company code, customer number, etc.

11. Why do you go for ALE DELTA? And for which type of data sources we are going for ALE DELTA?
ANS: ALE delta is used for master data DataSources: when master data is changed in R/3, those changes are captured and transferred to the BW side as deltas.

12. How do you find CHANGE DOC OBJECT field for master data?
ANS: Go to SE11 (Data Dictionary) and check table TCDOB, which lists the change document objects together with the tables they track.

13. In which T-CODE did you change the data for MATERIAL MASTER DATA?
ANS: MM01 – creation of Material master data
MM02 – Change
MM03 – Display

14. What is the use of GENERIC DELTA? And when do you go for GENERIC DELTA?
ANS: Generic delta is used to delta-enable generic DataSources, typically for transaction data: changes in the transaction data are transferred to BW as deltas instead of full loads. For example, sales documents are maintained with:
VA01 – create sales order
VA02 – change
VA03 – display

15. What are the different ways of setting up the delta in generic extraction for transaction data?
ANS: There are mainly three options available for generic delta:
• Calendar day – if we set up the delta on the basis of calendar day, we can run the delta only once per day, and only at the end of the day, to minimize the risk of missing delta records.
• Numeric pointer – suitable only when we extract from a table that supports only the creation of new records, not changes to existing records.
• Time stamp – using a time stamp we can run the delta several times per day, but we should use a safety lower limit / upper limit of at least 5 minutes.

16. When you set up delta for a generic DataSource, where does the delta data come from?
ANS: RSA7 (DELTA QUEUE)

17. When setting up the delta in a generic DataSource, what are the different delta types (images) we can set up?
ANS: We can set up two types:
- New status for changed records
- Additive delta


18. When setting up the delta for a generic DataSource, if I select the update method "New status for changed records", can I load the data directly into the cube?
ANS: No. If the delta type is "New status for changed records", it is mandatory to load the data into an ODS/DSO first and then from the ODS into the cube.

19. What is the difference between BUSINESS CONTENT DATASOURCES and GENERIC DATA SOURCE?
ANS: A business content DataSource is delivered ready-made in the delivered (D) version. To use it, we install it with transaction RSA5 (select the DataSource and choose "Transfer DataSources"), which creates a copy in the active (A) version. With generic extraction we create our own DataSources. From a performance point of view, business content DataSources are preferred.

20. What is DATA SOURCE?
ANS: A DataSource defines the extract structure (ES) and transfer structure (TS) in the source system, and the transfer structure in the BW system.

21. When does the Transfer structure at SAP R/3 side get generated?
ANS: When you activate the transfer rules in BW, the transfer structure is generated on the SAP R/3 side.

22. Steps to generate GENERIC EXTRACTION METHOD?
ANS: Go to RSO2 -> select the type of DataSource (transaction data, master data attributes or texts) and give it a name -> click Create -> select the application component -> enter the extraction source (table/view, function module, InfoSet, etc.) -> enter the short, medium and long descriptions -> Save -> select/hide fields and Save again -> on the BW side, replicate the DataSource under the corresponding application component -> assign and activate the transfer rules -> create the InfoCube and update rules -> create an InfoPackage on the source system and schedule it.