Thursday, November 29, 2012

Difference between Delta and Full Load


Explain delta loads and where exactly we use them.
A data load into a BI ODS/master data/cube can be either FULL or DELTA.

Full load is when you load data into BI for the first time, i.e. you are seeding the destination BI object with initial data. A delta data load means that you are either loading changes to already loaded data or adding new transactions.

Usually delta loads are done when the process has to sync any new or changed data from the OLTP system, i.e. SAP ECC or R/3, to SAP BI (DSS/BI). DSS stands for Decision Support Systems, i.e. a system used for deriving Business Intelligence.

Example:
Let's say you are trying to build a report to empower management to figure out which customers have bought the most from your company.

On the BI side, you create the necessary master data elements. You use the master data elements to create an ODS and a cube. The ODS and the cube will house the daily transactions that get added to the OLTP systems via a variety of applications.
Now you identify the datasource in ECC that will bring the necessary transactions to BI. You replicate the datasource in BI, map the datasource to the ODS, and map the ODS to the cube. You then create the Transformation and DTP, running the DTP as a full load for the first time.
At this point, your ODS and cube have the data for the last x years, where x stands for the life of your company. You also need to capture the daily transactions from here onwards. What you do now is change the DTP to allow only delta records.
Now you schedule the execution of the datasource and the loading of the data in a process chain. At run time, the process chain will get the new records from OLTP (the datasource is already replicated, assuming its structure has not changed) and import those changes to the ODS and hence to the cube.
Any such load that brings in new transactions or changes to earlier transactions carries delta records, and hence the load is called a delta load.

Tuesday, November 20, 2012

200 BW Questions and Answers for INTERVIEWS




-->> For some questions no answers are maintained; you can also answer questions as a "Comment" (end of post) by referring to the question number.... don't forget to drop your comments .. :-)


1) Please describe your experience with BEx (Business Explorer)
A) Rate your level of experience with BEx and the rationale for your self-rating
B) How many queries have you developed?
C) How many reports have you written?
D) How many workbooks have you developed?
E) Experience with jump targets (OLTP, use jump target)
F) Describe experience with BW-compatible ETL tools (e.g. Ascential)

2) Describe your experience with 3rd party report tools (Crystal Decisions, Business Objects a plus)



3) Describe your experience with the design and implementation of standard & custom InfoCubes.
1. How many InfoCubes have you implemented from start to end by yourself (not with a team)?
2. Of these cubes, how many characteristics (including attributes) did the largest one have?
3. How much customization was done on the InfoCubes you have implemented?

4) Describe your experience with requirements definition/gathering.

5) What experience have you had creating Functional and Technical specifications?

6) Describe any testing experience you have:

7) Describe your experience with BW extractors
1. How many standard BW extractors have you implemented?
2. How many custom BW extractors have you implemented?

8) Describe how you have used Excel as a complement to BEx
A) Describe your level of expertise and the rationale for your self-rating (experience with macros, pivot tables and formatting)

9) Describe experience with ABAP

10) Describe any hands on experience with ASAP Methodology.

11) Identify SAP functional areas (SEM, CRM, etc.) you have experience in. Describe that experience.

12) What is partitioning and what are the benefits of partitioning in an InfoCube?
A) Partitioning is the method of dividing a table (either column-wise or row-wise) based on the fields available, which enables quick reference to the intended values of the fields in the table. By partitioning an infocube, reporting performance is enhanced, because it is easier to search in smaller tables. Table maintenance also becomes easier.

13) What does Rollup do?
A) Rollup creates aggregates in an infocube whenever new data is loaded.

14) What are the inputs for an infoset?
A) The inputs for an infoset are ODS objects and InfoObjects (with master data or text).

15) What internally happens when BW objects like Info Object, Info Cube or ODS are created and activated?
A) When an InfoObject, InfoCube or ODS object is created, BW maintains a saved version of that object but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.

16) What is the maximum number of key fields that you can have in an ODS object?
A) 16.

17) What is the specific advantage of LO extraction over LIS extraction?
A) The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome; in LO, only one delta queue is used for delta management.

18) What is the importance of 0REQUID?
A) It is the InfoObject for the request ID. 0REQUID enables BW to distinguish between different data records.

19) Can you add programs in the scheduler?
A) Yes, through event handling.
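For illustration, a minimal ABAP sketch of raising such a background event from a program; the event name Z_BW_LOAD_TRIGGER is a made-up example and would have to exist in the system (SM62), with an InfoPackage scheduled to start "after event":

REPORT z_raise_bw_event.

* Raise a background event; an InfoPackage scheduled on this event
* starts once the event is raised.
CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid                = 'Z_BW_LOAD_TRIGGER'   " hypothetical event
  EXCEPTIONS
    bad_eventid            = 1
    eventid_does_not_exist = 2
    eventid_missing        = 3
    raise_failed           = 4
    OTHERS                 = 5.
IF sy-subrc <> 0.
  WRITE: / 'Event could not be raised, sy-subrc =', sy-subrc.
ENDIF.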

20) What is the importance of the table ROIDOCPRMS?
A) It holds the IDoc parameters for the source system. This table contains the details of the data transfer, like the source system of the data, the data packet size, the maximum number of lines in a data packet, etc. The data packet size can be changed through the control parameters option in SBIW, i.e. the contents of this table can be changed.

21) What is the importance of 'start routine' in update rules?
A) A Start routine is a user exit that can be executed before the update rule starts to allow more complex computations for a key figure or a characteristic. The start routine has no return value. Its purpose is to execute preliminary calculations and to store them in a global data structure. You can access this structure or table in the other routines.
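For illustration, a minimal sketch of such a start routine in 3.x update rules. The FORM frame is generated by the system; the communication structure name /BIC/CS8ZSALES, the global table g_t_plants and the field names (DOC_TYPE, PLANT) are hypothetical, while /BI0/PPLANT is the attribute table of 0PLANT:

* Global area of the routine (between the generated $$ markers):
DATA: g_t_plants TYPE STANDARD TABLE OF /bi0/pplant.

FORM startroutine
  TABLES   data_package STRUCTURE /bic/cs8zsales   " generated name (example)
  CHANGING abort        LIKE sy-subrc.

  " Preliminary calculation: drop records that must not be updated
  DELETE data_package WHERE doc_type = 'TEST'.     " hypothetical filter

  " Buffer master data once per package for use in the other routines
  IF data_package[] IS NOT INITIAL.
    SELECT * FROM /bi0/pplant
             INTO TABLE g_t_plants
             FOR ALL ENTRIES IN data_package
             WHERE plant   = data_package-plant    " hypothetical field
               AND objvers = 'A'.
  ENDIF.

  abort = 0.   " 0 = continue processing; anything else aborts the package
ENDFORM.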

22) When is IDOC data transfer used?
A) IDocs are used for communication between logical systems like SAP R/3, R/2 and non-SAP systems using ALE, and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems, based on an EDI interface. IDocs support a limited record size of 1000 bytes, so IDoc transfer is not used when loading data into the PSA, since the data there is more detailed; it is used when the record size is less than 1000 bytes.

23) What is partitioning characteristic in CO-PA used for?
A) For easier parallel search and load of data.

24) What is the advantage of BW reporting on CO-PA data compared with directly running the queries on CO-PA?
A) BW has a performance advantage over reporting in R/3. For a huge amount of data, the R/3 reporting tool is at a serious disadvantage because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.

25) What is the function of BW statistics cube?
A) BW statistics cube contains the data related to the reporting performance and the data loads of all the InfoCubes in the BW system.

26) When an ODS is in 'overwrite' mode, does uploading the same data again and again create new entries in the change log each time data is uploaded?
A) No.

27) What is the function of 'selective deletion' tab in the manage->contents of an infocube?
A) It allows us to select a particular value of a particular field and delete its contents.

28) When we collapse an infocube, is the consolidated data stored in the same infocube or is it stored in the new infocube?
A) Data is stored in the same cube.

29) What is the effect of aggregation on the performance? Are there any negative effects on the performance?
A) Aggregation improves the performance in reporting.

30) What happens when you load transaction data without loading master data?
A) The transaction data gets loaded and the master data fields remain blank.

31) When given a choice between a single infocube and multiple InfoCubes with a multiprovider, what factors does one need to consider before making a decision?
A) One would have to see if the InfoCubes are used individually. If these cubes are often used individually, then it is better to go for a MultiProvider with many cubes, since reporting would be faster for an individual cube query than for one big cube with a lot of data.

32) How many hierarchy levels can be created for a characteristic info object?
A) Maximum of 98 levels.

33) What is open hub service?
A) The open hub service enables you to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. With this, you can ensure controlled distribution using several systems. The central object for the export of data is the Infospoke. Using this, you can define the object from which the data comes and into which target it is transferred. Through the open hub service, SAP BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.

34) What is the function of 'reconstruction' tab in an infocube?
A) It reconstructs the deleted requests from the infocube. If a request has been deleted and later someone wants the data records of that request to be added to the infocube, one can use the reconstruction tab to add those records. It goes to the PSA and brings the data to the infocube.

35) What are secondary indexes with respect to InfoCubes?
A) Index created in addition to the primary index of the infocube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.

36) What is DB connect and where is it used?
A) DB Connect is a piece of program for connecting to databases. It is used to connect third-party database systems to BW so that data can be extracted from their tables or views into BW.

37) Can we extract hierarchies from R/3 for CO-PA?
A) No, we cannot; there are no hierarchies in CO-PA.

38) Explain ‘field name for partitioning’ in CO-PA
A) The CO-PA partitioning is used to decrease the package size (e.g. by company code).

39) What is V3 update method ?
A) It is an update method in the R/3 source system that schedules batch jobs to collectively transfer the data recorded for the extract structures to the DataSource (delta queue).

40) Differences between serialized and non-serialized V3 updates

41) What is the common method of finding the tables used in any R/3 extraction
A) By using the transaction LISTSCHEMA we can navigate the tables.

42) Differences between table view and infoset query
A) An InfoSet Query is a query using flat tables.

43) How to load data from one InfoCube to another InfoCube ?
A) Through data marts, data can be loaded from one InfoCube to another InfoCube.

44) What is the significance of setup tables in LO extractions ?
A) It adds the selection criteria to the LO extraction.

45) Difference between extract structure and datasource
A) In the DataSource we define the data coming from the source system, whereas the extract structure contains the layout of the DataSource, on which extract rules and transfer rules can be defined.
B) The extract structure is a record layout of InfoObjects.
C) The extract structure is created on the source system.

46) What happens internally when Delta is Initialized

47) What is referential integrity mechanism ?
A) Referential integrity is the property that guarantees that values from one column depend on values from another column. This property is enforced through integrity constraints.

48) What is activation of extract structure in LO ?

49) What is the difference between Info IDoc and data IDoc ?

50) What is D-Management in LO ?
A) It is a method used in delta update methods, which is based on the change log in LO.

51) What is entity relationship model in data modeling ?
A) An ERD (Entity Relationship Diagram) that can be used to generate a physical database.
B) It is a high-level data model.
C) It is a schematic that shows all the entities within the scope of integration and the direct relationships between the entities.

52) What is the difference between direct delta and queued delta updates in LO ?

53) What is non-cumulative infocube ?

54) What kind of tools are available to monitor the overall Query Performance?

55) How can we have a delta update for generic data source ?

56) What are the methods available to debug the load failures ?

57) What is the data mining concept ?
A) The process of finding hidden patterns and relationships in the data.
B) With typical data analysis requirements fulfilled by data warehouses, business users have an idea of what information they want to see.
C) Some opportunities embody data discovery requirements, where the business user wants to correlate sets of data to determine anomalies or patterns in the data.

58) What is scoring ?

59) Usage of Geo-coordinates ?
A) The geo-relevant data can be displayed and evaluated on a map with the help of the BEx Map.

60) What are the different query areas related to InfoSets ?
A) Jump queries and ODS queries are the query areas related to InfoSets.

61) How does the time dependency work for BW objects ?
A) Time-dependent attributes have values that are valid for a specific range of dates (i.e. a validity period).

62) What is I_ISOURCE?
A) The name of the InfoSource.

63) What is I_T_FIELDS?
A) The list of the transfer structure fields. Only these fields are actually filled in the data table and can be sensibly addressed in the program.

64) What is C_T_DATA?
A) The table with the data received from the API, in the format of the source structure entered in table ROIS (field ROIS-STRUCTURE).

65) What is I_UPDMODE?
A) The transfer mode as requested in the scheduler of the Business Information Warehouse. Not normally required.

66) What is I_T_SELECT?
A) The table with the selection criteria stored in the scheduler of the SAP Business Information Warehouse. Not normally required.
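The parameters in questions 62 to 66 belong to the interface of the extraction user exit (enhancement RSAP0001, transaction-data exit EXIT_SAPLRSAP_001). The interface below is sketched from memory, so the exact types may differ by release:

FUNCTION exit_saplrsap_001.
*"  IMPORTING
*"     VALUE(I_DATASOURCE) LIKE RSAP-DATASOURCE OPTIONAL
*"     VALUE(I_ISOURCE)    LIKE RSAP-ISOURCE    OPTIONAL  " InfoSource name (Q62)
*"     VALUE(I_UPDMODE)    LIKE RSAP-UPDMODE    OPTIONAL  " transfer mode from the scheduler (Q65)
*"  TABLES
*"      I_T_SELECT TYPE SBIWA_T_SELECT OPTIONAL           " scheduler selection criteria (Q66)
*"      I_T_FIELDS TYPE SBIWA_T_FIELDS OPTIONAL           " transfer structure fields (Q63)
*"      C_T_DATA OPTIONAL                                 " extracted data package (Q64)
*"      C_T_MESSAGES STRUCTURE BALMI OPTIONAL
  " Customer logic goes into the include below (see the enhancement
  " question further down)
  INCLUDE zxrsau01.
ENDFUNCTION.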

67) What is Serialized V3 Update?
A) This is the normal update method. Here, document data is collected in the order it was created and transferred into BW as a batch job. The transfer sequence, however, is not guaranteed to be the same as the order in which the data was created in all scenarios.

68) What is Direct Delta?
A) In this method, extraction data is transferred directly from document postings into the BW delta queue. The transfer sequence is the same as the order in which the data was created.

69) What is Queued Delta?
A) In this method, extraction data from document postings is collected in an extraction queue, from which a periodic collective run is used to transfer the data into the BW delta queue. The transfer sequence is the same as the order in which the data was created.

70) What is Unserialized V3 Update?
A) This method is almost identical to the serialized V3 update. The only difference is that the order of document data in the BW delta queue does not have to be the same as the order in which it was posted. This method is only recommended when the order in which the data is transferred is not important, as a consequence of the design of the data targets in BW.

71) What are the different update modes?
A) Serialized V3 Update
B) Direct Delta
C) Queued Delta
D) Unserialized V3 Update

72) What are the different ways of data transfer?
A) Complete update: All the data from the information structure is transferred according to the selection criteria defined in the scheduler in SAP BW.
B) Delta update: Only the data that has been changed or is new since the last update is transferred. To use this option, you must activate the delta update.

73) What is the major importance of the usage of an ODS object?
A) An ODS is mainly used as a staging area.

74) What is the benefit of using BW reporting over SAP reporting?
A) Performance
B) Data analysis
C) Better front-end reporting
D) Ability to pull data from SAP and non-SAP sources

75) Differences between star and extended star schema ?
A) Star schema: only characteristics of the dimension tables can be used to access facts. No structured drill-downs can be created. Support for many languages is difficult.
B) Extended star schema: master data tables and their associated fields (attributes), external hierarchy tables for structured access to data, and text tables with extensive multilingual descriptions.

76) What are the new features of SAP BW 30b?

77) What are the new features of the R3 Plugin PI2002_1.

78) What are the major errors in BW and R/3 pertaining to BW?
A) Errors in loading data (ODS loading, cube loading, delta loading etc.)
B) Errors in activating BW or other objects
C) Issues in delta loading

79) When are tables created in BW?
A) When the objects are activated, the tables are created. The location depends on the Basis installation.

80) What is a start routine and a return table, and how do they synchronize with each other?
A) The start routine is used in update rules, and the return table is used to return values following the execution of the routine.

81) What is the difference between a start routine and an update routine; when, how and why are they called?
A) A start routine can be used to access the InfoPackage; update routines can't.

82) What are the different Non - R/3 systems that BW supports?

83) In a general project, how many InfoCubes, InfoObjects, InfoSources and MultiProviders can you expect?
A) It depends on the size of the project and, in turn, its business goals; it differs from project to project.

84) What does a M table signify?
A) Master table.

85) What does a F table signify?
A) Fact table

86) What is data warehousing?
A) Data warehousing is a concept in which data is stored centrally and analysis is performed on it.

87) What is process chain and how you used it?
A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
B) In one of our scenarios we wanted to upload the wholesale price InfoObject, which holds the wholesale price for all materials, and then load transaction data. While loading the transaction data, a lookup in the update rule on this InfoObject's master data table populated the wholesale price. This dependency of first uploading master data and then uploading transaction data was handled through the process chain.

88) What are Remotecubes and how you accessed and used it in your project?
A) A RemoteCube is an InfoCube whose transaction data is not managed in the Business Information Warehouse but externally. Only the structure of the RemoteCube is defined in BW. The data is read for reporting using a BAPI from another system.
B) Using a RemoteCube, you can carry out reporting using data in external systems without having to physically store transaction data in BW. You can, for example, include an external system from market data providers using a RemoteCube.

89) Hope you have worked on enhancements; which user exit did you work on, can you explain?
A) Extended the DataSources 0MATERIAL_ATTR, 0PLANT_ATTR and 0MAT_PLANT_ATTR for master data loads from R/3 to BW. Edited user exit EXIT_SAPLRSAP_002 to populate master data for the extended fields, and EXIT_SAPLRSAP_001 for transaction data extracted from R/3 to BW.

90) What is the t-code for generic extractor?
A) RSO2

91) What is infoset query?
A) An InfoSet is a special kind of InfoProvider. It is used for reporting by joining ODS objects and InfoObjects. InfoSets have been used in the Business Information Warehouse for InfoObjects (master data), ODS objects, and joins of these objects. The InfoSet Query can be used to carry out tabular (flat) reporting on these InfoSets.

92) What is the purpose of aggregates?
A) Aggregates are like indexes to database tables. They are rolled-up data on a few characteristics on which reports run frequently. They are created to improve reporting performance. If a report is used very extensively and its performance is slow, we can create an aggregate on the characteristics used in the report, so that when the report runs, the OLAP processor selects data from the aggregate instead of the cube.

93) How did you do data modeling in your project? Explain
A) We collected requirements from the users and created an HLD (high-level design document), and we analyzed it to find the sources for the data. Then data models were drawn up indicating data flow and lookups. While designing the data model, consideration was given to reusing existing objects (like ODSs and cubes), not storing redundant data, the volume of data, and batch dependencies.

94) As you said you have worked on cubes and ODSs, which one is better suited for reporting? Explain, and what are the drawbacks and benefits of each one.
A) Cubes are better suited for reporting; queries run faster on them. With an ODS we can have only simple reports; if we query based on non-key fields (data fields) of an ODS, the report runs slower. However, in an ODS we can overwrite non-key fields, whereas we cannot overwrite in a cube; this is one of the disadvantages of the cube.

95) What are the different cubes you worked on in FI?
A) Please look at Business content cubes and BW documentation on them to answer this question.

96) What is delta upload? What is the use of delta upload? Data that has been changed or added is extractor or full data is extractor?
A) When transaction data is pulled from the R/3 system, instead of pulling all the data daily (a full load), we pull only the changed or newly added records, so the load on the system is much smaller. So wherever possible, we go for a delta load rather than a full load.

97) What are hierarchies? Explain how you used in your project?
A) Hierarchies organize data in a structured way. For example, a BOM (bill of material) can be configured as a hierarchy.

98) What is t-code for CO-PA?
A) KEB0

99) What is SID? what is the impact in using SID?
A) In BW, information is stored as SIDs. SIDs are auto-generated numbers assigned to each characteristic value when it is uploaded. Searching on numeric values is always faster than on alphanumeric values, and hence SIDs are assigned to each characteristic value.

100) What is table partitioning? What are return tables?
A) If we have 0CALMONTH or 0FISCPER as a time characteristic, we can partition the fact table physically. Table partitioning has to be supported by the database: Oracle, Informix and IBM DB2/390 support table partitioning, whereas SAP DB, Microsoft SQL Server and IBM DB2/400 do not. Table partitioning helps reports run faster, as data is read only from the relevant partitions.
B) In an update rule routine, if we want to return multiple records instead of a single value, we can use the return table.
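A minimal sketch of a return-table routine in 3.x update rules: one incoming record produces twelve monthly rows. The FORM frame and structure names are generated by the system, and the names used here (/BIC/VZSALESCUBET, /BIC/CSZSALES_IS, CALMONTH2, AMOUNT, ANNUAL_AMT) are placeholders:

FORM compute_amount
  TABLES   result_table   STRUCTURE /bic/vzsalescubet  " generated (placeholder)
  USING    comm_structure LIKE /bic/cszsales_is        " generated (placeholder)
  CHANGING returncode     LIKE sy-subrc
           abort          LIKE sy-subrc.

  DATA: ls_row LIKE result_table.   " work area = one line of the return table

  " Spread a yearly amount evenly over 12 monthly records
  DO 12 TIMES.
    CLEAR ls_row.
    ls_row-calmonth2 = sy-index.                       " hypothetical month field
    ls_row-amount    = comm_structure-annual_amt / 12. " hypothetical key figure
    APPEND ls_row TO result_table.
  ENDDO.

  returncode = 0.   " 0 = records are updated
  abort      = 0.
ENDFORM.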

101) What is the t-code for Query Monitor?
A) RSRT

102) Apart from R/3, which legacy DB did you use for extraction?
A) We had a legacy system called CAM. The CAM system had open-order information, which was a full load every day to the OM schedule line ODS. The CAM system was connected to R/3 through DB Connect.

103) What are the three ODS object tables? Explain.
A) An ODS object has three tables, called new data, active data and change log. As soon as new data comes into the ODS, it is stored in the new data table. When it is activated, the new data is written to the active table, and the change is written to the change log.

104) Can you explain about Start routines how you used in your project give me an example?
A) A start routine is used for mass processing of records. In the start routine, all the records of the data package are available for processing, so we can process all these records together. In one scenario, we wanted to apply size percentages to forecast data. For example, if material M1 is forecast to sell, say, 100 units in May, then after applying the size split (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted to have 4 records for every single record coming in the InfoPackage. This was achieved in the start routine, as sketched below.
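A minimal ABAP sketch of that size-split start routine (3.x update rules); all structure and field names, such as /BIC/CSZFORECAST, /BIC/ZSIZE and QUANTITY, are hypothetical:

FORM startroutine
  TABLES   data_package STRUCTURE /bic/cszforecast  " generated name (example)
  CHANGING abort        LIKE sy-subrc.

  TYPES: BEGIN OF ty_split,
           size TYPE c LENGTH 2,
           pct  TYPE p DECIMALS 2,
         END OF ty_split.

  DATA: lt_split TYPE STANDARD TABLE OF ty_split,
        ls_split TYPE ty_split,
        ls_rec   LIKE data_package,
        ls_new   LIKE data_package,
        lt_out   LIKE TABLE OF data_package.

  " Size distribution: Small 20%, Medium 40%, Large 20%, Extra Large 20%
  ls_split-size = 'S'.  ls_split-pct = '0.20'. APPEND ls_split TO lt_split.
  ls_split-size = 'M'.  ls_split-pct = '0.40'. APPEND ls_split TO lt_split.
  ls_split-size = 'L'.  ls_split-pct = '0.20'. APPEND ls_split TO lt_split.
  ls_split-size = 'XL'. ls_split-pct = '0.20'. APPEND ls_split TO lt_split.

  " Explode every forecast record into one record per size
  LOOP AT data_package INTO ls_rec.
    LOOP AT lt_split INTO ls_split.
      ls_new = ls_rec.
      ls_new-/bic/zsize = ls_split-size.              " hypothetical characteristic
      ls_new-quantity   = ls_rec-quantity * ls_split-pct.
      APPEND ls_new TO lt_out.
    ENDLOOP.
  ENDLOOP.

  data_package[] = lt_out.   " replace the package with the exploded records
  abort = 0.
ENDFORM.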

105) In update rules for an infocube we can specify separate update rules for characteristics of each of the key figures. In which situations is the above used?
A) To be discussed(TBD).

106) Other than BW, what are the other ETL tools used for SAP R/3 in industry?
A) Informatica, ACTA, COGNOS and Business Objects are other ETL tools.

107) Does any other ERP software use BW for data warehousing?
A) No.

108) What is the importance of hierarchies?
A) One can display the elements of characteristics in hierarchy form and evaluate query data for the individual hierarchy levels in the Business Explorer (in Web applications or in the BEx Analyzer).

109) Where is 0RECORDMODE infoobject used?
A) It is used in delta management. An ODS uses the 0RECORDMODE InfoObject for delta loads. 0RECORDMODE has values such as X, D and R: in a delta load, X marks rows to be skipped, while D and R indicate deletion and removal (reversal) of rows.

110) What is operating concern in CO-PA?
A) An organizational structure that combines controlling areas together in the same way as controlling areas group companies together.

111) Are all the characteristics present in an ODS key fields?
A) No. An ODS object contains key fields (for example, document number/item) and data fields that can also contain character fields (for example, order status, customer).

112) What is the use BAPI, ALE?
A) BAPI and ALE are sets of programs which extract data from DataSources. BW connects to SAP systems (R/3 or BW) and flat files via ALE, and to non-SAP systems via BAPI.

113) What is the importance of ‘Compounding’ of infoobjects?
A) A compound attribute differentiates a characteristic to make the characteristic uniquely identifiable. For example, similar products may be manufactured in several plants (plant A: soap, paste, lotion; plant B: soap, paste, lotion). In this case, plant A and plant B should be made unique, so the characteristics are compounded to make them unique.

114) Are there any limitations for BEx analyzer?
A) TBD

115) How does BEx analyzer connect to BW?
A) BEx Analyzer is connected to the OLAP processor; OLE DB connectivity is what connects BEx Analyzer with BW.

116) What is field partitioning in CO-PA?
A) It internally allocates space in the database. If the needed data resides in one or a few partitions, then only those partitions will be selected and examined by the SQL statement, thereby significantly reducing I/O volume.

117) Where to check the log for warning messages appearing in activation of transfer rules?
A) If transfer rules are not defined for Info objects, then traffic lights will not be green.

118) What are the advantages of reporting on an infocube to that of reporting on an ODS?
A) Query performance will be better with an InfoCube. An InfoCube has a multidimensional model, whereas an ODS is a flat table. Aggregates and MultiProviders can be built on an InfoCube, which further enhances query performance; aggregates and MultiProviders cannot be built on an ODS.

119) How does a navigational attribute differ from other attributes in terms of linking it with the infocube?
A) TBD

120) How does delta update mechanism work in ODS?
A) An ODS has three database tables: the new data table, the active table and the change log table. Initially, new data is loaded and its traces are kept in the change log table. When another set of data comes in, it is compared with the change log, the delta data is transferred into the active table, and the change is noted in the change log. Every time, the tables are compared and the data is written to the targets.

121) What is time dependent master data?
A) Time-dependent master data is master data that keeps changing according to time. For example, assume a scenario where sales person A works in the East zone until Jan 30th 2004 and then moves to the North zone from Jan 31st 2004. The master data with regard to sales person A should be changed to a different zone based on the validity period.

122) Can we load transaction data into infocube without loading the master data first?
A) Yes.

123) What is difference between ‘saving’ and ‘activating’?
A) In BW, saving actually saves the defined structure so that it can be retrieved whenever required.
B) Activating saves and also generates the required tables and structures.

124) Why do we use only one client in BW?

125) What is time dependent master data?
A) Time-dependent master data is master data that keeps changing according to time. For example, assume a scenario where sales person A works in the East zone until Jan 30th 2004 and then moves to the North zone from Jan 31st 2004. The master data with regard to sales person A should be changed to a different zone based on the validity period.

126) What are the advantages of aggregates?
A) Aggregates make it possible to access InfoCube data quickly in Reporting. Aggregates serve, in a similar way to database indexes, to improve performance.

127) In which situations we cannot use aggregates?
A) If the data provider is an ODS, aggregates cannot be used.

128) Aggregates are recommended in the following cases,
A) The execution and navigation of query data leads to delays with a group of queries.
B) You want to speed up the execution and navigation of a specific query.
C) You often use attributes in queries.
D) You want to speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels.

129) What does delta initialization do?
A) It makes BW expect the data from the source after the full update; it initializes the delta update mechanism for that DataSource.

130) What is difference between delta and pseudo delta?
A) Some data targets and modules have a delta update feature that can be used for delta updates of data; for example, ODS objects and CO-PA are delta capable, so data can be received in stages. After the first accumulation of data, BW expects the data for these targets to arrive as deltas. When another data target does not have this feature (delta update), it can be made delta capable by using an ODS as an intermediate data target; this is a pseudo delta.

131) What are the Third Normal Form and its comparison with Star Schema?
A) Third normal form is a normalized form of storing data in a relational database. It eliminates functional dependencies on non-key fields by putting them in separate tables. At this stage, all non-key fields are dependent on the key, the whole key and nothing but the key.
B) A star schema is a denormalized form of storing data, which paves the way for storing data in a multi-dimensional model.

132) What is ASAP methodology?
A) ASAP is a standard methodology for efficiently implementing and continually optimizing the SAP software. ASAP supports the implementation of the R/3 System and of mySAP.com Components, and can also be used for upgrade projects. It provides a wide range of tools that helps in all stages of implementation project - from project planning to the continual improvement of the SAP System. The two key tools in ASAP are: The Implementation Assistant, which contains the ASAP Roadmap, and provides a structured framework for your implementation, optimization or upgrade project. The Question & Answer database (Q&Adb), which allows you to set your project scope and generate your Business Blueprint using the SAP Reference Structure as a basis.

133) Significance of infoset.
A) An InfoSet describes data sources that are defined, as a rule, as joins of ODS objects or InfoObjects. An InfoSet is a semantic view of data sources and is not a physical data target in itself. One can define reports in the BEx Query Designer using activated InfoSets.

134) Differences between multicube and remote cube.
A) A MultiCube is a type of InfoProvider that combines data from a number of InfoProviders and makes it available as a whole to reporting.
B) A RemoteCube is an InfoCube whose transaction data is not managed in the Business Information Warehouse but externally. Only the structure of the RemoteCube is defined in BW. The data is read for reporting using a BAPI from another system.

135) Life period of data in the "Change Log" of an ODS.
A) The data of the change log can be scheduled to be deleted periodically. Usually the data is removed after it has been updated into the data targets.

136) Drilldown method of Infocube to ODS.
A) A MultiProvider can be designed to include the ODS and the InfoCube in question. This gives a chance to drill down from the InfoCube to the ODS.

137) What are "inbound ODS" and "consistent ODS"?
A) In an inbound ODS object, the data is saved in the same form as it was delivered from the source system. This ODS type can be used to report the original data as it comes from the source system.
B) In a consistent ODS object, data is stored in granular form and consolidated. This consolidated data on a document level creates the basis for further processing in BW.

138) Life period of data in PSA.
A) Data in the PSA is deleted when one feels that there is no need for any future use of it. There is a trade-off between wastage of space and usage as a backup for data in the source system.

139) How to load data from one infocube to another ?
A) A DataSource is created from the InfoCube that is supposed to feed the data. This can be done by right-clicking on the InfoCube and selecting "Export DataSource". Then a suitable InfoSource can be created for this DataSource, and the intended target InfoCube can be fed.

140) What is "activation" of objects ?
A) Activation of objects enables them to be executed, in other words, used elsewhere for different purposes. Unless an object is activated, it cannot be used.

141) Are key figures navigable ?
A) No, key figures are not navigable.

142) What is transactional ODS?
A) A transactional ODS object differs from a standard ODS object in the way it prepares data. In a standard ODS object, data is stored in different versions (active, delta, modified), whereas a transactional ODS object contains the data in a single version. Therefore, data is stored in precisely the same form in which it was written to the transactional ODS object by the application.

143) Are SIDs static or dynamic?
A) SIDs are static.

144) Is data in Infocube editable?
A) No.

145) What are data-marts?
A) A data mart is also known as a local data warehouse. It is an implementation of a data warehouse with a restricted scope of content, with support for analytical processing and serving a single department, part of an organization, or a particular data analysis problem domain.

146) Which one is more denormalized; ODS or Infocube?
A) The InfoCube is more denormalized than the ODS.

147) Is CO-PA delta capable ?
A) Yes, CO-PA is delta capable.

148) What is the "replication of DataSource" process ?
A) Replication of a DataSource enables the extract structure from the source system to be replicated in the target BW system.

149) Any quality checks available for inefficient cube designs ?
A) Huge Dimension tables make a cube inefficient.

150) Why is the star schema not implemented for the ODS as well ?
A) Because the ODS is meant to store detailed documents for quick perusal and to help make short-term decisions.

151) Why do we need separate update rules for characteristics on each key figure?
A) It is dependent on the Business requirement.

152) Use of Hierarchies.
A) Efficient reporting is one of the targets of using hierarchies. Easy drilldown paths can be built using hierarchies.

153) What is "Referential Integrity"?
A) A feature provided by relational database management systems (RDBMSs) that prevents users or applications from entering inconsistent data. For example, suppose Table B has a foreign key that points to a field in Table A. Referential integrity would prevent you from adding a record to Table B that cannot be linked to Table A. In addition, the referential integrity rules might also specify that whenever you delete a record from Table A, any records in Table B that are linked to the deleted record will also be deleted. This is called cascading delete. Finally, the referential integrity rules could specify that whenever you modify the value of a linked field in Table A, all records in Table B that are linked to it will also be modified accordingly. This is called cascading update.

154) What is a Transactional Cube and when is it preferred?
A) Transactional InfoCubes differ from Basic InfoCubes in their ability to support parallel write accesses. Basic InfoCubes are technically optimized for read accesses to the detriment of write accesses. Transactional cubes are designed to meet the demands of SEM, where multiple users write simultaneously into a cube and data is read as soon as possible.

155) When is the data in Change Log table of ODS deleted.
A) Deleting data from the change log for an ODS object is recommended if several requests, which are no longer required for the delta update and also are no longer used for an initialization from the change log, have already been loaded into the ODS object. If a delta initialization for the update exists in connected data targets, the requests have to be updated first before the respective data can be deleted in the change log.

156) On what occasions do we have different update rules for each of the Key Figures in an Info Cube and how would data be stored in such cases.
A) If we want to give different values to characteristics depending on each of the key figure values, we have different update rules. Say we have two key figures, cost and profit, and an entry for account type: depending on each key figure we can classify the account as high cost, low cost, high profit or low profit. If we have separate update rules for each of the key figures, there can be multiple rows in the InfoCube corresponding to each row in the transaction data.

157) When are "Hierarchies" used in an info object and how do they differ from the hierarchies available in BEx while querying.
A) Hierarchies are used for modeling hierarchical structures. Hierarchies defined on InfoObjects are loaded like master data, whereas hierarchies in BEx are created while querying. Furthermore, in BEx we have the flexibility of exchanging the nodes and leaves.

158) What kinds of data fields are used in Line Items, Transactional Figures and Cost of Sales Ledger?
A) Check the respective tables in R/3.

159) What are Aggregates and when are they used?
A) An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form in the database. Aggregates make it possible to access InfoCube data quickly in reporting. Aggregates can be used in the following cases:
1. The execution and navigation of query data leads to delays with a group of queries.
2. You want to speed up the execution and navigation of a specific query.
3. You often use attributes in queries.
4. You want to speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels.

160) How is the data of different modules stored in R/3?
A) Data is stored in multiple tables in R/3 based on the ERM (entity relationship model) to prevent redundant storage of data.

161) In what cases do we transfer data from one InfoCube to another?
A) Modifications can't be made to an InfoCube if data is present in it. If we want to modify an InfoCube and no backup of the data exists, we can design another InfoCube with the specified parameters and load the data from the old InfoCube.

162) How often do we have a Multi-layered structure in ODS stage and in what cases.
A) Multi-layered structure in ODS stage is used to consolidate data from different data sources.

163) How is data extracted from systems other than R/3 and Flat files?
A) Data is extracted from systems other than R/3 and flat files using staging BAPIs.

164) When do TRFC and iDOC errors occur?
A) An intermediate document (IDoc) is a container for exchanging data between R/3, R/2 and non-SAP systems. IDocs are sent in the communication layer by transactional Remote Function Call (tRFC) or by other file interfaces (for example, EDI). tRFC guarantees that the data is transferred only once. (Was not able to find out when the errors occur.)

165) On what occasions do the key figures become attributes of characteristics?
A) When we want to display that particular key figure as a display attribute in the report. Key figures can only be made display attributes of InfoObjects. Suppose we are reporting on the performance of each sales person; we can declare the salary of the sales person as an attribute. Furthermore, key figures like net price (price per unit quantity or price per item), used as an attribute of a product, can be used to calculate key figures like total price (by multiplying net price with quantity using formulas).

166) Why is there a restriction of 16 Dim tables in an Info Cube and 16 key fields in an ODS.

167) On what factors does the loading time depend on?
A) Loading time depends on the workload on both the BW side and the source system side. It may also depend upon the network connectivity.

168) How long does it take to load a million records into an info cube from an R/3 system?
A) Loading time varies depending on the workload on the BW and source system sides. Typically it takes about half an hour to load a million records.

169) Will the loading time be the same for the same amount of data for non-SAP systems like flat files?
A) It might not be the same; it depends on the extraction programs used on the source system side.

170) Can you tell me about a situation when you implemented a Remote Cube.
A) A RemoteCube is used when we would like to report on transactional data. In a RemoteCube, data is not stored on the BW side. It is ideally used when detailed data is required and we want to bypass loading the data into BW.

171) What is mySAP.com?
A) The SAP solution to integrate all relevant business processes on the Internet. mySAP.com integrates business processes in SAP and non-SAP systems seamlessly, and provides a complete business environment for electronic commerce.

172) How is BW superior to other data warehousing tools (if it is superior)?
A) SAP BW provides good compatibility with other SAP products.

173) Can we just load the transaction data without loading the master data from a source system when we are sure we are not going to query on the master data.
A) Yes, you can.

174) What is operating concern and partitioning in CO-PA.
A) An operating concern is the set of characteristics based on which we want to analyze the performance of the company. Partitioning is dividing the data into different datasets depending on certain characteristics; it enables parallel access to the data.

175) What is the difference between value fields and key figures in CO-PA.
A) Value fields comprise the data which CO-PA gets from various modules in R/3, whereas key figures are derived from these value fields.

176) How is the performance of an info cube measured?
A) InfoCube performance can be measured by query response time.

177) What factors are used in measuring the performance of a query?
A) Query response time is used for measuring the performance of a query.

178) What is process chain and how you used it?
A) We have used process chains to automate the delta loading process. Once you are finished with your design and testing, you can automate the processes listed in RSPC. I have a real-time example in the attachment.

179) What are Remote cubes and how you accessed and used it in your project?
A) It is an InfoProvider which does not physically store data but is used for non-trivial reporting. I have not used one, but an example would be: say you want to compare data consistency between R/3 and BW; you can generate a report on a RemoteCube and compare it with a report in BW.

180) Hope you have worked on enhancements; which user exit did you work on, can you explain?

181) What is the t-code for generic extractor?
A) RSO2

182) What is infoset query?
A) An InfoSet is an InfoProvider which does not store data; it is only a view and needs to be built as a join. In Treasury we built the currency exchange report this way: the report is not used often, so its data is stored in an ODS, and we built an InfoSet to get data from another object and built the report on it. On an ODS, once you flag it as reportable and start running a query, it is no longer a plain flat table but follows a star schema, and reporting becomes slow.

183) What is the purpose of aggregates?
A) They are used to store frequently reported data. Once you fill an aggregate and activate it, BEx checks for aggregates before running a query and brings the data much faster, so query performance improves a lot.

184) How did you do data modeling in your project? Explain
A) Initially we study the business process of the client: what kind of data is flowing in the system, its volume, the changes taking place in it, the analysis done on the data by users, what they are expecting in the future, and how we can use the BW functionality. Later we have meetings with the business analysts and propose the data model based on the client's needs. Then we give a proof-of-concept demo, wherein we show how we are going to build a BW data warehouse for their system. Once you get approval, you start requirements gathering and building your model, and testing follows in QA.

185) As you said you have worked on cubes and ODSs, which one is better suited for reporting? Explain, and what are the drawbacks and benefits of each one.
A) Depending on what you want to report, we store the data in a cube or an ODS. Generally BW is used to store high volumes of data with faster reporting, wherein the InfoCube is used. We store master data in other tables, and transaction data, which is basically numbers, is stored in the cube. So the property of indexing works here, and reporting is fast as we have only numeric values in a cube.
B) When you load master data first, the SIDs are created for that data. When you load the transaction data, it looks for the master data SIDs and gets linked using DIMs. You have this in a cube, so your reporting is going to be fast, as both of them are numbers.
C) In an ODS we store data of more detail, using its flat structure. Reporting on this will be slow for the reasons given in the earlier answers.

186) What are the different cubes you worked on in FI?

187) What is delta upload? What is the use of delta upload? Is only the changed or added data extracted, or is the full data extracted?
A) To load real-time data and make accurate decisions, we use delta upload.

188) What are hierarchies? Explain how you used them in your project.

189) What is t-code for CO-PA?
A) KEB0

190) What is SID ? What is the impact in using SID?

191) What is Table partitioning? What are Return Tables?

192) What is the t-code for Query Monitor?
A) RSRT

193) Apart from R/3, which legacy DB did you use for extraction ?
A) Access, Informatica

194) What are the three ODS object tables? Explain.

195) Can you explain start routines and how you used them in your project? Give an example.



Sunday, November 18, 2012

SAP BI INTERVIEW QUESTIONS



Q1:  WHAT ARE THE STEPS INVOLVED IN LO EXTRACTION?
Go to Transaction LBWE (LO Customizing Cockpit)
          1). Select Logistics Application
               e.g. SD Sales BW
                      Extract Structures
          2). Select the desired Extract Structure and deactivate it first.
          3). Give the Transport Request number and continue
          4). Click on 'Maintenance' to maintain such Extract Structure
              Select the fields of your choice and continue
               Maintain DataSource if needed
          5). Activate the extract structure
          6). Give the Transport Request number and continue
               Next step is to delete the setup tables
          7). Go to T-Code SBIW
          8). Select Business Information Warehouse
                    i. Setting for Application-Specific Datasources
                    ii. Logistics
                    iii. Managing Extract Structures
                    iv. Initialization
                    v. Delete the content of Setup tables (T-Code LBWG)
                    vi. Select the application (01 – Sales & Distribution) and Execute
                 Now, Fill the Setup tables
          9). Select Business Information Warehouse
                    i. Setting for Application-Specific Datasources
                    ii. Logistics
                    iii. Managing Extract Structures
                    iv. Initialization
                    v. Filling the Setup tables
                    vi. Application-Specific Setup of statistical data
                    vii. SD Sales Orders – Perform Setup (T-Code OLI7BW)

                  Specify a Run Name and time and Date (put future date)

                  Execute

                  Check the data in Setup tables at RSA3
                  Replicate the Data Source

Use of setup tables:

          You fill the setup tables in the R/3 system and extract the data to BW (the setup tables are filled via SBIW); after that you can do delta extractions by initializing the extractor.
          Full loads are always taken from the setup tables.



Q2: HOW DELTA WORKS FOR LO EXTRACTION AND WHAT ARE UPDATE METHODS?           
Type 1: Direct Delta
Each document posting is directly transferred into the BW delta queue.
Each document posting with delta extraction leads to exactly one LUW in the respective BW delta queues.
Type 2: Queued Delta
Extraction data is collected for the affected application in an extraction queue.
A collective run, as usual, transfers the data into the BW delta queue.
Type 3: Un-serialized V3 Update
Extraction data is written, as before, into the update tables with a V3 update module.
A V3 collective run transfers the data to the BW delta queue.
In contrast to serialized V3, the data in the updating collective run is read from the update tables without regard to sequence.

Q3: HOW TO CREATE GENERIC EXTRACTOR?
1.  Select the DataSource type and give it a technical name.
2.  Choose Create.
      The screen for creating a generic DataSource appears.
3.  Choose an application component to which the DataSource is to be assigned.
4.  Enter the descriptive texts. You can choose any text.
5.  Choose from which datasets the generic DataSource is to be filled.
     Choose Extraction from View, if you want to extract data from a transparent table or a database view.
     Choose Extraction from Query, if you want to use an SAP Query InfoSet as the data source. Select the required InfoSet from the InfoSet catalog.
     Choose Extraction using FM, if you want to extract data using a function module. Enter the function module and extract structure.
     With texts, you also have the option of extraction from domain fixed values.
6.  Maintain the settings for delta transfer where appropriate.
7.  Choose Save.
     When extracting, look at SAP Query: Assigning to a User Group.

 
Note: when extracting from a transparent table or view:

    If the extract structure contains a key figure field that references a unit of measure or currency unit field, this unit field must appear in the same extract structure as the key figure field.

A screen appears in which you can edit the fields of the extract structure.

8. Choose DataSource → Generate.
   
   The DataSource is now saved in the source system.
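For the function module variant (step 5 above), here is a condensed sketch modeled on the delivered example template RSAX_BIW_GET_DATA_SIMPLE; the source table ZORDERS and extract structure ZORDERS_EXTRACT are hypothetical, and the conversion of I_T_SELECT into a WHERE clause is omitted for brevity:

FUNCTION z_biw_get_data_zorders.
*"  IMPORTING
*"     VALUE(I_REQUNR)   TYPE SRSC_S_IF_SIMPLE-REQUNR
*"     VALUE(I_DSOURCE)  TYPE SRSC_S_IF_SIMPLE-DSOURCE  OPTIONAL
*"     VALUE(I_MAXSIZE)  TYPE SRSC_S_IF_SIMPLE-MAXSIZE  OPTIONAL
*"     VALUE(I_INITFLAG) TYPE SRSC_S_IF_SIMPLE-INITFLAG OPTIONAL
*"  TABLES
*"      I_T_SELECT TYPE SRSC_S_IF_SIMPLE-T_SELECT OPTIONAL
*"      I_T_FIELDS TYPE SRSC_S_IF_SIMPLE-T_FIELDS OPTIONAL
*"      E_T_DATA   STRUCTURE ZORDERS_EXTRACT OPTIONAL
*"  EXCEPTIONS
*"      NO_MORE_DATA
*"      ERROR_PASSED_TO_MESS_HANDLER

  STATICS: s_counter TYPE i,
           s_cursor  TYPE cursor.

  " First call of the request: initialization only, no data returned
  IF i_initflag = 'X'.
    CLEAR s_counter.
    RETURN.
  ENDIF.

  " Open the cursor on the first data call (selections from I_T_SELECT
  " would normally be turned into a WHERE clause here)
  IF s_counter = 0.
    OPEN CURSOR WITH HOLD s_cursor FOR
      SELECT * FROM zorders.
  ENDIF.

  " Return one data package per call until the cursor is exhausted
  FETCH NEXT CURSOR s_cursor
        APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
        PACKAGE SIZE i_maxsize.
  IF sy-subrc <> 0.
    CLOSE CURSOR s_cursor.
    RAISE no_more_data.
  ENDIF.

  s_counter = s_counter + 1.
ENDFUNCTION.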






Q4: HOW TO ENHANCE A DATASOURCE?

Step 1: Go to T Code CMOD and choose the project you are working on.
Step 2: Choose the exit which is called when the data is extracted.
Step 3: There are two options
            Normal Approach: CMOD Code
            Function Module Approach: CMOD Code
Step 4: In this step we create a function module for each DataSource; we create a new FM (function module) in SE37.

Data Extractor Enhancement - Best Practice/Benefits:

This is the best practice of data source enhancement. This has the following benefits:

  • No more locking of the CMOD code by one developer, stopping others from enhancing other extractors.
  • Testing of an extractor becomes more independent of the others.
  • A faster and more robust approach.
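A hedged sketch of the classic CMOD approach (step 3 above): the customer include ZXRSAU01 is called from EXIT_SAPLRSAP_001 for transaction data. MC11VA0ITM is the extract structure of 2LIS_11_VAITM; the appended field ZZREGION, the lookup table ZCUST_REGION and the assumption that KUNNR is available are all hypothetical:

* Include ZXRSAU01 (enhancement RSAP0001, transaction data)
DATA: ls_vaitm TYPE mc11va0itm.

CASE i_datasource.
  WHEN '2LIS_11_VAITM'.
    LOOP AT c_t_data INTO ls_vaitm.
      " Fill the appended field ZZREGION from a custom table (hypothetical)
      SELECT SINGLE region FROM zcust_region
             INTO ls_vaitm-zzregion
             WHERE kunnr = ls_vaitm-kunnr.
      MODIFY c_t_data FROM ls_vaitm INDEX sy-tabix.
    ENDLOOP.
ENDCASE.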

Q5: WHAT ARE VARIOUS WAYS TO MAKE GENERIC EXTRACTOR DELTA ENABLED?  
A delta-relevant field in the extraction structure of the DataSource must meet one of the following criteria:
1. The field is of type time stamp. New records to be loaded into BW using a delta upload have a higher entry in this field than the time stamp of the last extraction.
2. The field is of type calendar day. The same criterion applies to new records as in the time stamp case.
3. The field has another type. This case is only supported for SAP Content DataSources. Here, the maximum value to be read must be determined using a DataSource-specific exit when beginning data extraction.
Q6: WHAT ARE SAFETY INTERVALS?
This field is used by DataSources that determine their delta generically using a continuously increasing field in the extract structure.

The field contains the discrepancy between the current maximum when the delta or delta init extraction took place and the data that has actually been read.

Leaving the value blank increases the risk that the system does not extract records that arise during the extraction.

Example:

 A time stamp is used to determine the delta. The time stamp that was last read is 12:00:00. The next delta extraction begins at 12:30:00. In this case, the selection interval is 12:00:00 to 12:30:00. At the end of the extraction, the pointer is set to 12:30:00.
A record (for example, a document) is created at 12:25 but not saved until 12:35. It is not contained in the extracted data, and because its time stamp lies inside the interval already read, it is not extracted the next time either. With a safety interval of, for example, five minutes, the pointer would only be set to 12:25:00, so the next extraction selects from 12:25:00 onwards and picks the record up.


Q7: HOW IS COPA DATASOURCE SET UP?
R/3 System

1. Run KEB0
2. Select Datasource 1_CO_PA_CCA
3. Select Field Name for Partitioning (Eg, Ccode)
4. Initialize
5. Select characteristics & Value Fields & Key Figures
6. Select Development Class/Local Object
7. Workbench Request
8. Edit your Data Source to Select/Hide Fields
9. Extract Checker at RSA3 & Extract

BW System

1. Replicate Data Source
2. Assign Info Source
3. Transfer all Data Source elements to Info Source
4. Activate Info Source
5. Create Cube on Infoprovider (Copy str from Infosource)
6. Go to Dimensions and create dimensions, Define & Assign
7. Check & Activate
8. Create Update Rules
9. Insert/Modify KF and write routines (const, formula, abap)
10. Activate
11. Create InfoPackage for Initialization
12. Maintain Infopackage
13. Under Update Tab Select Initialize delta on Infopackage
14. Schedule/Monitor
15. Create Another InfoPackage for Delta
16. Check on DELTA Option
17. Ready for Delta Load

Q8: WHAT ARE VARIOUS WAYS TO TRACK DELTA RECORDS?
Ans:

RSA7, LBWQ, IDocs and SMQ1.
BW Data Modeling

Q1: WHAT ARE START ROUTINES, TRANSFORMATION ROUTINES, END ROUTINES, EXPERT ROUTINE AND RULE GROUP? 

Start Routine

The start routine is run for each data package at the start of the transformation. The start routine has a table in the format of the source structure as input and output parameters. It is used to perform preliminary calculations and store these in a global data structure or in a table. This structure or table can be accessed from other routines. You can modify or delete data in the data package.

Routine for Key Figures or Characteristics

This routine is available as a rule type; you can define the routine as a transformation rule for a key figure or a characteristic. The input and output values depend on the selected field in the transformation rule.

End Routine

An end routine is a routine with a table in the target structure format as input and output parameters. You can use an end routine to postprocess data after transformation on a package-by-package basis. For example, you can delete records that are not to be updated, or perform data checks.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE). You have to use a dummy rule to override this.
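A minimal sketch of an end routine body as it appears in the routine editor of a BW 7.x transformation; the METHOD frame and the generated line type _ty_s_TG_1 come from the system, and the fields /BIC/ZZDELFLAG and QUANTITY are hypothetical target fields:

METHOD end_routine.
  " Delete records that are not to be updated (hypothetical flag field)
  DELETE result_package WHERE /bic/zzdelflag = 'X'.

  " Simple data check: no negative quantities reach the target
  FIELD-SYMBOLS: <ls_result> TYPE _ty_s_tg_1.
  LOOP AT result_package ASSIGNING <ls_result>.
    IF <ls_result>-quantity < 0.
      <ls_result>-quantity = 0.
    ENDIF.
  ENDLOOP.
ENDMETHOD.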

Expert Routine

This type of routine is only intended for use in special cases. You can use the expert routine if there are not sufficient functions to perform a transformation. The expert routine should be used as an interim solution until the necessary functions are available in the standard routine.
You can use this to program the transformation yourself without using the available rule types. You must implement the message transfer to the monitor yourself.
If you have already created transformation rules, the system deletes them once you have created an expert routine.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE).

Rule Group 

A rule group is a group of transformation rules. It contains one transformation rule for each key field of the target. A transformation can contain multiple rule groups.
Rule groups allow you to combine various rules. This means that for a characteristic, you can create different rules for different key figures.
          



Q2: WHAT ARE THE DIFFERENT TYPES OF DSOs AND THEIR USAGE?

Ans: The three types compare as follows:

Standard DataStore Object
  - Structure: three tables (activation queue, table of active data, change log)
  - Data supply: from a data transfer process
  - SID generation: yes
  - Usage: the standard data warehouse layer; the change log makes delta updates to further data targets possible

Write-Optimized DataStore Object
  - Structure: table of active data only
  - Data supply: from a data transfer process
  - SID generation: no
  - Usage: fast staging of large data volumes, since no separate activation step is needed

DataStore Object for Direct Update
  - Structure: table of active data only
  - Data supply: from APIs
  - SID generation: no
  - Usage: filled directly by programs via APIs (for example, from an analysis process) rather than by a data transfer process

Q3: WHAT IS COMPOUNDING?

You sometimes need to compound InfoObjects in order to map the data model. Some InfoObjects cannot be defined uniquely without compounding.

For example, if storage location A for plant B is not the same as storage location A for plant C, you can only evaluate the characteristic Storage Location in connection with Plant. In this case, you compound the characteristic Storage Location with Plant so that the characteristic becomes unique.

Using compounded InfoObjects extensively, particularly when the compounding includes many InfoObjects, can hurt performance. Do not try to model hierarchical relationships through compounding; use hierarchies instead.

A maximum of 13 characteristics can be compounded for an InfoObject. Note that characteristic values can have a maximum of 60 characters in total; this includes the concatenated value, that is, the total length of the characteristics in the compounding plus the length of the characteristic itself.




Q4: WHAT IS LINE ITEM DIMENSION AND CARDINALITY?

1. Line item:

This means the dimension contains precisely one characteristic, so the system does not create a dimension table. Instead, the SID table of the characteristic takes on the role of the dimension table. Removing the dimension table has the following advantages:

  • When loading transaction data, no dimension IDs have to be generated for entries in the dimension table. This number-range operation can hurt performance precisely in the case where a degenerated dimension is involved.

  • A table with a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler, and in many cases the database optimizer can choose better execution plans.

Nevertheless, there is also a disadvantage: a dimension marked as a line item cannot later be given additional characteristics. This is only possible with normal dimensions.

It is recommended that you use DataStore objects instead of InfoCubes for line-item-level data, where possible.

2. High cardinality:

This means that the dimension has a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform; for example, different index types are used than is normally the case. A general rule is that a dimension has a high cardinality if the number of dimension entries is at least 20% of the number of fact table entries. For example, in an InfoCube with 100 million fact table rows, a dimension with 25 million entries (25%) would count as having high cardinality. If you are unsure, do not select the high cardinality flag.

Q5: WHAT IS REMODELING?
You want to modify an InfoCube into which data has already been loaded. You use remodeling to change the structure of the object without losing data.

If you want to change an InfoCube into which no data has been loaded yet, you can simply change it in InfoCube maintenance.

You may want to change an InfoProvider that has already been filled with data for the following reasons:

  • You want to replace an InfoObject in an InfoProvider with another, similar InfoObject: for example, you created an InfoObject yourself but want to replace it with a BI Content InfoObject.

  • The structure of your company has changed. The changes to your organization make different compounding of InfoObjects necessary.

Q6: HOW IS ERROR HANDLING DONE IN DTP?

At runtime, erroneous data records are written to an error stack if error handling is activated for the data transfer process. You use the error stack to update the data to the target once the errors are resolved.

With an error DTP, you can update the data records to the target manually or by means of a process chain. Once the data records have been updated successfully, they are deleted from the error stack. Any records that are still erroneous are written to the error stack again in a new error DTP request.

  1. On the Extraction tab page, under Semantic Groups, define the key fields for the error stack.
  2. On the Update tab page, specify how the system should respond to data records with errors.
  3. Specify the maximum number of incorrect data records allowed before the system terminates the transfer process.
  4. Make the settings for temporary storage by choosing Goto → Settings for DTP Temporary Storage.
  5. Once the data transfer process has been activated, create an error DTP on the Update tab page and include it in a process chain. If errors occur, start it manually to update the corrected data to the target.

Q7: WHAT IS THE DIFFERENCE IN TEMPLATE/REFERENCE?

If you choose a template InfoObject, its properties are copied to the new InfoObject as a starting point, and you can edit them as required; the new object is independent of the template.

Several InfoObjects can use the same reference InfoObject. InfoObjects of this type automatically share the same technical properties and master data.
BW Reporting (BEx)         

Q1: WHAT IS THE USE OF CONSTANT SELECTION?

In the Query Designer, you use selections to determine the data you want to display at the report runtime. You can alter the selections at runtime using navigation and filters. This allows you to further restrict the selections.

The Constant Selection function allows you to mark a selection in the Query Designer as constant. Navigation and filtering then have no effect on that selection at runtime. This makes it easy to select reference values that do not change at runtime.

Example: In the InfoCube, actual values exist for each period, but plan values exist only for the entire year and are posted in period 12. To compare PLAN and ACTUAL values, you define a PLAN column and an ACTUAL column in the query, restrict PLAN to period 12, and mark this selection as constant. You then always see the plan values, whichever period you navigate to.


Q2: WHAT IS THE USE OF EXCEPTION CELLS?

When you define selection criteria and formulas for structural components and a query contains two structures, generic cell definitions are created at the intersections of the structural components; these determine the values presented in the cells.

Cell-specific definitions allow you to define explicit formulas and selection conditions for individual cells, overriding the implicitly created cell values. This lets you design much more detailed queries.

In addition, you can define cells that have no direct relationship to the structural components. These cells are not displayed; they serve as containers for auxiliary selections or auxiliary formulas.

Q3: HOW TO TAKE DIFFERENCE IN DATES AT REPORT LEVEL?

1. In the new formula window, right-click Formula Variable and choose New Variable.
2. Enter the variable name and description, and select Replacement Path in the "Processing by" field. Click Next.
3. On the Characteristic screen, select the date characteristic that represents the first date to use in the calculation.
4. On the Replacement Path screen, select Key in the "Replace Variable with" field. Leave all other options as they are (the offset values are set automatically).
5. On the Currencies and Units screen, select Date as the Dimension ID.

Repeat the same steps to create a formula variable for the second date, then use the two variables in the calculation (see the sketch below).
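
As a sketch, assuming the two variables were given the descriptions Date 1 and Date 2, the new formula is then simply their difference; because both variables replace the date by its key with Dimension ID Date, the query returns the result as a number of days:

    'Date 2' - 'Date 1'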


continued.....