This document covers frequently asked questions from consultants, partners, and customers about the SAP Integrated Business Planning (IBP) solution. Its purpose is to cover important questions that come up during IBP implementations in application areas like the Excel Add-In, Model Configuration, Analytics, Process Management, System Setup and Transports, Releases, and Data Integration. This page will be updated frequently. If you have additional questions that need to be added, please contact Raghav Jandhyala (email@example.com) and Alecsandra Ghita (firstname.lastname@example.org)
Planning with IBP Excel Add-In
Q: How do I get started with creating a simple template?
A: Refer to SAP Note 1790530: S&OP / IBP Planning View Templates for the Excel Add-In
Q: How do I install the add-in and create a logon?
A: Refer to the Application Help > User Interface > Planning with Microsoft Excel > Information for Administrators. Also refer to SAP Note 2135948: Install the S&OP / IBP Excel Add-In: Supported Configurations / Prerequisites
Q: The IBP add-in disappeared from the menu bar. How can I bring it back?
A: This usually happens after an Excel session has closed unexpectedly. Check whether the IBP add-in is disabled and re-enable it: Excel Options -> Add-Ins -> Manage Disabled Items -> Select the IBP add-in -> Enable. Then go to COM Add-Ins and ensure that the IBP add-in is checked.
Q: Can a Planning View display different time granularity throughout the displayed time horizon?
A: This is planned with version 1705. Until then a planning view can display only one time profile level (one type of time granularity) at a time. To view multiple time levels you can use an EPM Local Member to aggregate to other time periods. Another option would be to create one template with 2 different planning views on two different tabs: For example, one to display data on monthly basis and the other one in quarterly buckets.
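Conceptually, the quarterly tab is just an aggregation of the monthly buckets. A minimal sketch of that roll-up with made-up values (illustrative only, not the add-in's internals):

```python
from collections import defaultdict

# Hypothetical monthly key figure values keyed by (year, month)
monthly = {
    (2024, 1): 100, (2024, 2): 120, (2024, 3): 80,
    (2024, 4): 90,  (2024, 5): 110, (2024, 6): 130,
}

# Roll monthly buckets up into quarters, as the quarterly tab would show them
quarterly = defaultdict(float)
for (year, month), value in monthly.items():
    quarter = (month - 1) // 3 + 1
    quarterly[(year, quarter)] += value
# quarterly: {(2024, 1): 300.0, (2024, 2): 330.0}
```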
Q: Is S&OP Excel Add-In multilingual?
A: Yes, upon installation of the add-in you can choose from multiple languages; only one language can be installed at a time. This is the list of currently supported languages:
- Chinese (Simplified)
Q: Are the planning views charts IBP or Excel functionality? What is the recommended number of rows to be taken into account for the total calculations in the planning view charts?
A: In the SAP example templates (of note 1790530) they are implemented with Microsoft Excel functionality. Only the data and the axes of the planning view are generated by the add-in. The entire header area including the headings for the row axis are part of the template.
There are two types of templates: formula-based templates and VBA-driven templates. The performance of formula-based templates deteriorates if more than 300 rows are included in the chart.
There are other ways to implement charts in Microsoft Excel, of course, and for specific layouts of planning views something simpler could be implemented.
In general template development beyond the SAP samples is done by implementation projects.
Q: What are the performance recommendations for Excel Planning Views?
A: Refer to the following SAP Notes on Performance and Sizing recommendations of Planning Views
Q: Are the planning views transferable?
A: Planning views are not transferable unless the planning areas are identical in key figures and attributes. If the planning areas are identical, you log on to the one you want to copy from, load the planning view, log off, log on to the one you want to copy to and refresh. Then you just add it as a favorite or template.
If you need transfer between different planning areas, you need to have an empty template (i.e. the chart, the hidden sheets, the header information etc., but no planning view in it). SAP delivers some example empty templates with note 1790530.
Q: Is it possible to use Excel macros and save them at template level?
A: Yes, macros can be used and saved in templates or favorites. You may have to add the path in which the add-in stores your workbooks (typically …\Documents\SAP_IBP) as a trusted location for these workbooks to work properly.
Q: Could it be possible to track master data changes independent of impact on Key Figures?
A: We currently track changes to master data only as they relate to key figure values. Tracking changes of master data by itself is not a requirement that customers have asked about, but we’re interested in such use cases.
Q: Is it possible to view history changes in IBP Excel UI?
A: Yes, with IBP version 6.3 or higher.
Q: Do we have the ability to automatically run an MS Excel macro when opening a favorite?
A: It depends on what you want to do in the macro. The IBP Excel Add-In also uses the events triggered by Microsoft Excel upon opening a workbook and activating a sheet. If your macro interferes with what the add-in does, the add-in may not work properly, so you need to test your macro together with the add-in.
Q: Can you also use Excel calculations in cells? If yes, are these saved in HANA DB?
A: You can use formulas to compute editable key figures. The resulting numbers will be stored in HANA DB, but not the formulas. With the sheet option Keep Formula on Data you can keep those formulas in the planning view.
Q: I entered a calculation directly in the planning view, why is it not automatically converted into local member?
A: The formula is automatically converted into local members only if the Local Member Recognition feature is activated. Edit View > Sheet Options > Activate Local Member Recognition
Q: If multiple users are using the same data, how do table locks work?
A: To avoid blocking users unnecessarily there is no locking on application level. The assumption is that users rarely work on the same data at the same point in time in an actual S&OP or demand planning process. If two users save data, the system takes the last change saved. Whenever you query in Excel (that is, after refresh, simulate, or changing a filter) you see the newest saved data, except for changes you made yourself in simulation mode.
Q: When you press Simulate in the MS Excel UI, do all key figures in the model get calculated or just the ones you have chosen in that specific planning view?
A: When pressing simulate, the system disaggregates and updates the changed values and from there computes all relevant calculated key figure values to show the correct result for everything that is shown in the planning view. This involves typically many more steps and calculations than just updating the changed values.
When you change your filter or planning level or list of key figures, the system dynamically computes what’s necessary for the new visible set of key figure values without having to leave the simulation.
Q: What happens if we copy one of the SAP Sample Planning Areas into a new Planning Area, if it uses an attribute whose length was already changed? Will the 'active' attributes length change back to SAP delivered lengths?
A: No, the model will keep consistency and use the new attribute length previously defined. For example, if you have changed the length of the "CUSTID" attribute from 20 to 30 in your attribute definition, and the SAP4 sample model has this same attribute with length 20, the copied planning area will use the new length, that is, 30 in this case.
Q: Is it possible to activate more than one planning area at the same time?
A: Currently, activating two planning areas concurrently is not possible. The system is designed that way to avoid back-end deadlocks during the process, and so that the activation process itself is faster.
Q: How do I copy and activate a SAP delivered planning area?
A: For guidance on how to copy and then activate a SAP sample planning area, please refer to the following blog post:
SAP4 Planning Area Copy / Activate / Load
Going further, part 2 of this blog will guide you on how to load data into IBP via flat file.
SAP4 Planning Area Copy / Activate / Load - part 2
Q: How do I model material dependent Unit of Measure conversions?
A: The base UOM can be modeled as an attribute of the Product Master. This will allow loading transactional key figure data in the base unit of measure. Refer to the model configuration guide.
Another option that would allow specifying the unit of measure at the time of data load would be modeling UOM conversion similar to currency conversion (maintaining the base UOM as a planning level root attribute).
Q: What are some guidelines when to use a compound master data type?
A: A Compound Master Data Type is used to store attributes that belong to multiple Master Data Type Keys. For example in the SAP2 planning area, Market Family and Market Segment are attributes of the combination of Customer and Product and are maintained via the master data type “Customer Product”. There is a foreign key relationship between the keys of the compound and the referenced master data types with individual keys.
Based on the planning area settings, if an attribute of a compound master data type is chosen as mandatory, then data will not be loaded for a key figure unless a corresponding master data entry exists in the compound master data type.
Q: How do we use compound master data in planning levels build?
A: For planning levels which include primary master data types like Product and Customer, the Compound Master Data Type “Customer Product” should also be included so that the Market Size and Market Revenue attributes are available. Also, a Planning Level can include a root attribute coming from a Compound Master Data Type where this attribute is not a key. Example: Forecast Location is an attribute of a compound master data type “Customer Sales Org”. This attribute can be root of a planning level such as Product - Forecast Location.
Q: What is a virtual master data type and how is it used?
A: A Virtual Master Data Type doesn’t store data but models a relationship between other master data types. It is used to join two or more MDTs with a join condition. This is usually used when an attribute of one Master Data Type should be available for another. Example: The master data type “Product Customer Group” has an attribute Active. This attribute should be available for all Customers belonging to the Customer Group. To achieve this we need to join the Product Customer Group and Customer Master Data Types. The join is on Customer Group.
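The join performed by a virtual master data type can be illustrated with made-up records (the attribute and key names below are illustrative, following the example above):

```python
# Hypothetical master data: the "Active" attribute lives on Product Customer Group
product_customer_group = [
    {"PRDID": "P1", "CUSTGROUP": "CG1", "ACTIVE": "Y"},
    {"PRDID": "P1", "CUSTGROUP": "CG2", "ACTIVE": "N"},
]
customer = [
    {"CUSTID": "C1", "CUSTGROUP": "CG1"},
    {"CUSTID": "C2", "CUSTGROUP": "CG1"},
    {"CUSTID": "C3", "CUSTGROUP": "CG2"},
]

# Join on CUSTGROUP, as a virtual master data type would, so ACTIVE
# becomes available per product/customer combination
joined = [
    {"PRDID": pcg["PRDID"], "CUSTID": c["CUSTID"], "ACTIVE": pcg["ACTIVE"]}
    for pcg in product_customer_group
    for c in customer
    if pcg["CUSTGROUP"] == c["CUSTGROUP"]
]
```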
Q: When should I use a reference master data type?
A: A reference master data object is a view on an existing master data object. Reference Master Data Type is master data type that refers to another master data type. It does not contain actual master data but simply refers to the data contained in its underlying Master Data Type. It is required when the underlying data is the same but can play different semantic roles and you want to avoid loading the same data twice. For example, consider “Product” and “Component”. Data-wise a component is also a product. So one could model “Product” as an independent Master Data Type whereas “Component” could become a Reference Master Data Type, which refers to master data type “Product”. Of course, this is optional as the master data type “Component” could also be defined separately from “Product” and a separate set of data could be loaded for components.
Q: In key figure creation, please explain the concepts of base planning level and request level. At what level should calculations happen?
A: The base planning level defines the planning level at which a key figure is stored. On top of this one can build a graph of calculations that iteratively define how other key figures are calculated based on the stored key figure. The graph ends with request level nodes, which are the ones that a user queries from the IBP Excel UI or IBP Analytics. At query time the graph is evaluated from request level to stored key figures to determine what needs to be read, and then in the reverse order to do the actual calculations.
Usually the base planning level is the most detailed planning level for a key figure. That means you can usually display results for a key figure at base planning level (if this is your request level) or at any aggregated (request) level according to the defined configuration. However, there are instances where you want to configure a KF stored at an aggregated level (for example a financial quantity at Product Family and Customer Group level) but want to display it at product/customer detail level. Then you can use, for example, a split factor calculation to calculate more detailed planning levels than the base planning level, so that at request level you can show the key figure for customers and products.
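As a rough illustration of base vs. request level, querying a key figure at an aggregated request level sums the values stored at the base planning level (made-up data; the real calculations are defined in configuration and run inside HANA):

```python
from collections import defaultdict

# Key figure stored at base planning level Product-Customer (hypothetical values)
stored = {
    ("P1", "C1"): 10, ("P1", "C2"): 20, ("P2", "C1"): 5,
}

# Query at request level Product: aggregate over the Customer dimension
by_product = defaultdict(int)
for (product, _customer), value in stored.items():
    by_product[product] += value
# by_product: {"P1": 30, "P2": 5}
```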
Q: How do I perform addition and subtraction calculations where the Key Figures are at different base planning levels?
A: You can perform additions or subtractions on KFs at different planning levels, e.g. KF1@PERPRODCUST + KF2@PERPROD. In this case an inner join is performed and the KF2@PERPROD value will be available at planning level PERPRODCUST. You need to check for null conditions, e.g. ISNULL(KF2@PERPROD).
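To illustrate why the null check matters, here is a sketch with made-up values: without an ISNULL-style default, combinations that have no KF2 value would drop out of the result because of the inner join.

```python
# Hypothetical values: KF1 at Product-Customer level, KF2 at Product level
kf1 = {("P1", "C1"): 10, ("P1", "C2"): 20, ("P2", "C1"): 7}
kf2 = {"P1": 3}  # no value for product P2

# KF1 + KF2: KF2 is joined in via the shared Product attribute. Treating a
# missing KF2 value as 0 (an ISNULL-style default) keeps the P2 rows in the
# result instead of losing them to the inner join.
result = {
    (product, customer): value + kf2.get(product, 0)
    for (product, customer), value in kf1.items()
}
# result: {("P1", "C1"): 13, ("P1", "C2"): 23, ("P2", "C1"): 7}
```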
Q: For cross period calculations, what guidelines can I use to determine whether L-Code, attribute transformations, or some other solution should be used?
A: For cross period calculation, there are several configuration options available: Period transformations, Time Aggregation Calculation, Period to Data calculations etc.
Q: Where can I find documentation for L-Code?
A: L-Code is internal to SAP and there is no documentation available. Using L-Code is not recommended, and in many cases it can be avoided. The only use case for L-Code is calculations like a cumulative sum, where the KF at the prior period is used to calculate the KF at the current period.
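The cumulative-sum case can be illustrated with plain Python (made-up demand values; this only shows the arithmetic, not L-Code itself):

```python
from itertools import accumulate

# Hypothetical demand per period; a cumulative-sum key figure needs the
# prior period's result to compute the current period's value
demand = [100, 120, 80, 90]
cumulative = list(accumulate(demand))
# cumulative: [100, 220, 300, 390]
```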
Q: Are there plans to add additional statistical forecast models to the solution?
A: More statistical forecasting methods than the ones available with IBP for sales and operations come with IBP for demand. In release 5.0, IBP for demand already supports additional Statistical Models like MLR, Croston, Auto-Fit, and Weighted Moving Average.
Q: Is custom code like function modules and BADIs possible in IBP?
A: There is no concept of custom code or custom development. However, the entire model of planning areas and master data types is highly configurable to adapt to a customer’s use case. There are no code exits, but in practice most calculations can be performed through configuration. If more complex custom code is needed, you can contact SAP IBP Product Management.
Q: Can you partially copy configuration from one planning area to another?
A: Partial copy is available for the unified planning area since IBP 1608 release.
Q: Can certain calculations be batch scheduled as in other applications?
A: Yes, you can use Copy Operator to perform calculations and store in a target key figure. The Copy Operator can be scheduled or run on request from the IBP Excel UI.
Q: Are the delivered models just examples or can I use them as a starting point for my customer configuration?
A: Delivered models are example content. If the customer’s requirements fit closely to a delivered model, they can use it and adapt it further. In most cases we see that customers start with their own planning model. If Supply Planning or Inventory Optimization is needed, the SAP4 and SAP3 planning areas are copied as starting points for the customer planning area.
Q: Where can I find documentation on what each model does?
A: The Model Configuration Guide contains details on SAP delivered models. Also refer to the EKT materials, where some of the model details are covered. More comprehensive details of each model will be available in a future release.
Q: Do different application modules like Demand, Inventory etc. need to be in separate planning areas or can the functionality of both be configured in one planning area?
A: They can all be configured in one planning area.
Q: Can customer buy just Demand and then add Inventory? And incorporate it into the same planning area?
Q: Is it possible to copy a KF across different planning levels?
A: Calculations can be defined to copy KF values across planning levels. Refer to the Model Configuration Guide – Split Factor Calculation.
Q: Product interchangeability – transfer of forecast from one product (obsolete/discontinued) to another product (new product). Is it possible for products with attribute status discontinued not to be considered for demand disaggregation?
A: Interchangeability and new product introduction are on the IBP roadmap. Already today, you can generate statistical forecast for a new product based on the history of like products.
Q: If building from scratch, what master data should follow a naming convention referring to the standard model SAP4 so that supply planning operators can be used at a later stage?
A: Refer to the application help on Supply Planning. There are fixed semantics for attributes, master data types, and key figures, e.g. attributes: PRDID, LOCID, CUSTID, RESID, SOURCEID, etc.; master data types: <Prefix>PRODUCT, <Prefix>CUSTOMER, etc.; KFs: CONSENSUSDEMAND, PROJECTEDINVENTORY, etc. Refer to the SAP4 delivered model.
Q: Should we copy SAP standard delivered model and modify it or should we be starting building it from scratch? Pro and cons of it.
A: This decision depends entirely on the business blueprint identified during the blueprint phase. Every customer differs in how master data and key figures are used in the IBP process. When using Supply Planning or Inventory Optimization, the standard delivered models SAP4 and SAP3 should be used as a starting point, and additional KFs can be added to them.
Q: Data does not show up in Versions, though the same appears in baseline. What could be the issue?
A: Check if UOM and Currency have also been initialized in Versions.
Q: What does marking attributes as root/key mean for planning levels, and how do root attributes work?
A: Root attributes are necessary as keys to identify individual key figure values. They define the independent dimensions in which the key figure values exist. The root attributes are often also the keys of master data object types but this is not a necessary condition. See chapter 7 from model configuration guide.
Q: What do the start and end dates in the time profile mean and how are they used? What do the past and future periods for the periodicities in the time profile mean, and how do they relate to the ones in the planning area?
A: A time profile is made up of time profile levels, each of them being made up of periods. These periods are time intervals defined by an individual start and an end date for each period. The transactional data from a planning area is stored or calculated across these time profile periods. One defines start and end date of the time profile overall, so that the IBP process can be run for multiple years without changes to the time profile. Moreover, each planning area has a planning horizon where user defines the past and future periods to be considered in the planning algorithms. The periods that are part of this horizon are available for selection in the planning view - time settings. For more details check chapter 5 from the Model Configuration Guide.
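As an illustration of how periods partition the overall horizon, this sketch generates monthly buckets between a start and end date (a hypothetical helper, not IBP functionality):

```python
from datetime import date

def monthly_periods(start: date, end: date):
    """Yield (period_start, period_end) pairs covering [start, end) month by month."""
    year, month = start.year, start.month
    while date(year, month, 1) < end:
        # First day of the following month (rolls the year over after December)
        next_month = date(year + (month == 12), month % 12 + 1, 1)
        yield date(year, month, 1), min(next_month, end)
        year, month = next_month.year, next_month.month

# Three monthly periods covering Q1 2024
periods = list(monthly_periods(date(2024, 1, 1), date(2024, 4, 1)))
```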
Q: What are my options to disaggregate based on a calculated Key Figure at a different planning level during simulation?
A: Before calculated key figures can be used for disaggregation, they must be copied into another stored key figure at the same base planning level as the key figure to be disaggregated. This means that disaggregation works only if the disaggregation key figure can be calculated at the base planning level of the to-be disaggregated key figure. For more details, check 10.2 and 10.3 from the Model Configuration Guide. In special cases also the ADVSIM operator might be used.
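The split-factor idea can be sketched with made-up numbers: the aggregated value is distributed to the base-planning-level combinations in proportion to a stored split-factor key figure.

```python
# Hypothetical: disaggregate an aggregated value down to Product-Customer
# proportionally to a split-factor key figure stored at the base planning level
total = 600
split_factor = {("P1", "C1"): 1, ("P1", "C2"): 2, ("P1", "C3"): 3}

factor_sum = sum(split_factor.values())
disaggregated = {
    key: total * factor / factor_sum for key, factor in split_factor.items()
}
# disaggregated: {("P1","C1"): 100.0, ("P1","C2"): 200.0, ("P1","C3"): 300.0}
```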
Q: What is the difference between a KF, Helper KF and Attribute transformation?
A: A key figure is a quantitative measure whose values vary by a configurable set of keys and time periods, and which is either an input for planning or an output of planning. A helper key figure is a key figure that is used to hold intermediate calculation results in case of a complex multi-step calculation. Such key figures are not available for display in the planning view.
A calculation in which the value of an attribute gets calculated is referred to as an attribute transformation.
System Setup and Transports
Q: How would I get a brand new system set up for a customer, and how many environments are standard (dev, test, prod)?
A: After Subscription Contract is signed, the Customer gets an On-Boarding Checklist and a System is provided based on the Contract terms. Usually the contract covers a Test and a Production Tenant.
Q: Do customers have to buy JAM separately?
A: Yes, JAM is a separate license from IBP.
Q: Is IBP a cloud only solution or is there an On-Premise option?
A: SAP IBP is only available in the Cloud version. S&OP 3.0 is available as an on-premise solution.
Q: Are updates added for free or is there a charge for them?
A: Being on-demand (subscription based), updates are included for paid customer/partner systems.
Q: How to work in Cloud environment in terms of system architecture? Create a Test planning area and development planning area etc.
A: Usually Cloud environments have a Test Tenant and Production tenant. Please refer Model Configuration Guide on how to setup Models and best practices for transports:
- Best Practices for Transporting Planning Models
- Model Transport in a 2-Phase Configuration Project.
Q: What are the Minimum Network Requirements for SAP IBP?
A: Minimum bandwidth and latency requirements for SAP Integrated Business Planning:
- Upstream: 2 Mbps
- Downstream: 2 Mbps
- Latency: 200 ms or better
- All network timeout settings should be equal to or greater than 10 minutes.
Q: How do you transport planning model and roles from one environment to another?
A: Using the self-service Transport Model Entities app. For details on how to use the app and the prerequisites, please check the relevant chapter of the Model Configuration Guide. It also includes best practices for setting up models for different phases of the project.
Q: Where can I find more information on features and functions in each IBP release/patch?
A: The help page SAP Help Portal - Integrated Business Planning is the landing page for all information relevant to IBP. The What's New section highlights the features in each release or patch.
Q: What is the logical sequence of steps in an implementation? Do we need to have an active model before we can load the data?
A: The high level steps are as follows:
1. Create Role
2. Create User
3. Configure the Model (Attributes, Master Data Types, Time Profiles, Planning Area, Planning Levels, Key Figures, and Scenarios)
4. Activate the Model (Time Profile, Master Data Objects, Planning Area)
5. Load Master Data, Time Periods, and Key Figures
6. Use Excel Add-In to view/change data
7. Create Visualizations/Charts/Dashboards in the UI for analysis
In practice these steps are iterative, of course. Yes, you must activate the model before you can load data.
Analytics, Dashboards and Process Management
Q: What is the source of web UI analytics –are they using HANA?
A: IBP Analytics uses HANA Calculation Engine and HANA Analytical Views. All numbers in planning and Analytics are computed inside the HANA DB.
Q: Can I export graphs from Web UI to PPT or other formats?
A: Analytics in the Web UI of IBP provides an option to export static chart content in PDF/JPG/PNG format.
Q: How do I see the Process Dashboard?
A: The Process Dashboard needs to be added to a dashboard under Dashboard Actions -> Edit -> Add Process. You can choose between two different visualizations for a process: Donut Process or Chevron Process.
Q: Can we share Dashboards?
A: Yes, you have the option to share a dashboard by user or by role. This is performed from the Dashboard Actions menu.
Q: Are additional tasks created for a Process Step taken into consideration for Process Progress?
A: The process progress % advances based on the completion of the tasks associated with the process step definition, for the users participating in the process step.
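As a rough illustration with made-up task data (the actual computation is internal to IBP):

```python
# Hypothetical tasks for one process step; progress advances with the
# completion of tasks assigned to users participating in the step
tasks = [
    {"user": "planner1", "done": True},
    {"user": "planner2", "done": True},
    {"user": "planner3", "done": False},
]

completed = sum(task["done"] for task in tasks)
progress_pct = 100 * completed / len(tasks)  # about 66.7%
```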
Q: What functionalities in IBP are dependent on use of JAM?
A: The entire Process Management in S&OP, Collaboration, Feed and Tasks, and the Excel UI Comments and Reason Codes.
Q: Are there API available to integrate Third Party Collaboration Tools?
Q: How do the Custom Alerts differ than the key figure alerts defined during configuration?
A: There are two separate alert concepts in IBP.
- Key figure alerts are configured key figures that include a formula evaluation to determine if an alert exists. Key figure alerts are visible in the webUI via the sidepanel and in the Excel UI. These alerts are configured during the implementation. The end user applies the alert in the Excel UI for the current planning view / filter.
- Custom alerts are end-user configured alerts in the webUI. The user can create an alert definition for a combination of rules that determine if an alert should exist, the time period for determination, and the level of aggregation for determination. The definition also describes which data should be shown with the alert, such as other key figures (metrics) and charts. The custom alert subscription allows the user to filter which attributes should be used for the alerts. The custom alerts are visible in the webUI on the Custom Alert application screen. Note that the Custom Alert functionality requires licensing the Supply Chain Control Tower module in IBP. Custom Alerts are not (yet) available in the Excel UI.
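The rule evaluation behind a custom alert can be sketched with made-up numbers (illustrative only; real custom alerts are defined in the web UI, not in code):

```python
# Hypothetical rule: flag periods where projected inventory falls below
# safety stock, evaluated per period at a chosen aggregation level
projected_inventory = {"2024-01": 120, "2024-02": 40, "2024-03": 90}
safety_stock = 50

alerts = [
    period
    for period, qty in projected_inventory.items()
    if qty < safety_stock  # the rule condition
]
# alerts: ["2024-02"]
```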
Q: What are the dependencies to use the geographic chart types?
A: The geographic chart types in IBP require a group by definition that includes the geo coordinates for the attribute. For example, to create a chart that shows the key figure by location, the location attribute must include the attributes GEOLONGITUDE and GEOLATITUDE.
Q: What are the dependencies to use the network chart types?
A: The network chart types are a special type that creates a network view based on the master data types. To display a network, the group-by in the chart must include one or more of the following:
- Product (PRDID)
- Location (LOCID)
- Ship-from Location (LOCFR)
- Customer (CUSTID)
- Component (PRDFR)
- Source (SOURCEID)
Q: Where can I find information on User Administration in IBP?
A: In the Help at SAP Integrated Business Planning - SAP Library and the User Management FAQ.
Q: Where can I find an overview slide deck on integration of IBP with external systems?
A: In the innovation discovery https://apps.support.sap.com/innovation-discovery/index.html search for
Enhanced integration of SAP Integrated Business Planning with external systems
Open the entry for the latest release and expand the section below Product Features. There you will find an overview slide deck.
Q: What are the data integration options to and from IBP?
A: SAP Cloud Platform Integration for data services (CPI-DS) is the recommended data integration tool to load time series data to IBP on-demand and to extract data. This is applicable to all modules that use time series planning (all modules except the Response component).
For order-based planning in Response, SAP Smart Data Integration (SDI) is mandatory. SDI cannot be used to load or send any time series data. In some cases master data loaded via SDI can be reused for time series based planning.
Another option for uploads, mainly at the beginning of a project when interfaces are not ready, is the Data Integration app available in the IBP web UI. It provides options to download CSV templates for time profiles, master data, and key figures, which you can then fill with data and upload. This method is not suited for large uploads and is not recommended for production systems.
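For illustration, a flat file for upload is just a CSV whose columns match the downloaded template. The column names below are assumptions for a hypothetical planning area, not the exact template headers the Data Integration app provides:

```python
import csv
import io

# Illustrative only: column names are assumptions, not the exact template
# headers downloaded for your planning area
rows = [
    {"PRDID": "P1", "CUSTID": "C1", "PERIOD": "2024-01", "CONSENSUSDEMAND": "100"},
    {"PRDID": "P1", "CUSTID": "C2", "PERIOD": "2024-01", "CONSENSUSDEMAND": "150"},
]

# Build the CSV content in memory; in practice you would write it to a file
# and upload it via the Data Integration app
buffer = io.StringIO()
writer = csv.DictWriter(
    buffer, fieldnames=["PRDID", "CUSTID", "PERIOD", "CONSENSUSDEMAND"]
)
writer.writeheader()
writer.writerows(rows)
csv_content = buffer.getvalue()
```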
Q: Can IBP connect to other SAP and non-SAP applications?
A: IBP can receive inputs from various SAP and non-SAP applications via data integration tools. For supported source systems and adapters, please check the Product Availability Matrix for SAP Cloud Platform Integration for data services and Smart Data Integration.
SAP Cloud Platform Integration for data services (CPI-DS)
Q: Is CPI-DS the same as SAP HANA Cloud Platform, integration services (HCI)?
A: SAP HANA Cloud Platform, integration services (HCI) was renamed to SAP Cloud Platform Integration (CPI). CPI consists of two different parts:
- SAP Cloud Platform Integration for process services (old name: SAP HANA Cloud Platform, integration services for process integration, often abbreviated HCI or HCI-PI)
- SAP Cloud Platform Integration for data services (old name: SAP HANA Cloud Platform, integration services for data services, often abbreviated HCI or HCI-DS)
So if people talk about HCI, they might mean either of the two parts of CPI. In IBP the main integration middleware is CPI-DS and SDI, not CPI-PS.
CPI-DS - SAP Data Services Agent
Q: How many SAP Data Services Agents are needed per customer?
A: SAP Cloud Platform Integration for data services (CPI-DS) needs only one agent per customer. One agent can be connected to many different systems. Optionally you can have more agents that are grouped together. Agent groups are collections of agents that are logically grouped to enable high-availability solutions for your production tasks. It is also a method for load balancing tasks onto multiple agents.
Q: Which folders can be accessed by SAP Data Services Agent?
A: You can only access folders within the network to which you have direct access from the agent. The Windows user that runs the Data Services Agent service must be a domain user with access to the network locations you want to read from. Additionally, all folders you want to read from need to be “white-listed” in the agent configuration tool (only the root folder needs to be white-listed; subfolders will be accessible automatically).
Q: Agent Upgrades: When is it necessary?
A: For tasks running in production, SAP does not require customers to upgrade their agent; they can keep the old version. Only when there is a need to do new development and promote new tasks to production is an upgrade of the agent required. There is no auto-update for agents. The upgrade install is very simple: it detects the previous version, keeps all configurations, and just replaces the binaries.
Q: Where do I find agent configuration guide?
A: SAP Data Services Agent Guide
The link will lead you to the latest online version of the guide. On the right side of the page you can download the pdf version.
CPI-DS - Datastores
Q: Can we have multiple configurations for one datastore? (e.g ECC DEV / ECC TEST)
A: Yes, you can maintain multiple sets of datastore parameters. In the datastore's configuration menu, click “Create New Datastore Configuration”. One configuration is maintained as the default. Unless the user picks a different configuration from the dropdown list at run/schedule time, the default configuration is used. To be able to select a different configuration at run time, you also need to create a system configuration. System configurations group datastore configurations from different datastores together as one logical entity.
Q: How can I view data in SAP Cloud Platform Integration for data services ?
A: From the Datastores tab, select your IBP datastore and then the target object. Click the View Data icon. This is only valid for the sandbox environment; viewing data in production is not allowed for security reasons.
Note: If View Data is not available in your sandbox target datastores, contact SAP Support and request that they activate the View Data functionality on your target application. Once the IBP stored procedure has finished, the data that had been loaded into the staging table is deleted. This means that data for a particular run of a task may only be in the staging table for a matter of minutes – until the stored procedure has completed.
Q: REP Tables: When browsing the IBP metadata you will see for each table a version with the same table name but with _REP added. Can you load data into these _REP tables? If not, why does the system show these in the metadata view?
A: No, data cannot be loaded into the IBP REP tables. These tables are for reporting purposes only. From SAP Cloud Platform Integration for data services you can see what data was loaded via these REP tables, which allows you to review the error and success status of individual rows. The IBP REP tables are also purged periodically (default: after 7 days).
CPI-DS - Administration
Q: SSO support: Is SSO supported with SAP Cloud Platform Integration for data services?
A: We do have SSO with SAP ID Service. So if a user logs in via the browser and has certificate login enabled, the user is logged in automatically without needing to type a username and password.
Q: Security - How are usernames and passwords used in CPI-DS and in the Agent? Are these usernames and passwords downloaded in the Agent configuration file?
A: The username/password entered in the Agent Configuration tool is only used for the initial connection from the agent to the server. Once the initial connection is made, the agent is "trusted" by the server and the username/password is no longer needed, so it is not stored in a configuration file. The certificates exchanged between agent and server are used for the trusted connection for future communication. Other passwords, to connect to source or target datastores, are entered in the SAP Cloud Platform Integration for data services web UI. Whenever SAP Cloud Platform Integration for data services stores a password, it is always encrypted (e.g. stored credentials to connect to source systems). Via the web UI it is even possible to generate a new encryption key and re-encrypt all passwords in case there is a suspicion that the encryption key has been compromised (or if the company has a policy to update encryption keys every x months).
Q: Authorizations/Access Privilege - Why can't I do certain tasks?
A: You may not have the necessary privileges. SAP Cloud Platform Integration for data services has a role-based architecture. Developers have privileges only in the sandbox environment; Production Operators have privileges only in the production environment. Your Security Administrator can tell you what roles you've been assigned. You can also click on Help in the Administration Page for a description of each role and which privileges it has.
Q: Is an open SAP system required with "Generate and Execute" mode?
A: Yes. The ABAP execution mode Generate and Execute causes an ABAP program file to be generated on the agent machine. Next, the file is automatically sent to the NetWeaver server and executed, and the results (data) are sent back over RFC to the agent. The ABAP program itself is not stored on the system.
Q: Tablename Error - My task fails to run. I get a message that says "<tablename> is an invalid ABAP program name. Program names must be less than 40 characters and start with 'Z' or 'Y'". What do I do?
A: In the SAP application datastore, check if the ABAP execution option is set to Execute preloaded. If it is, make sure that the ABAP program has been installed on the SAP application server. Moreover, when you upload the program in SAP transaction SE38, you must use the name specified in the ABAP query: edit any ABAP query in the dataflow, go to the ABAP Options tab, and use the ABAP Program Name shown there.
Templates and global variables
Q: Templates – what are they and is there a catalog listing all templates that can be used with SAP Cloud Platform Integration for data services?
A: A template includes pre-set global variables and pre-load/post-load scripts, and may or may not include a dataflow. Here is the catalog that lists the pre-delivered templates.
Q: Template - For a template that was used in a task and that task was loaded and placed into production, can that task and template be modified?
A: 1) Tasks are developed in the sandbox environment, never in production. Tasks can be created from scratch, or they can be based on a template. For a task based on a template, once imported in the sandbox repository, developers can completely customize the template; SAP Cloud Platform Integration for data services does not have any restrictions here. 2) But, once tasks are tested and an Administrator moves them to production, no changes [at all] are allowed in the production repository. The only changes in production would be configuration changes like the connection parameters (connect to prod ERP and prod IBP instead of test systems) or variables (e.g. a variable filter to set a start date of which data to extract).
Q: Templates – why do the names of extractors have a suffix _SOP or _IBP?
A: When loading an extractor to a datastore you can specify not only the extractor name, e.g. 2LIS_11_VASCL or 2LIS_12_VCITM, but also a subscriber. The subscriber is especially important for delta-enabled extractors: it is used in the source system to identify to which target delta information has already been sent. The templates delivered by IBP have been created for extractors using the subscriber SOP or IBP. The name of the extractor in the datastore is a concatenation of the extractor name and the subscriber, separated by an underscore. In general it is sufficient to use one subscriber ID, but if you need to replicate the same delta information several times, for example because it is used in different planning areas, you should define several subscribers for the same extractor.
Q: What are the Global Variables?
A: Global variables are symbolic placeholders that are populated with values during task runs. Some of them are required by SAP IBP during data post processing (e.g: $G_PLAN_AREA, $G_SCENARIO, $G_TIME_PERIOD, $G_BATCH_COMMAND, $G_LOAD_DATE). For a complete list and descriptions see the integration guide.
Q: Global Variables: Can I change/edit the name of an existing Global Variable?
A: No. If you enter an incorrect name for a global variable and save it, you cannot edit the name afterwards. You must create a new global variable with the correct name.
Q: Time Zone - What time zone is set for the times that display in the projects page, task schedule, and so on?
A: UTC time zone (Coordinated Universal Time) is displayed in all locations except the Schedule dialog. For scheduling a task, in the UI, you can set the timezone you want to use to schedule your task (behind the scenes CPI-DS will convert to UTC and store this in the repository).
Q: If I check integration job log in IBP Data Integration app, I see that Start At and End At are reported in different time zones. How can I correct this?
A: This gap appears if the global variable $G_LOAD_DATE is set to SYSDATE(). SYSDATE() returns the date and time in the local time zone of the SAP Data Services Agent machine, while IBP tracks time in UTC (Coordinated Universal Time). Set the global variable to SYSUTCDATE() instead.
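The mismatch can be illustrated in plain Python (a sketch for illustration only, not CPI-DS script):

```python
from datetime import datetime, timezone

# SYSDATE() corresponds to the agent machine's local wall-clock time,
# SYSUTCDATE() to UTC; IBP reports job times in UTC.
local_now = datetime.now()                # analogous to SYSDATE()
utc_now = datetime.now(timezone.utc)      # analogous to SYSUTCDATE()

# The difference between the two is exactly the "gap" seen in the
# IBP Data Integration app when $G_LOAD_DATE is set to SYSDATE().
offset_hours = round((local_now - utc_now.replace(tzinfo=None)).total_seconds() / 3600)
print(f"Agent local time is UTC{offset_hours:+d}h")
```

On an agent running in, say, Central Europe in winter, this would report UTC+1h, which is the offset you would see between Start At / End At values in the job log.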
CPI-DS - Tasks and data flows
Q: How can we run a task against the SAP test environment?
A: Your test and production systems will not be open for modifications, so you cannot use the "Generate and Execute" mode. Instead, you need to set the datastore (configuration) to "Execute pre-loaded", for which you need to transport to the TEST system the ABAP program that was previously generated in DEV. The generated ABAP program can be found on the agent machine in the ABAP folder (in %DS_COMMON_DIR%).
Q: Mapping Target - In a task that I created from a template there are columns in the Output pane of the Target Query which are not mapped. Is this a problem?
A: The templates were created to cover a broad range of requirements. Unmapped columns in the Target Query may or may not be relevant. You may need to verify your specific requirements. Unmapped columns in the Output pane of the Target Query may give a warning when validating a task. The task can still be executed if validation has only warnings. However, if a column is required in the target stage table, you will get an error at task run time.
Q: Copy a Task - Can I copy a data flow to a different task?
A: Yes. Highlight your dataflow and, from the Actions menu, choose Replicate. A list of other tasks is presented; choose the task you would like to copy the dataflow to.
Q: Edit a Task - How to rename a source or target object?
A: If you want to rename a target object, you can simply copy the task to a new target object. If you want to rename a source object, such as a table or an extractor, the easiest way is as follows:
- Add the object with the new name to the dataflow in addition to the old one
- Link the new object to the target(s) in the same way as the old one
- Open the successor step(s) in the dataflow and replace the old source object name with the new one in all mapped fields, joins and filter conditions
- Delete the old object in the dataflow
- Validate the task
- In case of errors, make the missing adaptations
Q: Multiple data flows - When I run a task containing multiple data flows, in what order are they executed?
A: The dataflows are executed sequentially, following the order in which the target tables are listed in the task. You can modify the target table execution order by choosing Manage target order from the Actions menu in the task editor. You can only change the order of target table execution; you cannot change the execution order of dataflows for a given target table.
Q: Task Execution - On the Projects tab, why doesn't the task execution status update?
A: Click the Refresh button (the two arrows rotating in a circle) in the upper-right corner of the page to see an updated status.
Q: Locked by User - A task that I want to edit is locked by another user. How do I unlock it?
A: Only one user at a time may edit a task. If necessary, you can ask your administrator to unlock a task that someone has inadvertently left locked. Tip: After the task has been unlocked, if needed, refresh the Projects tab.
CPI-DS - Data flows syntax
Q: Is it possible to have translation (mapping) tables maintained in SAP Cloud Platform Integration for data services?
A: No, SAP Cloud Platform Integration for data services is not designed to store data. Instead, you can use the lookup-into-file functionality.
Q: Is there any standard function to convert weeks to months?
A: There is no function for week-to-month transformation, as the start and end dates of a week cannot be derived from the WW_YYYY syntax alone. Usually, the customer has a table in their system that gives the start and end dates of each week. You would need to join this table into the queries of your task to get the dates you need, or look them up in a table where this information is maintained. Transforming the data directly in SAP Cloud Platform Integration for data services is also possible; it is just a matter of math and date/time functions.
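As an illustration of the kind of math involved, here is a Python sketch that maps a WW_YYYY week token to a MM_YYYY month. It assumes ISO-8601 week numbering, which the customer's fiscal calendar may not follow, so treat it as an assumption to verify, not a general rule:

```python
from datetime import date

def week_to_month(week_token: str) -> str:
    """Map a 'WW_YYYY' week token to the 'MM_YYYY' month of the week's
    Monday, assuming ISO-8601 week numbering (assumption: the customer's
    calendar may define weeks differently)."""
    week, year = week_token.split("_")
    monday = date.fromisocalendar(int(year), int(week), 1)  # Monday of that week
    return f"{monday.month:02d}_{monday.year}"

print(week_to_month("06_2024"))  # ISO week 6 of 2024 starts Mon 2024-02-05
```

In practice the same mapping is usually driven from the customer's own week table, joined or looked up as described above.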
Q: What are the mandatory fields to be mapped in an inbound data flow? (on-premise to IBP)
A: Key figures must be loaded at the granularity given by the base planning levels (attributes marked as Root in the planning level). If data is not available at this level, it should be transformed before being sent to IBP. Example: shipments are available on a weekly basis. Before sending them to IBP, you must aggregate the values for the weeks that fall within the same month: first convert weeks to months, then group by month and aggregate the shipment quantities.
For Master Data load you will have to map all attributes marked as Key or Required.
Each IBP table has a column named "FILENAME". This column needs to be mapped to a constant that contains the target table name. The name needs to be enclosed in single quotes, as it is a constant (e.g. 'SOPMD_STAG_SM1CUSTOMER').
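The weekly-to-monthly aggregation described above amounts to a group-by with a sum. A minimal Python sketch with hypothetical data (in CPI-DS this would be a Query/Aggregation transform, not Python):

```python
from collections import defaultdict

# Hypothetical weekly shipment rows, already mapped to their month bucket
weekly_rows = [
    ("01_2024", 100.0), ("01_2024", 250.0),
    ("02_2024", 80.0),  ("02_2024", 120.0),
]

# Group by month and sum the quantities, as an Aggregation transform would
monthly_qty = defaultdict(float)
for month, qty in weekly_rows:
    monthly_qty[month] += qty

print(dict(monthly_qty))  # {'01_2024': 350.0, '02_2024': 200.0}
```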
Q: What are the existing options for removing duplicate entries?
A: Based on the business need, there are several ways:
- By using the "ABAP Aggregation" or "Aggregation" transform with a group-by on the key columns and an aggregation function on the others (e.g. max(date), sum(quantity), …)
- By selecting the checkbox "Select Distinct Rows" on the Filter tab of your Query transform (in case the records are identical in all fields you select)
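The first option boils down to a group-by on the key columns with aggregation functions on the rest. A Python sketch with hypothetical records (illustration only, not the transform itself):

```python
# Hypothetical duplicate records keyed by product ID
records = [
    {"id": "P1", "date": "2024-01-05", "qty": 10},
    {"id": "P1", "date": "2024-02-01", "qty": 5},
    {"id": "P2", "date": "2024-01-10", "qty": 7},
]

# Group by the key column; keep max(date) and sum(qty) for the others,
# as the Aggregation transform's group-by would
grouped = {}
for r in records:
    g = grouped.setdefault(r["id"], {"date": r["date"], "qty": 0})
    g["date"] = max(g["date"], r["date"])  # ISO dates compare correctly as strings
    g["qty"] += r["qty"]

print(grouped["P1"])  # one row per key: latest date, summed quantity
```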
Q: Is SAP Cloud Platform Integration for data services able to extract data from different sources in one task (e.g. SAP table, file)?
A: SAP Cloud Platform Integration for data services cannot use different source systems in one task, but it allows a lookup into a datastore that is not the source datastore (e.g. reading from ERP and looking up mappings in a flat file).
Q: Join - When performing a Join in a dataflow – what is the number of joins that I can perform?
A: The number of tables in the "Input Data" minus 1 is the number of joins you can perform in the dataflow.
Q: Multiple Tables: How efficient is SAP Cloud Platform Integration for data services at performing runtime logic involving multiple tables?
A: When joining multiple tables, SAP Cloud Platform Integration for data services will generate an ABAP program (for ERP) or a SQL statement (for databases) so that the processing of the joins is pushed to the source. So performance depends on your source system, not on SAP Cloud Platform Integration for data services. If you would join files it's a bit more tricky because SAP Cloud Platform Integration for data services can't push down this and needs to do the join in memory on the Agent machine. So for files, SAP Cloud Platform Integration for data services recommends NOT to join large files.
Q: Source Systems: Can you load the same master data table from 2 different systems?
Example: you want to load ID + attributes 1, 2 and 3 with one batch, and next you want to load the same IDs with attributes 4, 5 and 6. Will the end result in IBP be that all attributes are visible for the loaded IDs?
A: Yes, this is supported.
Q: Time Dependent Data: When selecting time dependent data, can the selection range be relative (e.g., last 12 months to the next 12 months)?
A: Yes, that is possible. In the filter condition, you would use the sysdate() function to get the current date, and subtract x months to get the start date.
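The date arithmetic behind such a relative filter can be sketched in Python (illustration only; in CPI-DS you would express this with sysdate() and date functions in the filter condition; the function name here is our own):

```python
from datetime import date

def relative_range(months_back, months_ahead, today=None):
    """First day of the month `months_back` months before today through
    the first day of the month `months_ahead` months after today
    (hypothetical helper, clamped to month starts for simplicity)."""
    today = today or date.today()
    total = today.year * 12 + (today.month - 1)   # months since year 0
    sy, sm = divmod(total - months_back, 12)
    ey, em = divmod(total + months_ahead, 12)
    return date(sy, sm + 1, 1), date(ey, em + 1, 1)

start, end = relative_range(12, 12, today=date(2024, 6, 15))
print(start, end)  # 2023-06-01 2025-06-01
```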
Q: Transform, Adding - Why can't I add a new transform after the Target Query transform?
A: The Target Query transform must be the final transform in the data flow. The columns in the Output pane reflect the schema for the target object.
Q: Save Actions - When are my actions saved?
A: In the Data Flow view, actions are periodically auto-saved so your interim changes are retained. However, join condition information must be explicitly saved by clicking the Save button. Note: no rollback or history is available. Before you make significant changes, you may want to use Replicate to save a copy of your data flow.
Run / Schedule tasks
Q: Automating data-loads to run: How to automate data-loads into IBP from on-premise data services?
A: Automating data loads is done by scheduling the tasks with the built-in scheduler in SAP Cloud Platform Integration for data services or from the IBP Application Jobs app. For example, you can schedule daily or weekly uploads.
Q: Controlled Release for running a dataflow: How would a user do this?
A: This can be done via the "Run now" button in the SAP Cloud Platform Integration for data services UI (instead of using the scheduler) or from the IBP Application Jobs app.
Q: Run Time Parameters: Can a user define the variables to pass the run time parameters for the fields?
A: Yes, variables are supported and can be set at execution time.
Q: Can a job be scheduled on a specific day, e.g. every last Sunday of the month?
A: Yes. You can schedule the task every Sunday, but in the pre-load script perform a check and, if it is not the last Sunday of the month, terminate execution. The code should look something like this:
IF (( last_date(sysdate()) - sysdate() ) >= 7)
begin
Raise_Exception('TASK WILL NOT EXECUTE - not the last Sunday of the month');
end
ELSE
begin
Print('This is the last Sunday of the month');
end
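The date arithmetic behind the check can be verified with a quick sketch (Python, for illustration only): a Sunday is the month's last Sunday exactly when fewer than 7 days remain until month end.

```python
import calendar
from datetime import date

def is_last_sunday(d: date) -> bool:
    """Mirror the pre-load script's test: the last Sunday of the month is
    the Sunday with fewer than 7 days left until the end of the month."""
    days_in_month = calendar.monthrange(d.year, d.month)[1]
    return d.weekday() == 6 and (days_in_month - d.day) < 7

print(is_last_sunday(date(2024, 6, 30)))  # the last Sunday of June 2024
print(is_last_sunday(date(2024, 6, 23)))  # a Sunday, but not the last one
```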
Q: Monitoring Log, how to interpret the Row Count – How do you interpret the split between two different queries - a Target_Query1 and a Target_Query_Mapping2? For a successful load, the sum adds up to the full count of records. In a failed job, what might you see and what would it mean?
A: You only care about the final number loaded into the target table. In the Monitor Log, that is the very last line, where you see where the file was written (your target table). The rest of the information listed is internal to the engine used by the SAP Data Services Agent. If a job fails, what you care about is the error log and the trace log. Use those two logs to determine whether it failed on load (typically logged as a JDBC error) or on post-processing (IBP may return an error from the data upload stored procedure).
Q: Monitoring Log – how to interpret the State column - At different times you have observed different values for the “State” field. How do you use these in troubleshooting?
A: For troubleshooting, the State column in the Monitor Log is not relevant. For example, the value STOP simply means the data was loaded. Look at your trace and error logs to determine your next steps, not the Monitor Log.
Q: History, View Page - While a task is running, why aren't the logs in the History page updated?
A: The Monitor Log is refreshed every 10 seconds while the task is running. Click the Refresh button in the upper-right corner of the page to update the Trace Log and Error Log.
CPI-DS - Cross Topics
Q: BEx queries in SAP BW - If the transaction data is in the form of BEx queries in SAP BW, can this data be accessed by CPI-DS?
A: No, SAP Cloud Platform Integration for data services cannot access BEx queries, so if the data is only available in BEx (e.g. as calculated key figures), you would need to export the data in some way, store it in a table or file, and let SAP Cloud Platform Integration for data services access that table/file.
Q: DSO BW data - If the transaction data is in multi-providers (DSOs, InfoCubes) in SAP BW, can SAP Cloud Platform Integration for data services read this data?
A: SAP Cloud Platform Integration for data services can read the data from the DSO tables (via the ABAP layer, no database access needed.)
Q: Z tables - If the master data (Product & Customer) is stored in custom database tables in ECC, can SAP Cloud Platform Integration for data services read this data?
A: Yes, SAP Cloud Platform Integration for data services can read from custom Z tables (via the ABAP layer, no database access needed).
Q: Delta Extractors: Can SAP Cloud Platform Integration for data services access the delta extractors included in SAP On-premise products?
A: Yes, SAP Cloud Platform Integration for data services supports the delta extractors from SAP Business Suite applications.
Q: ECC - Sending data back: Can a user send data such as forecasts back to ECC?
A: SAP Cloud Platform Integration for data services can write back to a file or database table, or call a BAPI in ERP/APO (enabled as a web service) to load data back to ERP/APO.
Q: Webservices Calls: Must all web service calls go through the Agent?
A: There are 2 types of webservice calls:
- An external app calling an SAP Cloud Platform Integration for data services web service to start a task: the agent is not involved here; the call goes directly to the SAP Cloud Platform Integration for data services server.
- SAP Cloud Platform Integration for data services making a web service call to load (or extract) data: in this case it is the agent that makes the call.
Q: Data Exports: are data exports supported by SAP Cloud Platform Integration for data services or are exports only requested using a Web Service (bypassing SAP Cloud Platform Integration for data services)?
A: Data exports are supported. SAP Cloud Platform Integration for data services can read from the IBP calculation views for key figures or the attribute views for master data, and can write to the on-premise system (via file or web services).
Q: IBP Tables - Can you write directly to an IBP core table with SAP Cloud Platform Integration for data services?
A: No, your task writes to IBP staging tables only. After the staging tables are loaded, the SAP Cloud Platform Integration for data services task calls the IBP stored procedure. This procedure reads the data loaded into the IBP staging tables, validates the data, and, if the data passes the validations, writes it to the IBP core tables, which have a structure that is different from the staging tables.
Q: Adapter creation: Can a user develop their own adapter on the SAP Cloud Platform Integration for data services agent to support other source systems? Bottom line: can a user plug additional adapters into SAP Cloud Platform Integration for data services beyond what is currently available?
A: No. SAP Cloud Platform Integration for data services has 2 adapters, created by the dev team: one for SuccessFactors, one for OData. There is currently no framework to plugin additional adapters.
Q: Third-party ETL: Can a third-party ETL tool load data directly into the IBP staging tables?
A: SAP Cloud Platform Integration for data services is the only system-to-system integration tool that can load directly into the IBP staging tables (because SAP Cloud Platform Integration for data services is physically installed in the same datacenter as IBP).
Q: Does on-premise Data Services have to export a flat file that is then imported into IBP? Can on-premise Data Services and IBP be connected without a flat file in between?
A: Yes, on-premise Data Services, or any other ETL tool, needs to go via files or another means of staging data (e.g. a database table). Next, SAP Cloud Platform Integration for data services is used to read the file/staging table and map it to IBP. All complex data processing in this case can be done in the on-premise ETL tool; SAP Cloud Platform Integration for data services is only used for the final mapping to the IBP tables.
Q: Timeout Period - Is it possible to configure the setting for the timeout period?
A: No. Your session will automatically time out. This feature is to protect the security of your data.
Q: SAP Support - How do I contact SAP Support to report a problem in SAP Cloud Platform Integration for data services?
A: Go to http://service.sap.com/support/ and report the issue under component LOD-HCI-DS.
IIF function
Returns one of two values you specify, based on the results of a condition.
IIF( condition, value1 [,value2] )
Arguments:
- condition (required): The condition you want to evaluate. You can enter any valid transformation expression that evaluates to TRUE or FALSE.
- value1 (required): Any datatype except Binary. The value you want to return if the condition is TRUE. The return value is always the datatype specified by this argument. You can enter any valid transformation expression, including another IIF expression.
- value2 (optional): Any datatype except Binary. The value you want to return if the condition is FALSE. You can enter any valid transformation expression, including another IIF expression.
Unlike conditional functions in some systems, the FALSE (value2) argument in the IIF function is not required. If you omit value2, the function returns the following when the condition is FALSE:
- 0 if value1 is a Numeric datatype
- Empty string if value1 is a String datatype
- NULL if value1 is a Date/Time datatype
For example, the following expression does not include a FALSE condition, and value1 is a String datatype, so the PowerCenter Integration Service returns an empty string for each row that evaluates to FALSE: IIF( SALES > 100, EMP_NAME )
| SALES | EMP_NAME | RETURN VALUE |
| 120 | Sally Green | Sally Green |
| NULL | Greg Jones | '' (empty string) |
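The defaulting behavior for an omitted value2 can be mimicked in Python (a sketch only; the function name and sentinel below are our own illustration, not part of any product API):

```python
from datetime import datetime

_OMITTED = object()  # sentinel marking an omitted value2 (our own device)

def iif(condition, value1, value2=_OMITTED):
    """Mimic IIF's documented defaults: with value2 omitted, FALSE yields
    0 for numerics, '' for strings, and None (NULL) for date/time values."""
    if condition:
        return value1
    if value2 is not _OMITTED:
        return value2
    if isinstance(value1, str):
        return ""
    if isinstance(value1, (int, float)) and not isinstance(value1, bool):
        return 0
    return None  # date/time (and anything else) maps to NULL

print(iif(120 > 100, "Sally Green"))       # TRUE row returns value1
print(repr(iif(50 > 100, "Greg Jones")))   # FALSE row, string -> ''
print(iif(50 > 100, 42))                   # FALSE row, numeric -> 0
```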
Return value: value1 if the condition is TRUE; value2 if the condition is FALSE. For example, the following expression includes NULL as the FALSE value, so the PowerCenter Integration Service returns NULL for each row that evaluates to FALSE: IIF( SALES > 100, EMP_NAME, NULL )
| SALES | EMP_NAME | RETURN VALUE |
| 120 | Sally Green | Sally Green |
| NULL | Greg Jones | NULL |
If the data contains multibyte characters and the condition argument compares string data, the return value depends on the code page and data movement mode of the PowerCenter Integration Service.
IIF and Datatypes
When you use IIF, the datatype of the return value is the same as the datatype of the result with the greatest precision.
For example, you have the following expression:
IIF( SALES < 100, 1, .3333 )
The TRUE result (1) is an integer and the FALSE result (.3333) is a decimal. The Decimal datatype has greater precision than Integer, so the datatype of the return value is always a Decimal.
When you run a session in high precision mode and at least one result is Double, the datatype of the return value is Double.
Special Uses of IIF
Use nested IIF statements to test multiple conditions. The following example tests for various conditions and returns 0 if sales is 0 or negative:
IIF( SALES > 0, IIF( SALES < 50, SALARY1, IIF( SALES < 100, SALARY2, IIF( SALES < 200, SALARY3, BONUS))), 0 )
Use IIF in update strategies. For example:
IIF( ISNULL( ITEM_NAME ), DD_REJECT, DD_INSERT)
Alternative to IIF
Use “DECODE” instead of IIF in many cases. DECODE may improve readability. The following shows how you use DECODE instead of IIF using the first example from the previous section:
DECODE( TRUE,
        SALES > 0 AND SALES < 50, SALARY1,
        SALES > 49 AND SALES < 100, SALARY2,
        SALES > 99 AND SALES < 200, SALARY3,
        SALES > 199, BONUS )
You can often use a Filter transformation instead of IIF to maximize session performance.