
Mapping Data Between Plan Types in Oracle Planning and Budgeting Cloud Service (PBCS): The Case for Map Reporting, XWRITE, or Data Management

November 23, 2016

Author: Mohan Chanila, Performance Architects

Over the last several years, the number of Oracle Planning and Budgeting Cloud Service (PBCS) applications has exponentially increased and, with this increase, we’ve seen several large on-premise applications migrated or simply moved onto PBCS. When migrating these applications to the cloud, the trend has been to either merge several on-premise applications into one PBCS application with multiple plan types or to simply move them to several PBCS applications.

Whatever the situation, there is always a requirement to move data in PBCS, either between two plan types in the same application or between applications. With PBCS, we now have several options to move data. In this blog, we’ll examine three options and when each is suitable, depending on your individual data and mapping requirements.

At a high level, the three options include creating a “Map Reporting Application;” writing and creating a business rule using the “@XWRITE” function; and “Data Management” functionality. Let’s examine these options in more detail.

Option 1: Create a Map Reporting Application

Creating a “Map Reporting Application” is functionality that has been around for a few years now. It was first introduced in on-premise Oracle Hyperion Planning Version 11.1.2.3.  The purpose was to provide an administrator with the ability to push data from a Planning-only BSO plan type to a reporting ASO plan type or to another BSO plan type (for example, a staff planning cube to a high-level P&L cube). Before this functionality was available, data movement between BSO and ASO plan types was performed using data extracts and manual uploads into ASO.

This was immediately recognized as a useful utility for functional administrators. In the latest release of PBCS, Oracle added functionality to automate map reporting, as well as to dynamically push data from a data form by creating data maps and assigning them to forms. Oracle has come a long way with this particular functionality.

I believe the pros and cons of map reporting include:

Pros:

  1. Very easy to set up.
  2. Fast execution time, provided you only push data and do not run a clear script prior to the data push.
  3. Maps dimensions that are not available in the destination application, or vice versa. For example, you could move employee salaries from a staffing cube to a high-level P&L cube which doesn’t contain the employee dimension.
  4. Maps a smart list to a dimension in the destination. For example, you could report on several attributes.

Cons:

  1. If you execute a clear script prior to pushing data, depending on the amount of data being pushed, processing time can take a few minutes to an hour.
  2. Complex data mapping isn’t possible; for example, mapping several dimensions where certain members from the source application need to be mapped to very specific members in the destination application is not possible.
  3. This can only be executed by an administrator.
  4. Does not provide any log files if a job breaks.

Option 2: Create a Business Rule Using the @XWRITE Function

The XWRITE function has been around for much longer than map reporting applications and was previously the go-to method for moving data between BSO plan types. It is also very similar to map reporting in that you can map an entire dimension from one plan type to another that does not contain the dimension and vice versa. It can also be automated using “Jobs” in PBCS, or attached to a task list or a data form and executed by a planner.

The XREF function is also used in conjunction with the XWRITE function. XWRITE pushes the data while XREF pulls the data, which means simply that XWRITEs are written in the source plan type while XREFs are generally called from the destination plan type.
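For illustration, a business rule that pushes data with @XWRITE might look like the following sketch; the FIX scope, member names, and the “PLCube” location alias are all hypothetical and would need to match your own outline:

```
/* Runs in the source (staffing) plan type; "PLCube" is an assumed
   location alias pointing at the destination plan type. */
FIX ("Budget", "Working", "FY17", @LEVMBRS ("Entity", 0))
    "Total_Salary" (
        @XWRITE ("Total_Salary", "PLCube", "Salaries");
    )
ENDFIX
```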

I believe the pros and cons of XWRITE include:

Pros:

  1. Executes extremely quickly with a small data set. In this example, “small” means we’ve typically used this for a handful of accounts, when moving aggregated employee/position data from a staffing model to a P&L model, across one scenario, version, or year. With this “small” data set, execution time could be just seconds. Larger data sets tend to take longer.
  2. Allows for customization such as appending on an aggregation script or performing any complex mathematical calculations prior to the data push.
  3. Can be executed by a planner or administrator.
  4. Can be automated either as part of EPM Automate scripts, jobs, or added to a data form to run on save.

Cons:

  1. Requires an administrator to maintain “Location Aliases” and a minor change could break the job.
  2. Performance depends on how well the script is written and optimized.
  3. Provides a job status, but not a comprehensive log file with details on errors.

Option 3: Use Data Management

“Data Management” is another relatively new module that allows you to push data between plan types in PBCS; PBCS subscriptions include Data Management (the cloud version of Financial Data Quality Management, Enterprise Edition, or FDMEE).

Data Management jobs require some setup time, since the setup and workflow settings can be difficult to navigate, but it is worth it considering that Data Management can be automated in PBCS.

The single biggest difference between Data Management and the other two options is the ability to map multiple dimension members from your source plan type to very specific members in the destination. For example, if you have two dimensions, “Account” and “Entity,” and you have complex mapping between the source and target where several source accounts need to be mapped to different destination accounts with a similar concept on the entity, then Data Management is the best solution available to do this from within PBCS.

I believe the pros and cons of Data Management include:

Pros:

  1. Maps multiple dimension members from a source plan type to very specific members in the destination and essentially maintains a data map.
  2. Data maps can be uploaded, with functionality such as ranges, explicit mapping and the “Like” function.
  3. Once the workflow is set up, it can be automated.
  4. Provides comprehensive log files with errors and warnings.

Cons:

  1. Can be complex to set up initially.
  2. Cannot be used in conjunction with a business rule such as a clear script, unless the business rule is executed as a separate job.

In my experience, all three options have distinct advantages depending on the specific needs of your application(s). I have personally found each method useful at one time or another and have implemented all three.


© Performance Architects, Inc. and Performance Architects Blog, 2006 - present. Unauthorized use and/or duplication of this material without express and written permission from this blog's author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Performance Architects, Inc. and Performance Architects Blog with appropriate and specific direction to the original content.

Importing Supporting Detail from PeopleSoft into Oracle Planning and Budgeting Cloud Service (PBCS)

November 9, 2016

Author: Richard Pallotta, Performance Architects

One of Performance Architects’ Oracle Planning and Budgeting Cloud Service (PBCS) customers needed to import existing supporting detail from their PeopleSoft application to PBCS, but the only way they could get data out of PeopleSoft was in CSV format. PBCS requires XML-formatted data for the import, so data needed to be transformed for PBCS to ingest the information.

There are many useful tools available to convert CSV data to XML, but we wanted to provide a way to do this using commonly available tools.  In this case, we used Excel and a text editor.

This blog entry includes the basic steps to create a CSV-to-XML solution using Excel and a text editor based on this customer case study:

  1. Download a template XML file from PBCS
  2. Edit the template XML file and make a schema for Excel
  3. Open the PeopleSoft CSV file in Excel; open the XML schema in Excel; and map the CSV file
  4. Export the CSV file as an XML file
  5. Edit the XML file in a text editor
  6. Upload the XML file to PBCS

The rest of the content delves into the details for each step:

  1. Download a template XML file from PBCS

To make things easier, I’ve downloaded an existing set of supporting level detail data. This will serve as a template for your specific PBCS model and provide guidance for ultimately creating the XML file you’ll need to import.

But first, a little context for those who may not be familiar with the concept of “supporting detail.” This is additional information that exists below the level-0 members in PBCS, usually under “Accounts.” This information is stored in the PBCS relational repository (not Essbase), although the detail will consolidate to a numeric value that is stored in Essbase.

Let’s view a very basic web form intended to strip away everything but what is necessary to illustrate this task. Cells with supporting level detail are displayed in the web form with a green background color:

[Screenshot: web form with supporting-detail cells highlighted in green]

This web form has accounts in the rows and periods in the columns. Right-clicking on an account with a green cell will show the detail for account AC_4010:

[Screenshot: supporting detail for account AC_4010]

Let’s add a sibling and a few more data points in the web form so we can get a more complete sense of the data we’re going to download:

[Screenshot: web form with a sibling account and additional data points]

We want to reverse-engineer the detail shown in this web form, and there are significant differences between the simplified and classic PBCS interfaces when it comes to downloading content. Since a lot of good documentation is available on this functionality, I’ll leave that outside the scope of this blog. We’re going to focus on transforming the data into the XML format that PBCS needs.

  2. Edit the template XML file and make a schema for Excel

Once you download the XML file, you’ll see it is named “Supporting Detail.xml.” We’re going to use this as our guide to create the XML mapping schema (sometimes you’ll see this type of schema called an “xsd file”).  Use a good text editor (one that will color-code the text and make your life easier) to open the XML file. The PBCS XML file consists of several primary sections marked by opening and closing tags:

  • columnDetails
  • dimMappings
  • planType
  • columnDetail
  • columnDetailItem

Similar to matching parentheses in a math formula, these sections must be opened and closed with tags. For example, the very top of the page starts with <columnDetails> and the very bottom closes with </columnDetails>. Spelling and upper / lower case letters definitely count here, so be careful.
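Because a single spelling or case mismatch will break the import, it can help to sanity-check your edited file with any XML parser before going further. A quick sketch using Python’s standard library:

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """True if every opening tag has a matching, case-correct closing tag."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

good = "<columnDetails><planType></planType></columnDetails>"
bad = "<columnDetails><planType></PlanType></columnDetails>"  # case mismatch
```

Running the bad example through the parser fails precisely because “PlanType” does not match “planType” in case.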

The first thing we’re going to do is make a copy of this file, then make a simple schema file that we can use in Excel to map the PeopleSoft CSV file so we can export it as an XML file for PBCS to use.

Step 1: Open Supporting Detail.xml and save it as MySchema.xml

Step 2: Delete row 1 of the file and insert these two rows of text in lines 1 and 2 instead (Excel seems to require these for building the schema):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

<data-set xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

Step 3: Delete the rows at the beginning and end of the file that say <columnDetails> and </columnDetails>

Step 4: Delete the rows with <dimMappings> and </dimMappings>, and everything between them:

[Screenshot: the dimMappings section of the XML file]

The above is an example of my dimensional model (yours is probably going to be different).

Step 5: Delete the row at the top of the file with <planType planName="some name here"> and the one at the end of the file that says </planType>.

Step 6: Refer back to your original file because the next section needs some search-and-replacing. If the original file has a line (like above) that shows “Scenario” is being mapped as “DIM1”, then replace “DIM1” with “Scenario” in your new file. Change it from this:

[Screenshot: columnDetail line using the “DIM1” tag]

To this:

[Screenshot: the same line using the “Scenario” tag]

Do the same for the rest of the dimensions. You only have to do this in the very first <columnDetail> section; the rest of the sections are redundant for the purpose of making the schema:

[Screenshot: first columnDetail section with all dimension tags renamed]

Step 7: Leave everything alone in the <columnDetailItem> sections:

[Screenshot: a columnDetailItem section]

These contain supporting detail items including values, their positions, consolidation operators, and the description text of the item itself. The “Column Detail” and “Column Detail Item” sections operate in unison, just like a row in a data file that has a point of view (POV) and values.

Step 8: Delete everything after the closing </columnDetail> tag to the end of the file. That is, from the next line that starts with <DIM1> all the way to the end. Basically, we just need one complete set of tags to create the schema.

Step 9: Add the closing </data-set> tag to the last line of the file.

That’s it for the XML file. Just to review, these are the sections we should have:

First row:

  • <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  • An opening <data-set> tag in row 2, and a closing </data-set> tag in the last row
  • One set of opening and closing <columnDetail> tags
  • Multiple opening and closing <columnDetailItem> tags

Here’s my file:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

<data-set xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

   <columnDetail>

      <SCENARIO>Bud</SCENARIO>

      <ACCOUNT>AC_6000</ACCOUNT>

      <DEPARTMENT>DP_4640</DEPARTMENT>

      <PERIOD>Oct</PERIOD>

      <VERSION>Working</VERSION>

      <YEARS>FY18</YEARS>

      <FUND_CODE>Fund_Code_NA</FUND_CODE>

      <PROGRAM>Program_NA</PROGRAM>

      <PROJECT>Project_NA</PROJECT>

     <columnDetailItem>

      <VALUE>100</VALUE>

      <POSITION>0</POSITION>

      <GENERATION>0</GENERATION>

      <OPERATOR>1</OPERATOR>

      <LABEL>Apples</LABEL>

     </columnDetailItem>

   </columnDetail>

   <columnDetail>

      <SCENARIO>Bud</SCENARIO>

      <ACCOUNT>PERIOD</ACCOUNT>

      <DEPARTMENT>DP_4640</DEPARTMENT>

      <PERIOD>Oct</PERIOD>

      <VERSION>Working</VERSION>

      <YEARS>FY18</YEARS>

      <FUND_CODE>Fund_Code_NA</FUND_CODE>

      <PROGRAM>Program_NA</PROGRAM>

      <PROJECT>Project_NA</PROJECT>

     <columnDetailItem>

      <VALUE>100</VALUE>

      <POSITION>1</POSITION>

      <GENERATION>0</GENERATION>

      <OPERATOR>1</OPERATOR>

      <LABEL>Pears</LABEL>

     </columnDetailItem>

   </columnDetail>

</data-set>

Excel seems to really want the spacing to be perfect, so use this example as a guide. Also, Excel refused to highlight the entire column when I dragged the mapping over unless there was more than one set of <columnDetailItem> tags.

  3. Open the PeopleSoft CSV file in Excel; open the XML schema in Excel; and map the CSV file

The PeopleSoft supporting detail file should be a tab- or comma-delimited text file; make sure to open it in Excel. If it doesn’t already have a header row, add one. For your own convenience, put the field names that correspond to the dimensions, values, labels, etc. into the Excel sheet.

Next, select the “Developer” tab in Excel and press “Source.” An XML pane will open on the right side of the worksheet:

[Screenshot: XML pane open on the right side of the Excel worksheet]

Then, import the XML schema. Press the “XML Maps” button in the XML pane:

[Screenshot: the “XML Maps” button in the XML pane]

Press the “Add” button, and browse for the XML file you just created:

[Screenshot: the “Add” dialog for browsing to the schema file]

Don’t be concerned if you see this message, just press “OK”:

[Screenshot: Excel message about creating a schema from the XML file]

You’ll see something like this in the XML pane:

[Screenshot: the schema loaded in the XML pane]

Now is a good time to clean up the CSV file. If you need to delete any extraneous columns or rearrange anything, you’ll need to do this before you start to apply the XML schema to the file. Add columns for “Position,” “Generation,” and “Operator” if they’re not already there. There should be a column for “Value,” which represents the data point coming from PeopleSoft.

The order of the columns in the Excel file should match those in the XML pane. Otherwise, you may get an export error in Excel when you try to convert the data.

“Position” is the relative location of the supporting detail member below the account and starts with “0.” The first, second and third siblings will be “1,” “2” and “3,” respectively. You’ll need to sort and process the CSV file to look for non-unique rows and sequentially insert a position value; that’s outside the scope of this document, but an important thing to do. Here is an example of where I had to change a position value:

[Screenshot: example of an adjusted position value]
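If you’re comfortable scripting, the position numbering can be automated. Here is a minimal Python sketch, with hypothetical POV column names, that numbers sibling detail rows under the same point of view sequentially:

```python
from collections import defaultdict

def assign_positions(rows, pov_keys=("SCENARIO", "ACCOUNT", "DEPARTMENT", "PERIOD")):
    """Number sibling detail items under the same POV as 0, 1, 2, ..."""
    counters = defaultdict(int)
    for row in rows:
        pov = tuple(row[k] for k in pov_keys)  # identifies the parent cell
        row["POSITION"] = counters[pov]
        counters[pov] += 1
    return rows

# Hypothetical rows: two items under AC_4010, one under AC_4020.
rows = [
    {"SCENARIO": "Bud", "ACCOUNT": "AC_4010", "DEPARTMENT": "DP_4640", "PERIOD": "Oct", "LABEL": "Apples"},
    {"SCENARIO": "Bud", "ACCOUNT": "AC_4010", "DEPARTMENT": "DP_4640", "PERIOD": "Oct", "LABEL": "Pears"},
    {"SCENARIO": "Bud", "ACCOUNT": "AC_4020", "DEPARTMENT": "DP_4640", "PERIOD": "Oct", "LABEL": "Plums"},
]
assign_positions(rows)
print([r["POSITION"] for r in rows])  # → [0, 1, 0]
```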

Now, left-click and drag the column detail icons from the XML pane to the corresponding column in the Excel grid, e.g., row one of the column you’re mapping. If the entire Excel column doesn’t change color, there’s something wrong with your schema XML file and you’ll have to re-check it:

[Screenshot: Excel column highlighted after dragging the mapping]

Repeat this for every column in the file. When it’s done, it should look similar to this:

[Screenshot: worksheet with all columns mapped]

  4. Export the CSV file as an XML file

Export the CSV file to XML format. Go to the “Developer” tab in Excel and press “Export”:

[Screenshot: the “Export” button on Excel’s “Developer” tab]

If all has gone well, you won’t get an error message and will be prompted to save the file.

  5. Edit the XML file in a text editor

Before we can import the XML file to PBCS, we need to make a few changes. Open the XML file you just exported from Excel in a text editor. Remember that PBCS download XML file named “Supporting Detail.xml” that we saved as a backup? We’ll need that one, too.

Open the Supporting Detail.xml file and copy this section, from row 1 to the row with the <planType> tag:

[Screenshot: header rows of Supporting Detail.xml, from row 1 through the planType tag]

Paste that section into the XML file you just exported from Excel. Replace everything above the <columnDetail> tag. That is, replace rows 1 and 2 in your new XML file with what’s shown above.

Every dimension tag (Scenario, Account, Department, etc.) in the XML file to be uploaded to PBCS has to be replaced with its corresponding name in the dimension mappings shown above (DIM1, DIM2, DIM3, etc.) For example:

<SCENARIO>Bud</SCENARIO> becomes <DIM1>Bud</DIM1>
<ACCOUNT>AC_4409</ACCOUNT> becomes <DIM2>AC_4409</DIM2>
<DEPARTMENT>DP_7670</DEPARTMENT> becomes <DIM3>DP_7670</DIM3>, etc.

Do this for every instance of each dimension tag in the file; do a global search-and-replace in your text editor using the mapping shown, so that every tag that starts out like this:

[Screenshot: dimension tags before the replacement]

Becomes this:

[Screenshot: dimension tags after the replacement]
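This search-and-replace can also be scripted. Here is a minimal Python sketch, using a hypothetical dimension mapping, that renames each dimension tag to its DIMn equivalent:

```python
# Hypothetical mapping, taken from the dimMappings section of my model;
# yours will list different dimensions.
DIM_MAP = {"SCENARIO": "DIM1", "ACCOUNT": "DIM2", "DEPARTMENT": "DIM3"}

def rename_dimension_tags(xml_text, dim_map):
    """Replace each dimension tag with its DIMn name from the mapping."""
    for name, dim in dim_map.items():
        xml_text = xml_text.replace(f"<{name}>", f"<{dim}>")
        xml_text = xml_text.replace(f"</{name}>", f"</{dim}>")
    return xml_text

before = "<SCENARIO>Bud</SCENARIO><ACCOUNT>AC_4409</ACCOUNT>"
after = rename_dimension_tags(before, DIM_MAP)
print(after)  # → <DIM1>Bud</DIM1><DIM2>AC_4409</DIM2>
```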

Now we have to make sure all our opening and closing tags are completed. When we pasted the dimension mapping into this file, we overwrote a tag called <data-set> and added two new ones called <columnDetails> and <planType>.

Go to the second-to-last row and insert a closing </planType> tag
Go to the very last row of the file and replace </data-set> with </columnDetails>

  6. Upload the XML file to PBCS

That’s it; you’re ready to upload the XML file to PBCS! Your planners will have their old PeopleSoft supporting detail available as a reference.



Oracle Business Intelligence Cloud Service (BICS): Managing Cache and Session Log for Administrator User

November 2, 2016

Author: Cameron Ross, Performance Architects

Managing the cache and session log as an administrative user in an Oracle Business Intelligence Cloud Service (BICS) environment is an important task. You can do everything from filtering the session log by user, to canceling any running queries, to clearing previous queries out of the session log in order to clear the cache. There are many other neat features that you can use on the “Sessions and Query Cache” screen.

 

To get started, log in to the BICS instance and select the “Console” button in the group of icons in the middle of the screen.

[Screenshot: BICS home screen with the “Console” button]

Notice all of the options available in the console for the Administrator. For the purposes of this blog post, select the icon that is labeled “Sessions and Query Cache.”

[Screenshot: console options, including “Sessions and Query Cache”]

The “Sessions and Query Cache” capability gives you a list of all users in the instance, as well as information about the browser each one is using and the time the user logged in and last accessed anything in the instance. Below is a table that gives you information about the users in the environment:

[Screenshot: table of users in the environment]

Underneath the user table are all of the queries that have been run, along with the user profiles and the logical SQL. You have the ability to close a specific query, as well as view the log. Viewing the log then lets you see the physical SQL of the query.

[Screenshot: list of queries with options to close a query or view its log]

Notice that at the end of the table displaying the user data, there is now the ability to filter the session log underneath by the user selected above.

[Screenshot: filter option at the end of the user table]

This becomes very useful for getting to a user’s exact actions in the BICS environment with one click of a button. Notice below how this new feature works:

[Screenshot: session log filtered by the selected user]

Another neat feature in the session log is the ability to download a CSV file of the queries. The file can then be opened in Excel, where you can pivot the data.

[Screenshot: option to download the session log as a CSV file]
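As a quick illustration of what you can do with the downloaded file, here is a minimal Python sketch that counts queries per user; the column names are assumptions, since the real export’s headers may differ:

```python
import csv
import io
from collections import Counter

# Hypothetical excerpt of the downloaded session-log CSV.
LOG_CSV = """User,Status,Statement
weblogic,Finished,SELECT ...
weblogic,Finished,SELECT ...
cross,Running,SELECT ...
"""

# Tally how many queries each user has run.
queries_per_user = Counter(
    row["User"] for row in csv.DictReader(io.StringIO(LOG_CSV))
)
print(dict(queries_per_user))  # → {'weblogic': 2, 'cross': 1}
```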

The ability to cancel running requests allows the administrator to do exactly as it sounds. If there is a query that is taking a long time, clicking the “Cancel Running Requests” button will stop that query from completing.

[Screenshot: the “Cancel Running Requests” button]

Closing all cursors takes all executed queries in the session log with a status of “Finished” and clears them from the log. This gives the administrator a better view of any open and running queries.

[Screenshot: the option to close all cursors]

Lastly, you can see at the end of the session log that there is an area where you can sort the queries in the table. Below is an image of the different options for how the administrator can sort the queries:

[Screenshot: sort options for the query table]

After sorting the queries by “User Name Descending,” the administrator should now see all of the queries in the table sorted based on the “User” column.

