Monthly Archives: March 2015

Strategic versus Operational Planning: Tale of a Tug-of-War

March 27, 2015

Have you ever been in a situation where your CFO asks you to tweak a few assumptions in your operational forecast, and then wonders why it takes many hours or even a few days to review the updated version? Or asks you to convert the operational forecast into a strategic forecast to review with the Board of Directors, just by rolling numbers up to a parent level? Most organizations experience such challenges as a tug-of-war between operational and strategic planning.

Operational planning primarily involves the Financial Planning and Analysis (FP&A) function, whereas strategic planning involves the breadth of Corporate Finance (and related) functions such as FP&A, Treasury, Controllers, Corporate Strategy, Tax, and Legal. To understand the nuances of operational versus strategic planning, let’s contrast the components of operational and strategic forecasts.

Operational planning typically includes department-level operating components such as operating revenues, cost of goods sold (COGS), compensation expenses, utilities, depreciation, and so on. Strategic forecasting includes these operating components, but at a much higher level (usually not at the individual department level). Strategic forecasts are mostly account-based and include several components that typically do not appear in an operational forecast, such as debt schedules, debt covenants, capital structure changes, treasury assumptions, and mergers and acquisitions.

Taking an operating forecast and converting it to a strategic forecast (or vice versa) is not an easy task by any means (a toy sketch of the roll-up involved follows the list below). So, how can you integrate and build a consolidated approach? Based on our many years of experience, the following stages provide an enterprise-wide approach that we feel fits well, and one that Performance Architects has been fortunate to design and build for many of our clients:

  • Stage 1. Design an account structure that models financial statements in the way the business is actually run (this should not depend on where or how actuals are reported)
  • Stage 2. Build a strategic model that incorporates all inputs, assumptions and drivers from all the stakeholders across the enterprise
  • Stage 3. Use the strategic forecasts output to create baseline targets at the operational level (department, project, etc.)
  • Stage 4. Build an operational model/forecast
  • Stage 5. Validate operational plans against actuals data
  • Stage 6. Based on the analysis and comparisons from Stage 5, either change the way actuals are coded and reported so that they sync up with the way the business views and runs the unit, or change the way strategic forecasts are modeled by updating the account structures or methodologies on the strategic side. Each round of this tug-of-war synchronizes your operational and strategic models and brings them one step closer together.
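
To make the operational-to-strategic roll-up concrete, here is a minimal sketch in SQL. Every table and column name below is a hypothetical stand-in (a dept_forecast table holding the department-level operational plan, and an acct_map table mapping detailed operational accounts to summarized strategic accounts); nothing here comes from any specific product:

    -- Roll the department-level operational forecast up to the
    -- summarized strategic account structure (hypothetical schema).
    SELECT m.strategic_account,
           f.fiscal_year,
           SUM(f.amount) AS operational_rollup
    FROM   dept_forecast f
    JOIN   acct_map m ON f.account = m.operational_account
    GROUP  BY m.strategic_account, f.fiscal_year;

The SUM is the easy part; the hard part is agreeing on the account mapping, which is exactly what Stages 1 and 6 work toward.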

You can use a combination of tools like Oracle Hyperion Strategic Finance (HSF) for strategic forecasting and Oracle Hyperion Planning for operational forecasting to achieve this multi-stage approach to enterprise planning. Once you design and implement such a solution, the intent is not necessarily to have a clear-cut winner in the ensuing tug-of-war, but to ensure that you unlock the full potential of enterprise-wide planning without compromising either the operational or the strategic plans. You will also be able to unlock better analytics from your performance management applications.

Author: Sree Kumar, Performance Architects



Oracle Data Relationship Management Version 11.1.2.4 Release Highlights

March 12, 2015

Oracle recently released Oracle Data Relationship Management (DRM) Version 11.1.2.4 with some exciting enhancements. One great aspect of this release is that the DRM architecture was reworked to improve overall performance for users. In addition to anticipated updates to the Oracle Data Relationship Governance (DRG) module and a number of native enhancements, another notable change is that DRM is more tightly integrated with Oracle’s EPM (Hyperion) software-as-a-service (SaaS) offerings.

DRM Cloud Integration with Oracle Planning and Budgeting Cloud Service (PBCS)

DRM now integrates with both Oracle Planning and Budgeting Cloud Service (PBCS) and Oracle Hyperion Planning, allowing dimensions to be exported from or imported into DRM. This is a positive for organizations that maintain a combination of on-premise and cloud-based applications (for example, planning in PBCS while using Oracle Hyperion Financial Management (HFM) on premise for financial close and consolidation). This particular integration leverages the outline load utility (OLU) file format that we are all familiar with for managing dimensions and member properties in Hyperion Planning applications.
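
As a rough illustration of the general shape of an OLU-style load file (the member names and property columns below are made up for this example, so check the format against your own application's dimensions before relying on it):

    Entity, Parent, Alias: Default, Data Storage
    E100, Total_Entity, "East Region", Store
    E200, Total_Entity, "West Region", Store

Each column header after the member-name column identifies a member property, which is what makes this format convenient for moving dimension metadata between DRM and Planning.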

Data Relationship Governance (DRG)

Data Relationship Governance (DRG) is the change management and workflow module native to DRM as of release 11.1.2.3. Some of the enhancements in this release include:

  • Conditional Workflow Stages. You can assign a conditional action for a stage in the workflow, or follow a separate workflow mid-stream if a particular condition is met. If the condition is not met, you can continue following the primary workflow.
  • Request Items from a File. You can now include a flat file within a request. For example, for a remediation request, the output from a validation could be included within the request as a flat file.
  • Request Auditing. You can now view the final approver of a workflow within the DRM transaction history, which was not possible prior to this release. In the initial 11.1.2.3.x releases, viewing DRG history meant accessing the work list and searching for the specific request ID. Now you can query workflow tasks to see the user, stage, time, and/or type of task for a request, which makes it much easier to identify the responsible user. This is similar to the native DRM audit functionality.
  • Request Attachments. You can now add attachments to a workflow request to explain actions and justify changes. Attachments are accessible to all workflow participants.
  • Improved Handling of Request Submissions.
    • Submitter can withdraw a request.
    • Commit Users may access escalation requests from prior stages.
    • Escalation Users can advance requests up the chain for any stage.
    • The Data Manager role can now take control of in-flight requests, “unclaim” a request, or delete a request if needed.
  • Workflow Page Layout Improvements.
    • The workflow is now laid out in tabs.
    • Labels and instructions can now be included, and can be displayed or hidden.
    • Hierarchies can be filtered by task.

Native DRM Enhancements

Getting back to native DRM, there are a number of enhancements to both the import and export functionalities that I will summarize without going into too much detail.

  • Export Enhancements.
    • Substitution Parameters. An exported column can now be driven dynamically via a substitution parameter, optionally combined with a constant value to generate the resulting column.
    • Hierarchy Groups. Uses already-defined hierarchy groups to auto-select hierarchies for export.
    • Dynamic Columns. As mentioned above, dynamic columns can be generated within exports in this release.
  • Import Enhancements.
    • Create Hierarchies from Orphans. Hierarchies can be created from a list of orphans (nodes in DRM not assigned to any existing hierarchies).
    • Substitution Parameters. Can be used to set default values and run-time values.
    • Node Ignoring. You can specify nodes to ignore during an import process and you can also specify to skip header records.
    • Single-Section Imports. Per the import file format, a single block section can now be imported without the header section.
    • Validations and Node Types. These can now be assigned during an import.
    • String Handling and Memos. Memos can now be loaded via an import, and string handling has been improved.

DRM Improved Infrastructure and Architecture

In this release, the way the DRM engines are allocated across server processors has been reworked to improve overall performance. Basically, DRM can now better leverage multi-processor servers to run operations more efficiently. This means that calculation and read operations have more power, making DRM faster and more scalable!

Stay tuned for further updates from our team on the 11.1.2.4 release.

Author: Jason Sawicki, Performance Architects




Enhancing an Existing Application by Adding a Relational Table as a Data Set in Oracle Endeca Information Discovery (OEID)

March 4, 2015

In my previous post on OEID, we saw how to quickly create an Oracle Endeca Information Discovery (OEID) application from a relational table and use it for data exploration. Now suppose that during this exploration we discovered some questions that the existing data could not answer. The beauty of OEID is that you can add data to the existing application to help answer such questions. In this post, I will show you how to add another table to the application as a new data set, so you don’t have to change what is already in place.

Starting with the application we created in my last OEID post, the data showed what kinds of leads different recruiting lead sources generated for an organization. A question may have come up: “What did it cost me to get those leads?” With that information, I could perhaps better allocate my budget to get the best bang for my recruiting buck.

Assuming the organization keeps this data in another relational table that indicates what each of the lead sources charged per lead, the same steps as in the previous post will create a new data source that pulls from that table.
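
For illustration only, a table along these lines might back that new data source; the table and column names are hypothetical stand-ins, not taken from the actual application:

    -- Hypothetical cost-per-lead table backing the "LeadSources" data source.
    CREATE TABLE lead_sources (
        source_name    VARCHAR(100),  -- e.g. 'College Job Fair'
        source_type    VARCHAR(50),   -- e.g. 'Direct Marketing'
        cost_per_lead  DECIMAL(10,2)  -- what the source charges per lead
    );

    -- The query entered when defining the data source can be as simple as:
    SELECT source_name, source_type, cost_per_lead
    FROM   lead_sources;

Let’s see how we can add that to the existing application: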


Open the application that was previously created, click on the settings icon (the little “gear” in the top right corner), and choose “Application Settings.”


Once there, select “Data Sets” from the menu on the left.


Then click on the button on the right to add a “New Data Set.”


Assuming “LeadSources” is the Data Source that was created from the new table, select it to include in the application.


Make changes to the definition of the attributes being pulled from the Data Source and click “Done” to add this to the application. Notice the check box that says “Automatically create refinement rules.” This is OEID volunteering to automatically find the links between this new data set and the data set(s) you previously added to the application.


OEID Studio automatically creates a new tab/page that displays the data from the new data set. Here you see a chart that shows the number of leads each type of source generated, and the average cost per lead for each source.
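
If you want to sanity-check what such a chart displays against the source data, a rough SQL equivalent (using the same hypothetical tables as above, with recruiting_leads standing in for the original leads data set) would be:

    -- Lead counts and average cost per lead, by source type
    -- (hypothetical tables; OEID computes this inside the chart).
    SELECT s.source_type,
           COUNT(l.lead_id)     AS leads_generated,
           AVG(s.cost_per_lead) AS avg_cost_per_lead
    FROM   recruiting_leads l
    JOIN   lead_sources s ON l.source_name = s.source_name
    GROUP  BY s.source_type;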


So how does this link up to the existing data set? Remember those refinement rules that OEID volunteered to create? Say you want to narrow the data down to only the leads that came in via “Direct Marketing” efforts. Hover the pointer over that value on the chart to see what the average cost was.


If you click on the graph at that point, you see on the left that the data set has been narrowed down to only “Direct Marketing” sources. The chart changes to show the different sources or efforts that fall under that category.


If you navigate back to the “Leads” page, you see that this data set was also narrowed down to the same value. How did this happen? This is one of the refinement rules that OEID automatically created, because it was able to find the common attributes between the data sets based on name and data type.
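
Conceptually, such a refinement behaves like a join on the shared attribute. As a rough illustration (again with hypothetical table and column names), the cross-data-set filter is roughly equivalent to:

    -- The data sets share a source_name attribute (same name and data
    -- type), so refining one set refines the other, much as this
    -- join-based filter would (hypothetical tables).
    SELECT l.*
    FROM   recruiting_leads l
    JOIN   lead_sources s ON l.source_name = s.source_name
    WHERE  s.source_type = 'Direct Marketing';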

In a future post, I’ll show you how to create or remove such refinement rules yourself, so you can fine-tune the application, especially if the attributes are not named exactly the same. As usual, if you have any questions or comments, or need more information about OEID, feel free to leave a comment or send us a note at communications@performancearchitects.com.

Keep discovering!

Author: Andy Tauro, Performance Architects



A Quick Way to Explore Data from Relational Tables in Oracle Endeca Information Discovery (OEID)

March 4, 2015

Usually when I hear about loading data into an Oracle Endeca Information Discovery (OEID) application for exploration, it involves using the Integrator ETL component. While this is a powerful way of loading data of varied types and from myriad sources, it is certainly not for everyone. With the addition of the Provisioning Service in OEID 3.x, one can now create quick applications right from the OEID Studio web interface, without having to figure out how to create complex graphs.

Over the next few posts, I am going to walk you through a quick and quite easy process of creating OEID applications from relational tables or flat files that you may have access to. In this post, I will keep it simple and show you how to create an OEID application that lets you explore data from a single relational table.


Before creating the application, we will need to create a data source. From the “Control Panel” -> “Data Source Library” page, click on “New Data Source” to begin the process.


Enter the “Data source name” and connection information, and click on “Next.”


Enter the SQL query that will define the pull from the relational table. This can be a simple SELECT statement that pulls from a single table or a view, or a more complex one that joins multiple tables.
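
For example, a minimal query might look like the following; the table and column names here are hypothetical placeholders rather than anything from the actual application:

    -- Simple pull from a single table:
    SELECT lead_id, source_name, lead_date, status
    FROM   recruiting_leads;

    -- Or a more complex pull that joins multiple tables:
    SELECT l.lead_id, l.lead_date, s.source_name, s.source_type
    FROM   recruiting_leads l
    JOIN   lead_sources s ON l.source_name = s.source_name;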


You will then have the opportunity to fine-tune the “Data Types” of the fields that were picked. OEID does a pretty good job of inferring the types for you.


“Save” the settings to return to the “Data Source Library,” where you will see a confirmation that the data source was created.


Now return to the “Home” page and create a new application, choosing the data source you just created.


Confirm the fields (or “attributes,” as they will now be called) to be included in the application, and click on “Done” to create the application.


The Endeca Server summarizes the data and presents it to you, so you can create visualizations and be on your way to “discovering” hidden information in your data.

In future posts, I’ll walk you through the process of adding more data to this “data set” to supplement the data and answer questions as they come up. If you have any questions regarding this particular post or anything else on OEID, feel free to leave a comment below or send us a note at communications@performancearchitects.com.

Keep discovering…

Author: Andy Tauro, Performance Architects


© Performance Architects, Inc. and Performance Architects Blog, 2006 - present. Unauthorized use and/or duplication of this material without express and written permission from this blog's author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Performance Architects, Inc. and Performance Architects Blog with appropriate and specific direction to the original content.