Monthly Archives: September 2014

Oracle OpenWorld 2014 Presentations are Available!

September 29, 2014

Oracle OpenWorld 2014 is still in full swing in San Francisco, but we have already made our presentations available in our Learning Center™!

More to come on OpenWorld 2014 later this week – stay tuned!

Author: Melanie Mathews, Performance Architects



What the Oracle Exalytics X4-4 Launch Means for Oracle Customers

September 25, 2014

Oracle recently released the Exalytics X4-4, the next generation of its Exalytics In-Memory engineered system ("X" = Intel, "-4" = four physical processors).  While the X4-4 now comes with two terabytes (TB) of RAM, 2.4 TB of PCIe flash storage, networking hardware upgrades, and the option to use the Oracle Database 12c In-Memory option, the most noteworthy change is the switch to Intel's E7-8895 v2 processors.

You might ask, “What’s the big deal about the processors, especially since most of the time server CPUs do not run at 100%?” and “Why do I need faster processors? How will they help make a case for moving to Exalytics from my existing infrastructure?”  In short, this new breed of processors from Intel can vary the number of effective CPU cores on the machine and tweak the clock frequency without having to reboot the machine. Does this sound like too much tech-talk? Let me explain.

First, the E7-8895 v2 is a new class of Xeon processor from Intel that has been customized for Oracle. Each processor has 15 cores that run at 2.8 GHz, with a maximum turbo boost of 3.6 GHz. The X4-4 has four of them, giving a total of 60 CPU cores (20 more than before). More importantly, because of this new processor, the X4-4 can vary its total CPU core count between eight and 60. The E7-8895 v2 can effectively reconfigure itself into a different chip: instead of running 15 cores at 2.8 GHz, it can act as two cores at 4 GHz, or six cores at 3.6 GHz, and so on.
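
To put rough numbers on that trade-off, here is a small, purely illustrative calculation using the example configurations above. Aggregate GHz is just a crude proxy for total capacity, not a benchmark, and the exact core/frequency modes Oracle exposes may differ:

```python
# Illustrative only: compare example E7-8895 v2 core/frequency configurations.
# "Aggregate GHz" is a crude proxy for total throughput, not a real benchmark.

configs = [
    # (cores per socket, clock in GHz) -- example modes mentioned in the text
    (15, 2.8),   # all cores active at the base frequency
    (6, 3.6),    # fewer cores, running at the maximum turbo frequency
    (2, 4.0),    # very few cores at an even higher frequency
]

SOCKETS = 4  # the X4-4 has four physical processors

for cores, ghz in configs:
    total_cores = cores * SOCKETS
    aggregate_ghz = total_cores * ghz
    print(f"{total_cores:>3} cores @ {ghz} GHz -> ~{aggregate_ghz:.0f} aggregate GHz")
```

The takeaway: heavily parallel workloads still favor the full 60-core configuration, while a smaller number of faster cores helps single-threaded tasks finish sooner.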

Fancy talk aside, you might still be wondering how this can help you. Typically, the TimesTen In-Memory Database cache and Oracle Business Intelligence Enterprise Edition 11g (OBIEE), among other products, are licensed PER CPU CORE. So when the X2-4 and X3-4 came out with 40 cores apiece, customers who were moving existing licenses to Exalytics had to work out some bridge deals. For example, some of our customers who were looking to move existing solutions licensed for a 12-core machine to Exalytics had to get a revised licensing agreement to cover a 40-core machine.

With the X4-4, you can choose to run the machine with a much lower core count and pay for only what you need. The added benefit is that with fewer cores enabled, the remaining cores can (theoretically) run at a higher frequency, so individual tasks complete faster.
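
Because products licensed per CPU core scale with the enabled core count, a lower core count can translate directly into licensing savings. The sketch below uses entirely hypothetical numbers (the per-core price and the 0.5 core factor are placeholders, not Oracle list prices or terms) just to show the shape of the math:

```python
# Hypothetical per-core licensing illustration; the price and core factor are
# placeholders, not Oracle list prices or actual licensing terms.
LIST_PRICE_PER_CORE = 1000   # made-up number for illustration
CORE_FACTOR = 0.5            # Intel processors commonly carry a 0.5 core factor

for enabled_cores in (8, 12, 24, 40, 60):
    licensed_cores = enabled_cores * CORE_FACTOR
    cost = licensed_cores * LIST_PRICE_PER_CORE
    print(f"{enabled_cores:>2} enabled cores -> {licensed_cores:.0f} licensed cores "
          f"-> ${cost:,.0f} (hypothetical)")
```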

In addition, since Oracle Business Intelligence Foundation Suite (BIFS or BIF) was previously a required purchase with an Exalytics machine, customers looking to host only Oracle Enterprise Performance Management (EPM) on Exalytics had to work this additional purchase into the deal. BIFS is no longer a required purchase with an Exalytics X4-4, giving customers the choice to purchase the product or to spend that money on additional products and/or seats for existing licensed products.

We are not sure at this time how this variable core performance and its pricing will affect products like Oracle Enterprise Performance Management (EPM or Hyperion) and Oracle Endeca Information Discovery (OEID) that can be co-deployed alongside BIFS. It looks like Oracle is revising the product to make it easier than ever to move a customer solution onto Exalytics, so it is unlikely that the change will adversely affect you. In either case, this is one more reason to look forward to Oracle OpenWorld this upcoming week. If you are going, feel free to look for us there; more information on our participation at OpenWorld is available here.

Look for more from us on the Exalytics X4-4 as more information becomes available.

Author: Andrew Tauro, Performance Architects



Dissecting Data Load Options in Hyperion Financial Management (HFM)

September 24, 2014

In Hyperion Financial Management (HFM), you can load data using a text file; Oracle Financial Data Quality Management Enterprise Edition (FDMEE) in the newest version of Hyperion; or ERP Integrator (ERPi) in earlier versions of Hyperion.  This blog covers the first method, loading a text file.

The general structure of a data load text file comprises a Group Dimension Section and a Data Section.  I call these out because they are going to come into play as we focus on data load options.  In HFM Version 11.1.2.3.500, you load data into the application by navigating to Consolidation -> Load -> Data:

[screenshot: navigating to Consolidation -> Load -> Data]

When it comes to loading data from a text file into an application, you have multiple options: Merge, Replace by Security, Replace, Accumulate, and Accumulate within File.

[screenshot: Load Mode options]

To demonstrate how these options work, we are going to load the same trusty data load file and see how and if results differ.

Merge Option

The Merge option essentially overwrites data in the application with data from the load file.  The saving grace is that application data that is not changed by the load file remains unchanged in the application.  Here are the results after loading the data load file with this option:

Application Data: [screenshot]

Data Load File: [screenshot]

Result of data load using Merge option: [screenshot]

With the Merge option you can see that the Cash and Trade Receivables data from the data load file rules the roost and overwrites the application data.  However, since there was no data in the file for Marketable Securities, the application data for this account remains in the system.
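
If it helps to see that rule spelled out, here is a small Python sketch that mimics the Merge behavior on an in-memory dictionary. The account names and values are made up for illustration; this is not HFM code:

```python
# Sketch of HFM "Merge" behavior: cells present in the load file overwrite the
# application's values; cells absent from the file are left untouched.
application = {"Cash": 10, "Trade Receivables": 20, "Marketable Securities": 30}
load_file   = {"Cash": 100, "Trade Receivables": 250}   # no Marketable Securities row

def merge(app_data, file_data):
    result = dict(app_data)      # existing application data is kept...
    result.update(file_data)     # ...and overwritten where the file has a value
    return result

print(merge(application, load_file))
# {'Cash': 100, 'Trade Receivables': 250, 'Marketable Securities': 30}
```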

Replace Option

The Replace option replaces data in the application with data from the load file.  Maybe this sounds the same as the Merge option to you?  It’s not.  For each combination of the Scenario, Year, Period, Entity and Value dimensions, the Replace option clears all data values from the application.  After the values are wiped out, the data from the data load file is pushed into the system.  The result looks like this:

Application Data: [screenshot]

Data Load File: [screenshot]

Result of data load using Replace option: [screenshot]

Notice that with the Replace option, the Marketable Securities value has been wiped clean.  This is because it shares the same unique combination of the Scenario, Year, Period, Entity and Value dimensions as stipulated in the Group Dimension Section of our load file below:

[screenshot: Group Dimension Section of the data load file]
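
To make the clearing behavior concrete, here is an illustrative sketch that keys made-up values by a simplified point of view and clears everything sharing the file's Scenario/Year/Period/Entity/Value combination before loading (again, not HFM code):

```python
# Sketch of HFM "Replace" behavior: every cell sharing the load file's
# Scenario/Year/Period/Entity/Value combination is cleared first, then the
# file's values are loaded. Dimension keys and values are made up.
application = {
    ("Actual", "2014", "Jan", "Boston", "<Entity Currency>", "Cash"): 10,
    ("Actual", "2014", "Jan", "Boston", "<Entity Currency>", "Trade Receivables"): 20,
    ("Actual", "2014", "Jan", "Boston", "<Entity Currency>", "Marketable Securities"): 30,
}
load_file = {
    ("Actual", "2014", "Jan", "Boston", "<Entity Currency>", "Cash"): 100,
    ("Actual", "2014", "Jan", "Boston", "<Entity Currency>", "Trade Receivables"): 250,
}

def replace(app_data, file_data):
    # Scenario/Year/Period/Entity/Value combinations present in the file
    file_groups = {pov[:5] for pov in file_data}
    # Drop every application cell in those combinations, regardless of account...
    result = {pov: v for pov, v in app_data.items() if pov[:5] not in file_groups}
    # ...then load the file's values
    result.update(file_data)
    return result

print(replace(application, load_file))
# Marketable Securities is gone; only Cash=100 and Trade Receivables=250 remain.
```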

Replace by Security Option

I am not going to devote much space to this option, but essentially it works like Replace, applied only to the dimension members to which you have security access.

Accumulate Option

The Accumulate option gives you the best of both worlds, where the data from the data load file is added to the data in the application.  Results for this option look like this:

Application Data: [screenshot]

Data Load File: [screenshot]

Result of data load using Accumulate option: [screenshot]
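
Sketched the same way as the earlier examples (made-up values, not HFM code), Accumulate simply adds the file's values to whatever is already in the application:

```python
# Sketch of HFM "Accumulate" behavior: file values are added to the values
# already in the application. Account names and values are made up.
application = {"Cash": 10, "Trade Receivables": 20, "Marketable Securities": 30}
load_file   = {"Cash": 100, "Trade Receivables": 250}

def accumulate(app_data, file_data):
    result = dict(app_data)
    for account, value in file_data.items():
        result[account] = result.get(account, 0) + value
    return result

print(accumulate(application, load_file))
# {'Cash': 110, 'Trade Receivables': 270, 'Marketable Securities': 30}
```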

Accumulate within File Option

The last load option we will investigate is the Accumulate within File option.  Do not confuse this with the Accumulate option covered previously; they are not the same.  The Accumulate within File option does not appear in the Load Mode drop-down; rather, it is displayed next to a check box in the Default Load Options pane highlighted below (as an aside, other Default Load Options include File Contains Ownership Data and File Contains Process Management Data, which are topics for another day):

[screenshot: Default Load Options pane]

Consider Accumulate within File an “add-on” to the Merge and Replace load modes.  The Merge and Replace methods will function as outlined above, but with a twist.  With the Accumulate within File option checked, when a data load file contains multiple lines of data for the same point of view, the lines are added together and their total is loaded.  What I didn’t tell you before (truly for your own good – this is to keep your head from spinning!) is that the basic Merge and Replace options will only load the last record for multiple lines of data for the same point of view.

Confused?  Let’s examine one more example, but this time we’ll look to the data load file for an explanation.  Imagine the Data Section (in the red box) of our data load file looked like this:

[screenshot: Data Section of the data load file]

In this case, we have two lines for Cash with data for the same intersection (100 and 200).  If we just used the basic Merge option when loading this file we would see a value of 200 populate this cell in our application, as 200 represents the last record for this data point.  However, if we used Merge and checked the Accumulate within File option, our result would be a value of 300 for Cash as it would total the two records (of 100 and 200).  Let’s compare the results of Merge versus Merge with Accumulate within File: 

Result of data load using Merge option: [screenshot]

Result of data load using Merge with Accumulate within File option: [screenshot]

Using Replace with the Accumulate within File option, HFM acts similarly: the multiple lines of data for the same point of view are totaled together and then loaded into the system.  The load then behaves as it would in basic Replace mode, where data sharing the same unique combination of the Scenario, Year, Period, Entity and Value dimensions is cleared before the text file is loaded.

Result of data load using Replace option: [screenshot]

Result of data load using Replace with Accumulate within File option: [screenshot]
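
Pulling the last few examples together, here is an illustrative sketch of what the Accumulate within File check box changes: duplicate lines for the same point of view are summed within the file first, and that total is then handed to whichever load mode (Merge or Replace) was selected. Values are made up:

```python
# Sketch of the "Accumulate within File" check box: duplicate lines for the same
# point of view are summed *within the file* first; the total is then loaded
# using the selected mode (Merge or Replace). Values are made up.
file_lines = [("Cash", 100), ("Cash", 200), ("Trade Receivables", 250)]

def accumulate_within_file(lines):
    totals = {}
    for account, value in lines:
        totals[account] = totals.get(account, 0) + value
    return totals

# Without the check box, only the last Cash line (200) would be loaded;
# with it, the two Cash lines are combined into 300 before Merge/Replace runs.
print(accumulate_within_file(file_lines))
# {'Cash': 300, 'Trade Receivables': 250}
```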

Author: Joseph Francis, Performance Architects



Enhanced Oracle Hyperion Planning Validations and Data Integrity

September 17, 2014

With each version of Oracle Hyperion Planning, we see new enhancements that (hopefully) help us implement a more efficient solution. Each of these enhancements brings another challenge: how to adapt old and new systems to fully take advantage of the new functionality. At times, we get caught up in the excitement of trying to implement the most modern technology, forgetting about the power of existing functionality. Recently, I started to explore what I believe is a major issue that remains unsolved: data integrity.

The Oracle team developed a fairly straightforward answer to this: Process Management. If you are not familiar with it, the idea of Process Management within Hyperion Planning is to allow a user to "own" a specific Entity and then promote that Entity to an analyst or administrator for review once a budget or forecast has been completed.

In recent versions, the functionality has been expanded to allow for more specific paths, but the general idea is to control the flow of a specific Scenario, Version, and Entity combination, also known as a Planning Unit. Once a user gives away ownership of a specific Planning Unit, access is also lost for data entry. This allows each user to control the flow of the entity, relieving administrators of redundant tasks.

But, of course, there are some issues that have pushed many away from even attempting to implement Process Management. The idea seems great (well, at least to me), but there are some roadblocks. Let’s have a look.

Too Complex to Implement Efficiently

First of all, Process Management seems complex because, well, it has a lot of complexity. In the new versions, it seems even more difficult to set up with all the different combinations that are possible for just the Process Management functionality. However, one thing to remember is that this is an out-of-the-box solution. It is meant to allow for the most complex solutions….but….that doesn’t mean YOU have to make it that complex. For example, we can have a path that has an owner approve to an analyst, who then approves to another analyst, who then approves to…you get the point. Or, we can simply have an analyst approve to a general administrator user or group.

Time Consuming to Keep Sending Planning Units Back to Users for Corrections

One of Oracle's best introductions (which passed by many without notice) is the idea that Validation Rules on data forms can also be integrated with Process Management. Planning settings can prevent a Planning Unit from being approved if its validations do not pass. However, one issue with this is…

The Approval Process Slows with Too Many Validations and Does Not Prevent Business Rule Execution

Agreed. I can't have a comeback for every question, right? In my opinion, Oracle overlooked the incorporation of business rules within the Process Management flow. This is especially true for any right-click rules that impact the way the data looks. Even after a user loses the ability to input data, these right-click business rules can still be run; they can only be prevented by taking security access to the business rule away.

BUT WAIT! There may be another way. I’ve recently put up some fairly detailed explanations on Cameron Lackpour’s (a very gracious host, indeed) blog to explain how this process would work. The process involves a full validation solution featuring business rules, data forms, and the process management functionality that has been discussed. This can be seen at Part 1 and Part 2.

Summary

Hyperion Planning is implemented to increase efficiency for forecast and budgeting processes in a quick, out-of-the-box manner. One of the issues plaguing Planning system administration is that administrators have to perform manual, redundant tasks like opening and closing the system. Through the proper use of Process Management, we can start to eliminate some of these items and allow administrators and analysts to focus on more pertinent issues. In addition, users can control their own budget flow without any delay time, such as having to wait for the administrator to perform an action.

With the inclusion of the ideas detailed in the blogs I mentioned that are posted on Cameron Lackpour’s blog, these steps can be taken even further to seamlessly maintain data integrity for any Planning application. All of these items can be a little overwhelming in total, so feel free to reach out to the Performance Architects team for any follow-up questions….or maybe….a demo at communications@performancearchitects.com!

Author: Tyler Feddersen, Performance Architects



Oracle Hyperion Profitability and Cost Management (HPCM): All Grown Up Now

September 12, 2014

Oracle Hyperion Profitability and Cost Management (HPCM) has accurately been depicted as a hidden gem within the Oracle EPM (Hyperion) suite. The product has been a part of the suite for a number of years, but has not gotten a lot of attention until recently.

Part of the reason for this lack of traction is that the business community was not fully prepared to take advantage of the powerful analytical capabilities contained within HPCM. As finance departments and entire organizations have grown in their abilities to harvest and manage data, a great opportunity has emerged for boosting the bottom line by focusing on deeper and more detailed contribution and profitability analyses, all made much simpler, more transparent, and more traceable through the use of HPCM.

In addition, Oracle has also made some solid improvements to the product in recent releases, which we’ll look at more closely below:

Detailed Profitability Capabilities Introduction

HPCM originally allowed only for "Standard Profitability" modeling, which focused on contribution analysis and on allocating direct and indirect costs or revenue to a destination through a hierarchical set of stages using drivers and direct assignments. Standard Profitability uses an architecture that includes two Essbase databases (a block storage option (BSO) database for calculations and an aggregate storage option (ASO) database for reporting) and a relational database that stores the artifacts and definitions of the model, including driver calculations, assignment definitions, and so on. While the Standard Profitability architecture can handle large volumes of data, there are still practical considerations that can limit dimension sizes because of the multidimensional architecture.

"Detailed Profitability" offers a whole new set of capabilities that can act as a complement to Standard Profitability, depending on the needs of the organization. Detailed Profitability can be used for many analytical purposes, such as applying customer support and product activity costs to a large number of invoice lines. The architecture is completely different and uses only relational database technology. This provides great flexibility because tables can be mapped from a user-defined relational schema to corresponding tables in HPCM. Detailed Profitability also processes flows and allocations for only a single source-destination combination. The relational, non-hierarchical nature of the architecture allows for processing enormous volumes of data (potentially hundreds of millions of rows) because the destination row is not defined by a unique intersection of dimensions.

After the initial introduction of Detailed Profitability, several key enhancements have helped to make the product even stronger. Detailed Profitability now handles allocations through Calculation Rules, a large set of calculation artifacts aimed at making allocation assignments much less tedious through using a top-down approach. Calculation Rules control flows and allocations through a broad application of rules, and exceptions to these rules are managed by way of individual assignments.

Another key enhancement is the ability to transfer relational data to an OLAP model. There is new support for deploying three separate Essbase databases that correspond to source stage, contribution, and destination stage data. With this enhancement, users can now perform multidimensional analysis through the various reporting tools, including Oracle Hyperion Financial Reporting, Oracle Hyperion Web Analysis, Oracle Smart View for Office, and Oracle Business Intelligence Enterprise Edition (OBIEE).

Smart View integrations and Query Customization Capabilities

HPCM used to offer only a default set of out-of-the-box views/queries where users could view or input data within the application. Newer releases allow these views to be modified, or new ones to be created, based on user needs or preferences for retrieving and viewing data.

Oracle recently added new integrations with Smart View. The product comes with a default set of queries, which can be modified, expanded and defined within the application, then run in Smart View. These queries can be saved for future use among all users and are available for migration in Lifecycle Management.

In addition, Standard Profitability expands integration capabilities by providing aggregated totals in the Stage Balancing tab of the application with hyperlinks to Smart View, where a user can drill down on data, and pivot or maneuver data for deeper analysis.

Model Statistics Query Capabilities

In recent releases, Oracle has included a pre-built SQL query, "modelstats.sql," as part of the installation. It can be very beneficial for providing insight into model statistics, such as the number and usage of various model objects. The query is database-neutral and can be executed against either a SQL Server or an Oracle database backend.

This query can be beneficial for understanding the implications of a big change: run it to capture a baseline, then run it again after the change and compare the results. For example, if an assignment rule that is used thousands of times needs to be changed, the query can provide some insight into the impact of making such a change.
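
One simple way to make that before-and-after comparison repeatable is to capture the query's output to a file and diff it. The sketch below is a generic pattern, not HPCM tooling; it assumes a pyodbc connection and that modelstats.sql returns rows that can be reduced to name/value pairs, which you would adjust to the query's actual shape:

```python
# Illustrative pattern for comparing model statistics before and after a change.
# Assumes a pyodbc connection string and that the statistics query returns rows
# reducible to (statistic name, value) pairs -- adjust to the real output shape.
import json
import pyodbc

def capture_stats(conn_str, sql_path, out_path):
    with open(sql_path) as f:
        sql = f.read()
    with pyodbc.connect(conn_str) as conn:
        rows = conn.cursor().execute(sql).fetchall()
    stats = {str(row[0]): row[1] for row in rows}   # name -> value (simplified)
    with open(out_path, "w") as f:
        json.dump(stats, f, indent=2, default=str)
    return stats

def diff_stats(baseline_path, current_path):
    baseline = json.load(open(baseline_path))
    current = json.load(open(current_path))
    for name in sorted(set(baseline) | set(current)):
        if baseline.get(name) != current.get(name):
            print(f"{name}: {baseline.get(name)} -> {current.get(name)}")

# Usage (hypothetical connection string and file names):
# capture_stats(DSN, "modelstats.sql", "baseline.json")   # before the change
# capture_stats(DSN, "modelstats.sql", "after.json")      # after the change
# diff_stats("baseline.json", "after.json")
```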

Greater Automation Capabilities via Oracle Web Services Manager (OWSM)

Many data management tasks are handled through automated processes. New support has been added for automating many common HPCM functions through Oracle Web Services Manager (OWSM). Through a Java API, developers can create custom scripts to automate tasks that a user might otherwise perform manually in HPCM. This enhancement is very significant for organizations looking to layer HPCM onto an existing EPM (Hyperion) environment, most of which is probably already automated (to some extent).

These new features and enhancements are just a few of the many improvements that Oracle has made to HPCM in recent releases, seeming to indicate a new or renewed focus on this very powerful product that – until now – really hasn’t received the attention it deserves. This, combined with growing business focus on producing more detailed and granular analysis of profitability and contribution, makes HPCM poised to be a break-out star in the EPM space going forward.

Author: Shane Hetzel, Performance Architects


© Performance Architects, Inc. and Performance Architects Blog, 2006 - present. Unauthorized use and/or duplication of this material without express and written permission from this blog's author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Performance Architects, Inc. and Performance Architects Blog with appropriate and specific direction to the original content.