Monthly Archives: February 2017

Oracle EPM Architect (EPMA) versus EPM Automate: What’s the Difference?

February 22, 2017

Author: Zack Carman, Performance Architects

The launch of Oracle’s Planning and Budgeting Cloud Service (PBCS) not only brought Oracle EPM (Hyperion) into the cloud, but also added more acronyms to the mix. One of these, EPMA, has caused confusion because it is easily mixed up with EPM Automate, even though the two describe very different capabilities: one belongs to on-premise Oracle Hyperion Planning solutions and the other to PBCS. So, what’s the difference?

Enterprise Performance Management Architect (EPMA) is an on-premise component used to create and manage Oracle EPM (Hyperion) applications. Within EPMA, Hyperion applications can be created and deployed from a shared library, which contains a centralized repository of hierarchies and members. By using this piece of Hyperion software, administrators can streamline the creation and upkeep of applications across multiple Hyperion technologies, including Essbase, Planning, and Hyperion Financial Management (HFM).

EPM Automate is an Oracle Planning and Budgeting Cloud Service (PBCS) command-line tool used to communicate with the service’s server utilities, most often to automate repetitive manual jobs. EPM Automate must be installed wherever it will be called from, whether that is a server or an administrator’s desktop. Through the tool, administrators can run metadata loads, data loads, business rules, and more, which makes EPM Automate the natural communication backbone for any lights-out automation that may be needed.
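As a rough sketch, a nightly lights-out update driven by EPM Automate might look something like the following script. The service URL, credentials, and the job and rule names below are placeholders of my own, and exact command syntax varies by PBCS release:

```shell
#!/bin/sh
# Hypothetical lights-out sequence; "DailyDataLoad" and "AggAll" are
# placeholder job/rule names, and the URL and credentials are fake.
epmautomate login svc_planner MyPassw0rd https://example-pbcs.oraclecloud.com

epmautomate uploadfile daily_data.zip    # stage the latest data file
epmautomate importdata DailyDataLoad     # run a saved data import job
epmautomate runbusinessrule AggAll       # aggregate the plan

epmautomate logout
```

The same sequence works equally well as a Windows batch file, which is typically how it would be scheduled on an administrator’s machine.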

And there you have it. For further information on either one of these components or how to implement them as part of an Oracle EPM initiative, please don’t hesitate to contact the Performance Architects team at sales@performancearchitects.com.

© Performance Architects, Inc. and Performance Architects Blog, 2006 - present. Unauthorized use and/or duplication of this material without express and written permission from this blog's author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Performance Architects, Inc. and Performance Architects Blog with appropriate and specific direction to the original content.

Preventing Oracle Planning and Budgeting Cloud Service (PBCS) Data Load Errors Before They Happen

February 15, 2017

Author: Richard Pallotta, Performance Architects

We recently worked with a client to migrate their on-premise Oracle Hyperion Planning application to Oracle Planning and Budgeting Cloud Service (PBCS). Over the years, they had developed a supporting infrastructure of business processes that worked well for them. Our concern was to mitigate any disruptive process changes that could result from the transition and pose risks to a successful implementation. As a result, we wanted to carry familiar concepts and processes over to the new platform to ease the change.

One of those “nice things” I liked about on-premise Essbase data loads was the error rejection logging feature, which created a log file listing dimension members in the fact data file that are not in the Essbase outline. I’ve seen many creative ways to incorporate that log into data update processes. The manual data upload feature in PBCS does not offer this; it simply errors out on the first rejected member. In other words, it displays the offending member but doesn’t commit any of the upload to Essbase, which is probably a good thing. My suspicion is that the fact data is loaded to a temporary relational table and analyzed as a back-end process before a commit is made. In my opinion, though, this inhibits the process flow more than the old Essbase behavior did.

Our new process leverages the customer’s existing process and simply compares metadata files downloaded from PBCS against the newest data files and identifies potential dimension member rejects. That way, the PBCS administrator just updates the PBCS dimensions before the data is actually loaded.

There are certainly other methods than the one we implemented here: pushing data quality checks (which is essentially what this is) upstream and closer to the source system; using EPM Automate to automate and integrate the entire update process; going old-school with Excel VLOOKUPs or other manual methods; or probably a dozen other hybrid solutions (all of which are a testament to the powerful nature of Oracle’s framework philosophy for PBCS).

I’m a big fan of developing simple processes with a minimum of tools, and this one uses just Windows batch scripts and SQLite. If you’re not familiar with SQLite, it’s a core technology in every Android and iPhone device, and it also manages things like your passwords, browsing history, and other stored artifacts in desktop browsers such as Safari, Chrome, and Firefox. It’s an open-source, cross-platform tool, and if your client or company is okay with using the world’s most widely deployed database, it’s a great solution that requires zero installation: you drop it in a folder or directory and it’s good to go.

Windows batch scripts manage the entire process, which can be scheduled or run manually by the administrator. SQLite reads SQL queries stored as text files and imports the text data files into tables in a database file that I chose to store in the same folder. The process creates log files and writes its results into a folder structure of your choosing; I created mine off the root folder.

The SQL scripts are text files that SQLite reads and processes. The main script creates a table to hold the kickouts and then performs the compare; a few results are displayed on screen as well as written to the results files.
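As a minimal, self-contained sketch of that compare (table names and sample members are my own; the real process imports the PBCS metadata export and the newest data file with sqlite3’s .import command):

```shell
#!/bin/sh
# Sketch of the kickout compare; table names and sample rows are
# illustrative only. Requires the sqlite3 command-line shell.
rm -f kickouts.db
sqlite3 kickouts.db <<'SQL'
CREATE TABLE metadata (member TEXT PRIMARY KEY);   -- members from the PBCS export
CREATE TABLE factdata (member TEXT, amount REAL);  -- members in the new data file
INSERT INTO metadata VALUES ('Product_100'), ('Product_200');
INSERT INTO factdata VALUES ('Product_100', 50.0), ('Product_999', 10.0);
-- Kickouts: fact members with no match in the metadata export.
SELECT DISTINCT f.member
FROM factdata f
LEFT JOIN metadata m ON f.member = m.member
WHERE m.member IS NULL;
SQL
# prints Product_999
```

The LEFT JOIN / IS NULL anti-join is the whole trick: any fact member without a matching metadata row would be rejected by PBCS, so it lands in the kickout list instead.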

Manually running the process is simply a matter of launching the batch script from a command window.

I created a few prompts along the way so the administrator can monitor the job’s progress.

SQLite can easily handle millions of data rows in just a few seconds. When the process finishes, the results are displayed on screen and also written to a text file in the results folder.

To summarize: this is a handy, portable data quality validation tool that can be easily modified for quick deployment, because it consumes a minimum of resources and can be run manually or through a scheduler. It is also cross-platform (substitute shell scripts for the Windows batch scripts), and it preserves a familiar process for existing on-premise Oracle EPM (Hyperion) customers who currently rely on manual data validation tools and processes.

Need help figuring this out at your organization?  Contact Performance Architects at sales@performancearchitects.com and we’d be glad to help!



Oracle Higher Education User Group (HEUG) Alliance 2017: Performance Architects Insider Tips

February 6, 2017

Author: Kirby Lunger, Performance Architects

We’re less than a month away from the Oracle Higher Education User Group (HEUG) Alliance 2017 conference, taking place February 27th – March 2nd at the MGM Grand in Las Vegas! Alliance is the largest meeting of Higher Education, Public Sector, and Federal users of Oracle applications in the world. The Performance Architects team has attended this conference for almost ten years, so we’re well-prepared to provide some “insider tips” on “must-sees” at the conference.

The first of these is the set of community mingle sessions on Sunday afternoon. These sessions are organized by HEUG Product Advisory Group (or “PAG”) subject area. As the HEUG web site mentions, the PAGs are “composed of representatives from HEUG member institutions with expertise in various product modules and technical areas within the world of Oracle application software” (more information on PAGs is available here). This is a great way to meet other subject matter experts and to learn more about the conference content related to your area of interest.

The conference is large and offers tons of fantastic content, so it’s easy to get overwhelmed if you don’t plan your activities.  We strongly recommend that you use the Alliance Agenda Builder to organize your time.  We would be remiss if we didn’t mention that you should add the Performance Architects sessions with our clients to your schedule, including:

  • Returning to FIU: A Year Down the Road with PBCS (Planning and Budgeting Cloud Service), Tuesday, February 28th, 11:15 AM – 12:15 PM (Session ID: 3906)
  • Utilizing Hyperion to Integrate Planning, Reporting, and Financials at Clemson University, Tuesday, February 28th, 1:15 PM – 2:15 PM (Session ID: 4057)
  • Moving University of Pennsylvania to the PBCS Cloud: A Journey of Consolidation, Migration and Enhancement, Wednesday, March 1st, 11:00 AM – 12:00 PM (Session ID: 4182)
  • Adoption of a New Budget Model at Cal State Through Cloud Planning Solution Implementation, Wednesday, March 1st, 1:15 PM – 2:15 PM (Session ID: 4084)

We also recommend that you make some time to check out the Exhibit Hall to see what’s going on in the software and services arena. The Performance Architects team will be at Booth 227. We would be delighted to finally meet in person those of you whom we haven’t yet met, and to reconnect with those we haven’t seen in a while!

Speaking of connecting, Performance Architects is also hosting a special, invite-only event at the conference.  If you’re interested in learning more, please send us a note here and we’ll get in touch with more information.

Follow us on Twitter (@PerfArchitects), Facebook (https://www.facebook.com/PerformanceArchitects/), or LinkedIn (https://www.linkedin.com/company/performance-architects-inc-) to keep up with us during the event, and stay tuned for our post-show blog with updates and news from the conference.



How to Use the “XML Search and Replace” Tool in OBIEE

February 1, 2017

Author: Margaret Motsi, Performance Architects

Every now and then, changes are made to Oracle Business Intelligence Enterprise Edition (OBIEE) catalog objects; for example, column or table names may change as needed. Usually this requires a quick fix, except when the change affects multiple reports within the OBIEE Presentation Catalog.

I faced this situation when some subject area and column/table names were changed. A few hundred reports were affected, and it would have been tedious to tackle them manually. Fortunately, the OBIEE Catalog Manager includes an “XML Search and Replace” tool that can apply multiple text-string replacements at once. The whole process takes just a few seconds.

Steps to take:

  1. Compile a list of the changes that need to be made to the reports (e.g., changes in subject area names, table names, column names, etc.)
  2. Create an XML file with commands for searching and replacing multiple text strings; guidelines:
  • command – Specifies what action is to be taken
  • textReplace – Replaces text other than the name of a table, column, or formula
  • renameTable – Renames only the table part of a formula
  • renameColumn – Replaces the name of a column
  • renameFormula – Renames the entire formula
  • renameSubjectArea – Replaces the name of a subject area
  • subjectArea – Optional attribute that can be applied to renameTable and renameColumn
  • oldValue – Specifies the text string to search for
  • newValue – Specifies the replacement text string
  • ignoreCase – Set to “true” or “false” to control whether matching ignores case

For example:

<actions>
  <action command="renameSubjectArea" oldValue="Sample_Sales" newValue="Sales" ignoreCase="false"/>
  <action command="renameTable" subjectArea="Sample_Sales_WH" oldValue="Dim – Product" newValue="Dim – Prod" ignoreCase="true"/>
  <action command="renameColumn" oldValue="&quot;Dim – Product&quot;.&quot;Product Desc&quot;" newValue="&quot;Dim – Product&quot;.&quot;Product Description&quot;" ignoreCase="false"/>
  <action command="textReplace" oldValue="&amp;quot;Dim – Brand&amp;quot;.&amp;quot;Product Desc&amp;quot;" newValue="&amp;quot;Dim – Product&amp;quot;.&amp;quot;Product Desc&amp;quot;" ignoreCase="false"/>
</actions>

Please note: special characters such as double quotes normally have to be escaped using “&quot;”. This works when using most of the commands. However, when using the textReplace command (the last action in the example above), although “&quot;” escapes the double quotes within the command, the “&” in each “&quot;” has to be escaped as well, by writing “&amp;quot;”.

  3. Create a backup of the folder to be converted. Open Catalog Manager in offline mode and browse to the Catalog folder.
  4. Navigate to the folder to be converted.
  5. From the Tools menu, select “XML Search and Replace.”
  6. In the “Import from File” field, enter the path or click “Browse” to specify the XML file that you created in Step 2.
  7. Click OK to start the search and replace process.

Once applied, the changes to the web catalog objects will be available immediately. If you have any additional questions, please feel free to email info@performancearchitects.com and we will be in touch!

