Monthly Archives: July 2014

What’s new in Oracle Hyperion Data Relationship Management (DRM)? Release 11.1.2.3.500 Highlights and What to Expect in Release 11.1.2.4

July 30, 2014

As many of you know, Oracle Data Relationship Management (DRM) was originally called “Razza DS” and was acquired by Hyperion. The product name was then changed to Hyperion Master Data Management (MDM). Shortly after Oracle acquired Hyperion, the product was again renamed to Data Relationship Management (DRM) and is now one of several Oracle offerings in the data governance and integration arena (what Oracle calls its “Data Relationship Governance” or “DRG” solution set).

DRM is a master data management tool used for building and maintaining the consistency of non-transactional master data. It is a technology-independent solution that supports integration of all data sources and targets, and it supports data governance by driving a consistent set of master data definitions down to various targets.

Over the past year or so, the Oracle EPM (Hyperion) product management team has focused in much greater detail on the concept of Data Relationship Governance (DRG) and how it applies to further data integration efforts within the Oracle EPM (Hyperion) product suite, going so far as to implement DRG capabilities in the most recent releases of its data governance and integration tools.

The 2013 Oracle DRM 11.1.2.3 release included a major shift to this new Data Relationship Governance (DRG) model. Before this release, change management, workflow, and approvals (if desired) required a customized solution that used the DRM Application Programming Interface (API) and Business Process Execution Language (BPEL), along with a strong dose of Java programming capability. This type of solution usually required third-party resources and added another layer of complexity and cost, both during development and during ongoing operational ownership. Now, with DRG, there is a configurable tool available right out of the box within DRM that includes a governance workflow and approval process.

To summarize (courtesy of Oracle), the DRM DRG capabilities include:

  • Workflow models. Controls user tasks, stages of workflow, and types of data involved with governing a set of changes to data in DRM.
  • Workflow requests. Initiates changes or corrections to be completed, approved, enriched, and committed by other users using governance workflows.
  • Governance work list. Provides a central location for interacting with change and remediation requests. From the work list, users may submit change requests or review and participate in requests assigned to their user group.
  • Alerts and notifications. Offers DRM alerts and email notifications to governance users and data managers for requests with which they are associated. The configuration of workflow model stages for a request controls whether users are notified of activities taking place in a particular stage and which users get notified.
  • Workflow path. Identifies the stages of workflow to be completed for a request; the active stage for the request; the completion status of previous stages; and the approval count for the active stage. The workflow path enables participating users to understand how long a request may take, how many approvals may be involved, and where a request is positioned in the overall approval process.
  • Workflow tags. Allows governance users to assign a due date for completion of a request and to mark the request as urgent.

With the latest 2014 DRM 11.1.2.3.500 release, much of the focus is still on enhancing Data Relationship Governance (DRG) functionality. Added features and updates include:

1. Copy workflow models and tasks

  • Create new models and tasks based on existing configurations
  • Apply changes to support different workflow requirements over time
  • Hide a model to make it unavailable or to retire it
  • Rename models to replace an old model with a new one

2. Automatic updates for request items

3. Governance web service API

  • Initiate workflow requests via external sources or systems without logging into DRM (an illustrative sketch of such a call appears after this list)
  • Create, Retrieve, Update, Delete (CRUD) operations for building a request, validating/submitting it, and monitoring status
  • The DRM web service can be used to find existing nodes to include in request items

4. Database integration

  • Import from read-only tables and DB views
  • Write to external database tables
  • Query database objects using a schema and/or object filter
  • Select or manually add a database table/view to a connection
  • Improved configuration for external database connections, with validation of the database objects defined for the connection

5. External workflow

  • Workflow development kit integrates external workflows (includes external BPEL processes)

6. Patch 11.1.2.3.302

  • Stateful sessions for DRM web service 

i. Uses stateful sessions when performing multiple API operations together
ii. Improves performance by minimizing overhead of creating/terminating user sessions
iii. Allows for tighter code and better control when interacting with the DRM web service

7. Patch 11.1.2.3.304

  • Indexes for defined properties, which improves performance
  • Provides version deletion in the background, which improves responsiveness for other system operations during the delete process
  • Enhances password security
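
To make the governance web service capability (item 3 above) more concrete, here is an illustrative sketch of what initiating a workflow request from an external system might look like from the command line. Everything DRM-specific in it is an assumption: the host, endpoint path, XML namespace, operation name (createRequest), and payload element names are hypothetical placeholders, and the actual operation names, security policy, and request structure should be taken from the DRM Web Services API documentation and WSDL.

—————————————————————————————————

# Hypothetical sketch only: submit a new DRG workflow request via a SOAP call.
# The URL, namespace, operation, and element names below are placeholders,
# not the actual DRM governance WSDL.

cat > create_request.xml <<'EOF'
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:drm="http://example.com/drm-governance-placeholder">
  <soapenv:Body>
    <drm:createRequest>  <!-- hypothetical operation name -->
      <drm:workflowModel>AddCostCenter</drm:workflowModel>
      <drm:requestTitle>New cost center 1234</drm:requestTitle>
    </drm:createRequest>
  </soapenv:Body>
</soapenv:Envelope>
EOF

curl -X POST \
     -H "Content-Type: text/xml; charset=utf-8" \
     -H "SOAPAction: createRequest" \
     --data @create_request.xml \
     -u drm_user:password \
     "http://drm-host.example.com:9220/drm-governance-service"   # placeholder endpoint

—————————————————————————————————

The point is simply that a source system (an ERP, a ticketing tool, a scheduled script) can build and submit a request payload over HTTP without a user ever opening the DRM client; the retrieve, update, and status-monitoring operations listed above would follow the same pattern.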

Oracle has also provided additional Certifications for release 11.1.2.3.500, including:

• Windows 8
• IE 10
• Firefox 24 ESR
• Oracle DB 12c
• Excel 2013

Other release compatibility updates include Shared Services 11.1.2.3.500, EPM Architect 11.1.2.3.500, and Fusion Middleware (FMW) 11.1.1.7.

According to the Oracle DRM Product Management team, in the 11.1.2.4 release, we can expect further enhancements to DRG along with some added investment in integration with other EPM modules. Capabilities may include:

1. DRG enhancements

  • Conditional workflow stages that offer the ability to add conditional logic to the stages of a model
  • Load request items from file
  • Request auditing (the ability to track requests, models, and the users of requests, both completed and in-flight, or what we refer to as “the who, when, and what”)
  • Request attachments (attach documents to your workflow)

2. EPM integration

i. DRM (Dimension Master) to provide a target for metadata within FDM for managing dimensions outbound to EPM (GL->FDM->DRM->EPMA)
ii. DRM (Mapping Master) to chart mapping/sourcing in addition to dimension management for managing DRM outbound to FDMEE (DRM->FDMEE->HFM)

3. Enterprise readiness

  • Offer WCAG 2.0 accessibility support (VPAT)

4. DRM core functionality new features

  • Create hierarchies from orphans during import
  • Import from single section file
  • Assign validations and hierarchy groups during import
  • Provide hierarchy groups for exports
  • Offer substitution parameters for queries/compares/imports/exports
  • Provide dynamic export columns
  • Improve quoted string handling

In closing, there is plenty here to look forward to in the world of Oracle EPM (Hyperion) data integration and governance capabilities. With the addition of the DRG module, DRM is now a full-fledged, enterprise-strength master data management solution that includes both workflow and audit capabilities.

Author: Jason Sawicki, Performance Architects

 



Discovering the Power of Data Discovery Using Oracle Endeca Information Discovery (OEID): U.S. Food and Drug Administration (FDA) Use Case

July 23, 2014

What resources are available to organizations that want to use traditional transactional reporting and analysis capabilities (traditionally known as business intelligence or BI solutions), as well as data discovery functionality (research of trends, hunches, and intersections of events that fall outside of standard BI solution offerings)?  How can a non-technical business user act like a “data scientist” and navigate both structured and unstructured data?

A new breed of solutions offers both kinds of capabilities in one environment.  Performance Architects recently launched a practice focused on a leading solution in this arena, Oracle Endeca Information Discovery (OEID).  This blog post demonstrates OEID’s capabilities against a publicly available data set, the United States Food and Drug Administration’s (FDA’s) Manufacturer and User Facility Device Experience (MAUDE) database, to show how you might apply these capabilities within your organization.

OEID is part of the Oracle Business Analytics solution offerings and is designed specifically to synthesize structured and unstructured data in a rapid fashion using an intuitive and easy-to-navigate interface.  The tool comprises a core database called the Endeca Server; a data integration and enrichment tool called Clover (part of the total Oracle Integration Suite); and an exploration and analysis application called Endeca Studio.

As a first-time user of this technology, I had to wrap my head around the idea that this tool was not generating answers to predefined questions for me, but rather was allowing me to dream up any possible relationship and to investigate correlations.  My background is in engineering, finance, and business, and I would not consider myself a career data analyst, but Endeca gave me an unfiltered way to get to the “why” of specific events with just a few clicks.

Let me provide an example of how easy this is in OEID, using the FDA’s real-world data set. This body of data tracks device failures in the medical device industry, and stores this information in publicly available databases that combine structured record event information and less structured commentary and related content.

When we were navigating this data in Endeca Studio, one of our first questions was, “Is there a specific date range that aligns with summer vacations or winter weather where device failures are more likely to occur?”  Using a tag cloud component, we found a date range of roughly 30 days containing dates where failures were three times higher than average. We did not expect this result and therefore dug deeper into possible root causes.


After trying a variety of dimension combinations, we filtered on the generic name of the device that failed and the manufacturer name using side-by-side graphs.  This “visual comparative” between different dimensions helped show the specific combination that accounted for a majority of the adverse event records.


With a little more comparison between dimensions, we identified that one product code was the leading contributor to adverse events in this data, although the unstructured data represented in the text cloud did not offer much additional insight as to why this manufacturer’s product failures in this specific date range were more prevalent than others.


As the Endeca Server ingests more data, one can walk further along the “research-to-root cause” pathway.  At this point we were curious about this part number and performed a search on Google (another source of data that can be integrated with Endeca).  Unexpectedly, we had an “ah-ha!” moment when we found that, in the past few years, this product type was the subject of litigation tied to failure rates, with a price tag of nearly $1B in awarded damages. Used earlier, Endeca might have sped discovery of this device failure and the resulting corrective action, reduced the number of affected parties, and helped avert millions in costly lawsuits.

You can imagine that if you worked with Endeca at the FDA and wanted to protect U.S. consumers, the next step might be to work with additional data sets from this manufacturer to answer questions such as, “Was there a quality issue or recall for the product lots used in this date range?” or “Who handles the task of properly training doctors on the usage of this mesh versus other types?” or “Is there something unique about this date range tied to external factors such as weather, shipping, storage, or shelf life that contributed to failure rates of this device?”

You can also imagine that as the manufacturer, certain data sets or information feeds connected to Endeca may have indicated a growing trend that, if discovered at the right time, could have resulted in corrective action.  Endeca could also identify which data source, or combination of data sources, would properly predict correlations worth investigating further, or exclude products that were red herrings.  If a company researcher felt there might be a potential problem in the future, they could choose to create a BI report that tracks characteristics of the device, allowing management to make actionable decisions in a timely manner…possibly saving lives and preventing a lot of lawsuits!

This use case only scratches the surface of Endeca capabilities and functionality.  If you would like to learn more about how you can apply data discovery efforts in your organization, send an email to communications@performancearchitects.com and we would be happy to talk with you in more detail about this topic.

Author: Kale Schulte, Performance Architects



Enhanced Planning Validations: Part 1 – by Tyler Feddersen (featured on Cameron Lackpour’s blog)

July 22, 2014

Cameron Lackpour is a well-known veteran in the EPM community with specialties in Hyperion Essbase, Planning, ODI, Financial Reporting, and system automation. Our very own Tyler Feddersen was featured in one of Lackpour’s recent blog posts; a small snippet is below.

“Data form validations offered in Hyperion Planning 11.1.2+ were a nice step in the right direction for easily identifying data entry errors. Additionally, a lesser known functionality that was introduced simultaneously was the ability to hook up these data form validations with Process Management. Unfortunately, there are a few caveats. First off, there are very few of us out there using Process Management. Secondly, even if you do…I’ve found the validation processing to be less than stellar if there are too many validations.

Facing similar issues, a fellow co-worker of mine, Alex Rodriguez, introduced the initial idea to use business rules as an alternative, which turned out to be quite efficient. Since then, I’ve expanded on the idea to create a full validation solution. I have to say…I like the outcome. In fact, I’ve used it in multiple recent Planning solutions. The result does not involve anything like code manipulation but rather is an innovative take on pieces of functionality that are already available. Part 1 of this solution will go through the creation of a process to identify erroneous data entry.” 

To continue reading, please visit Cameron’s Blog For Essbase Hackers.

We want to thank Cameron for featuring some of our expertise.

Author: Melanie Mathews, Performance Architects



Outcomes from Oracle Developer Tools User Group (ODTUG) Kscope14: Business Intelligence Track Observations…Part Two!

July 16, 2014

This is a continuation of my review of the Oracle Developer Tools User Group (ODTUG)’s annual Kscope14 conference Business Intelligence track, which focuses mostly on Oracle Business Intelligence (OBI) and Oracle’s related analytic applications.  This post focuses specifically on my learnings around Oracle’s roadmap for BI solutions, new features in the Oracle Business Intelligence Mobile offering, and the Oracle Business Intelligence Cloud Service.

As I mentioned in my last article, the next major version of OBI will be 12c and will be introduced sometime in 2015.  Here is a look at some features I believe we can expect to see in that release; however, please remember that the Oracle team could change their mind and go in a different direction by the time that release goes public.  Neither I nor Performance Architects make any representations or warranties about the accuracy of this information.

Better and Smarter Visualizations

We can look forward to a wider range of visualizations with better quality overall.  One of the key pieces of functionality Oracle is looking to introduce for visualizations is the ability of the application to suggest and auto-generate reports based on the underlying data.  Some of you may know that similar functionality already exists in Exalytics, called the Presentation Suggestion Engine (PSE).  In Exalytics, OBI provides recommendations on the type of visualizations to use to best represent a data set.  So “new” is a relative term here, depending on how invested you are in the Oracle technology stack.

Useful Search Capabilities

I’m happy to hear that better, fully integrated search capabilities will be released within OBI.  You will be able to search the RPD, WebCatalog, and data.  My guess is that this will simply be Endeca running behind the scenes, but seamlessly as part of OBI, much like BI Publisher, which is actually a completely different tool from OBI.

Personal Data Set Integration

Oracle is feeling the pinch from companies like Tableau and Qlik and is therefore investing more heavily in end-user-driven “visual analysis.”  I believe Oracle wants to eliminate the distinction between database and personal data sets, providing end users with all the tools they need to accomplish their jobs.  This extends the concept of self-service we find within OBI in tools such as subject areas.  Oracle said that they are going to “fundamentally change the RPD by allowing it to extend itself to personal data sets.”  What that really means, and how it will look and work, remains a mystery at the moment, but my hunch is that they will leverage the same technology Oracle’s team created for RPD modeling in their cloud offering, which I will discuss in an upcoming blog post.

Advanced Analytics

As you may or may not know, OBI today provides the ability to use Oracle R technology to do advanced analysis.  What Oracle is looking to do is make advanced analytics integration more seamless.  An example is easily adding custom visualizations to OBI with a right-click action.  Today you can add custom visualizations, but the process involves deploying your code to WebLogic and then referencing it through an object like a Narrative to get your results (not something a business person can do).  Another aspect of this prospective functionality is to give the RPD the ability to extend itself by referencing analytical scripts that perform complicated calculations.

More to come in an upcoming blog post (Part Three of this series)…want to see Part One of this series?  Read here.

Author: John McGale, Performance Architects

 



How to Use Linux Grep to Search the OBIEE Web Catalog

July 11, 2014

Searching the Oracle Business Intelligence Enterprise Edition (OBIEE) Web Catalog is not a fun process.  You can use the Catalog Manager to perform the search, but I have found this time-consuming, impractical, and inconsistent.

For Windows, your options are very limited, so I usually use a third-party tool like Grep for Windows.  This helps simplify my script portfolio, so now I can maintain one set of scripts for Linux and Windows.

Take the case of searching for a subject area in the XML structure of an OBIEE Web Catalog file.  I want to look for “&quot;MySubjectArea&quot;” as this is very specific and occurs only once in each file.

The basic syntax is easy:

grep [options] "text I want to search" /file path I want to search/directory/directory/

So, here I’m looking for the CAPS subject area:

—————————————————————————————————

grep -H -r -m 1 "&quot;CAPS&quot;" /u02/app/obi/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog/hdwobidev

—————————————————————————————————

Grep immediately finds files and dumps the contents of the files onto the screen!  Yuck!

—————————————————————————————————

/u02/app/obi/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog/hdwobidev/root/shared/data+security+demo/capsdatafiltertest:<saw:report xmlns:saw="com.siebel.analytics.web/report/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlVersion="201201160" …… [more junk]

—————————————————————————————————

To fix this problem, we use the CUT command to truncate each output line at the first colon, leaving just the file path.

—————————————————————————————————

grep -H -r -m 1 "&quot;CAPS&quot;" /u02/app/obi/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog/hdwobidev | cut -d':' -f1
—————————————————————————————————

Now Grep just gives me the path and file name.  Perfect!

—————————————————————————————————

/u02/app/obi/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog/hdwobidev/root/shared/data+security+demo/capsdatafiltertest

/u02/app/obi/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog/hdwobidev/root/shared/data+security+demo/asset+status+change/prompt%3a+asset+status+change

/u02/app/obi/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog/hdwobidev/root/shared/data+security+demo/asset+status+change/asset+status+change+-+summary

Here are the definitions for the switches that were used:

Options for GREP:

  • Print the file name for each match: -H, --with-filename
  • Read all files under each directory, recursively; this is equivalent to the -d recurse option: -r, --recursive
  • Stop reading a file after NUM matching lines: -m NUM, --max-count=NUM

Options for CUT:

The CUT command removes or “cuts out” sections of each line of a file or files:

  • -d uses “DELIM” instead of a tab for the field delimiter
  • -f selects only these fields and also prints any line that contains no delimiter character, unless the -s option is specified
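
Putting the pieces together, the grep and cut pipeline can be wrapped in a small shell function so the long catalog path does not have to be retyped for every search.  This is a minimal sketch, assuming a bash shell; the default catalog directory below is simply the example path from this post and should be pointed at your own Presentation Services catalog root.

—————————————————————————————————

# Sketch of a reusable catalog search helper (bash).
# The default directory is illustrative; replace it with your own web catalog root.

catsearch() {
  local pattern="$1"
  local catalog_dir="${2:-/u02/app/obi/Middleware/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog/hdwobidev}"

  # -H prints the file name, -r recurses, -m 1 stops after the first match per file;
  # cut keeps everything before the first colon, i.e. just the path.
  grep -H -r -m 1 "$pattern" "$catalog_dir" | cut -d':' -f1
}

# Usage: list every catalog object that references the CAPS subject area
catsearch '&quot;CAPS&quot;'

—————————————————————————————————

Because the pattern is single-quoted, the &quot; entities are passed to grep exactly as they appear in the catalog XML.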

Happy searching!

Author: John McGale, Performance Architects



Outcomes from Oracle Developer Tools User Group (ODTUG) Kscope14: Business Intelligence Track Observations

July 2, 2014

Last week, I attended the Oracle Developer Tools User Group (ODTUG)’s annual Kscope14 conference.  For those of you not familiar with Kscope, it is one of the largest Oracle conferences in the country that is not run by Oracle. The conference content is divided into multiple tracks that span the Oracle application stack.  I attended mainly to participate in the Business Intelligence track, which focuses mostly on Oracle Business Intelligence (OBI) and Oracle’s related analytic applications.

The BI Symposium on Sunday kicked off the week for the BI track, and included a panel and multiple presentations from Oracle on the future direction of their BI products.  One of the sessions I found most interesting was the product strategy and roadmap keynote from Oracle’s Richard Tomlinson. The main themes he highlighted included:

  • Agility
  • Governed Self-Service
  • Mobile
  • Big Data

For those of you anxiously awaiting new product capabilities, I have good news from the Symposium, as well as from several events and sessions during the rest of the week: we can look forward to a new version of Oracle BI (Version 11.1.1.9) by the end of calendar year 2014.  Oracle BI Applications Version 11.1.1.8 was just released last week. The next version of OBI after 11.1.1.9 will be 12c, and will be released sometime in the next calendar year (remember Oracle’s “Safe Harbor” statements we always see at the beginning of their conference sessions…these dates may change!).

“12c” is an interesting departure from Oracle’s regular version number strategy for OBI.  This brings OBI product numbering in line with the naming conventions of other Oracle products such as Oracle Database and Enterprise Manager.  The “c” refers to a product’s “cloud ready” or “cloud enabled” capabilities.  Oracle’s business analytics product leads mentioned that in the 12c version and beyond, all new features will be released on the cloud first and then will be made available to the rest of the user base.  Considering this could upset the existing user base, we will see if Oracle actually moves forward with this strategy.

Look for my next blog entry soon on reflections about what I learned on new features in OBI 12c!

Also, this was the third year running that I presented at Kscope.  My session this year was entitled, “The Joys and Griefs of OBI Data Modeling.” For those of you who missed the conference, we will present the session content during a webinar on Wednesday, July 9th, 2014 at 12:30 PM EST; register here to participate.

Author: John McGale, Performance Architects



Outcomes from Oracle Developer Tools User Group (ODTUG) Kscope14: Put All Your Eggs in the Oracle Hyperion FDMEE Basket!

July 2, 2014

As an Oracle EPM (Hyperion) implementer, one of the biggest highlights for me at Oracle Developer Tools User Group (ODTUG)’s annual Kscope14 conference is the EPM Sunday Symposium, where product experts from Oracle provide roadmap information on future releases. Mobile technology and cloud computing owned the day at this year’s Symposium…however, Oracle’s product management team revealed another item that stole the show (well, at least for me).

“FDMEE to source EPM applications.”

It was a single bullet, on a single slide. As it was read, did attendees thunderously applaud? Did they dance in the aisles? Did they cry tears of joy? Sadly, there wasn’t even a whimper from the several hundred people attending. However, in my opinion, this simple capability is a major modification and a true game-changer in the way that we architect and utilize EPM data integration solutions. Here’s why…

Over the years, the EPM (Hyperion) footprint has evolved to include purpose-built application areas, including planning, budgeting, and forecasting solutions; financial close and reporting solutions; profitability and cost management solutions; and the EPM platform solutions that tie all of these areas together. These areas matured from tools addressing specific business processes to integrated enterprise solutions. Although these applications evolved over time to share dimensions, security, and a reporting foundation, directly integrating data among them has been somewhat difficult. Data integration methods have varied depending on the requirements and needs of the customer.

Methods of moving data amongst EPM applications today are actually plentiful, but each comes with a series of pros and cons. Current methods include:

  • XREF functions

This is a very efficient method of transferring data amongst Essbase applications. This method also puts the power of the transfer with the user, and can be set up to run in real time if executed when a Planning input form is saved. Unfortunately, this method is only usable for Essbase applications (built using Planning and/or Essbase), and is also not scalable enough to transfer large sets of data. In addition, this method often results in “CREATEBLOCK”-type issues in the target if using the XREF function, and it does not provide significant transformation capabilities; it is really designed simply to transfer data.

  • Report Scripts / Calc Exports / Level0 Dumps 

Assuming you can find a dusty old implementer like me who actually knows how to write report scripts, this, like the other export methods here, has always been a viable technique. However, performance has never been great with some of these export options.

This method consists of dumping data out to a file location, and picking it up and loading it to another application. There is nowhere to transform the data; and if any transformation is needed, it must be accomplished through an outside process (usually SQL in nature), leaving the confines of the EPM solution. This method doesn’t provide near-real-time capabilities; must be maintained by IT personnel; has many moving parts; and can also result in data security risks.

  • Oracle Data Integrator (ODI) 

Oracle Data Integrator (ODI) does the job well. It is a true data integration tool that can use almost anything as a source or target. While very powerful, this tool cannot be easily used by a finance team or end user to maintain mappings and transformation logic. The IT team typically maintains and manages this solution, and ODI is often used for enterprise-wide data integration. Purchasing ODI as a data integration tool just for EPM is a bit unusual, unless you have a large EPM footprint with significant data movement requirements.

  • Hyperion EPM Architect Data Sync

The latest offering to solve the data integration riddle among EPM applications is a data synchronization feature that is part of Hyperion EPM Architect (EPMA) functionality. This can be automated and is easily maintained by a finance-oriented EPM administrator. However, data transformation capabilities are limited in this solution. This protocol provides for simple mappings only, and relies on the members from the source and target to be similar in nature. In addition, this feature can only be used for EPMA-created applications, and is not available for applications built in EPM Classic.

So…let’s get back to the new FDMEE capabilities. This product was first provided as part of the Oracle EPM (Hyperion) Version 11.1.2.3 release. It is a combination of the previous FDM and ERPi solutions (both no longer available), and utilizes ODI behind the scenes. It takes the best parts of each of these components, and puts them into a single data integration solution. FDMEE’s only drawback has been that it can only source data from ERP-type systems or data extracts. The announcement at Kscope14 that FDMEE will now allow the user to source data from other EPM applications closes the loop and makes this the go-forward option for data integration among EPM applications for the following reasons:

  • Graphical, easy-to-use data integration tool that can be maintained by finance departments and/or EPM system administrators
  • Ability to execute complex mapping logic and data transformation
  • Can provide drill through capabilities to source systems
  • Ability to automate and schedule data integration routines

And coming soon…

  • In addition to sourcing from ERP systems and other extracted data, it can source data from other EPM applications, and transform this data into target EPM applications

The combination of this functionality makes this the true EPM data integration tool of the future, providing seamless data integration among all EPM applications and application types. So, in the world of EPM data integration…put all your eggs in the FDMEE basket!

Author: Chuck Persky, Performance Architects


© Performance Architects, Inc. and Performance Architects Blog, 2006 - present. Unauthorized use and/or duplication of this material without express and written permission from this blog's author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Performance Architects, Inc. and Performance Architects Blog with appropriate and specific direction to the original content.