Monthly Archives: August 2014

Demystifying Oracle’s Data Integration Offerings: DRM, EPMA, FDMEE, GoldenGate, & ODI

August 27, 2014

With all of the current data integration offerings available from Oracle, it’s important to understand the strengths of each, what they accomplish, and how they might fit into your existing technology landscape. Read on to learn a few facts about each of these tools.

Data Relationship Management (DRM) is Oracle’s “enterprise-wide” analytic master data management utility used to maintain master data (dimensions and hierarchies) along with associated properties, attributes, and reference data. DRM provides a single point of hierarchy maintenance across the enterprise, which reduces maintenance effort, keeps hierarchies across systems in sync, and eliminates the excessive report reconciliation associated with master data issues. DRM also provides full audit capabilities for all hierarchy changes, and with the inclusion of the new Data Relationship Governance (DRG) offering, it now offers a fully capable internal change control and workflow process.

DRM can serve as a chart master where it functions as a true source for maintaining a Chart of Accounts (COA) or as a pass-through where it is used to maintain and simplify complex hierarchy maintenance activity. This solution excels at rapid change scenarios that commonly accompany merger and acquisition activities. DRM offers robust validation, query, transformation, and export capabilities across target systems in various technology platforms and for various purposes, e.g., ledger systems, relational data stores, planning, reporting, or analytic systems. With workflow and security in place, DRM allows business users to maintain and own their hierarchies.
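To make the idea of maintaining a hierarchy once and exporting it to many targets more concrete, here is a minimal, hypothetical sketch in Python (not DRM’s actual export engine or API); the account hierarchy and both export layouts are illustrative assumptions only.

```python
# Illustrative sketch only: DRM itself handles this through its export engine.
# One maintained parent/child hierarchy is reshaped for two hypothetical
# targets: a flat parent/child ledger feed and a top-down ancestry layout
# for an analytic target.

# A single maintained hierarchy: (child, parent); parent None = top node
ACCOUNTS = [
    ("Net Income", None),
    ("Revenue", "Net Income"),
    ("Expenses", "Net Income"),
    ("Product Revenue", "Revenue"),
    ("Service Revenue", "Revenue"),
]

def parent_child_export(rows):
    """Flat parent/child feed, e.g. for a ledger or relational data store."""
    return [(child, parent or "") for child, parent in rows]

def ancestry_export(rows):
    """Top-down ancestry paths, e.g. for a level-based analytic target."""
    parents = {child: parent for child, parent in rows}
    paths = []
    for child in parents:
        path, node = [], child
        while node is not None:
            path.append(node)
            node = parents.get(node)
        paths.append(list(reversed(path)))  # root ... leaf
    return paths

if __name__ == "__main__":
    print(parent_child_export(ACCOUNTS))
    for p in ancestry_export(ACCOUNTS):
        print(" > ".join(p))
```

The point of the sketch is simply that one governed hierarchy can feed multiple target formats, which is the value DRM delivers at enterprise scale.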


Enterprise Performance Management Architect (EPMA) is an Oracle EPM (Hyperion)-focused utility used to maintain master data and dimensions for the suite of Hyperion tools (Hyperion Financial Management or HFM, Planning, Essbase, etc.). EPMA promotes hierarchy consistency and is geared toward finance power users, who access it via Hyperion Workspace. It uses a dimension and application library to manage the Hyperion environment and supports synchronization and shared dimension usage between applications. EPMA also provides a plugin for connecting to DRM to receive master data. The key benefit of EPMA is that it is native to Hyperion, with dimension management and reporting consistency across the Hyperion suite. EPMA is not intended as an enterprise-wide master data solution. This view of the application demonstrates how easy EPMA is to use in the Hyperion environment:

[Image: EPMA in the Hyperion Workspace environment]

Since we get a lot of questions about the differences between EPMA and DRM, I’ve provided a chart that compares functionality between the two applications:

[Image: functionality comparison chart, EPMA vs. DRM]

Financial Data Quality Management Enterprise Edition (FDMEE) is a data integration tool that helps transform data and metadata from a variety of sources into a consumable format for Hyperion products like HFM, Planning, and Essbase. It is the successor to the prior FDM product, combining the former FDM capability with ERPi in a 64-bit architecture. FDMEE allows you to apply mapping logic through a user-friendly interface, and manages the data load process via a web console in Hyperion Workspace. The 64-bit architecture adds scalability and stability, and with ERPi capability baked into FDMEE, connecting to source systems such as EBS and PeopleSoft is possible.
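To give a feel for what the mapping logic described above does conceptually, here is a small, hypothetical Python sketch; the mapping rules, account codes, wildcard convention, and target member names are assumptions for illustration and do not represent FDMEE’s actual mapping engine.

```python
# Hypothetical sketch of dimension mapping, in the spirit of the mapping step
# FDMEE performs: source account codes are translated to target (e.g., HFM or
# Planning) members via explicit and wildcard-style rules.

# (source pattern, target member); "*" acts as a trailing wildcard here
ACCOUNT_MAP = [
    ("4000", "Revenue_Product"),
    ("4100", "Revenue_Service"),
    ("5*",   "Operating_Expenses"),   # any 5xxx account
]

def map_account(source_account: str) -> str:
    """Return the target member for a source account, or raise if unmapped."""
    for pattern, target in ACCOUNT_MAP:
        if pattern.endswith("*"):
            if source_account.startswith(pattern[:-1]):
                return target
        elif source_account == pattern:
            return target
    raise ValueError(f"Unmapped account: {source_account}")

source_rows = [("4000", 1200.0), ("5010", -350.0)]
target_rows = [(map_account(acct), amount) for acct, amount in source_rows]
print(target_rows)  # [('Revenue_Product', 1200.0), ('Operating_Expenses', -350.0)]
```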

Another benefit of including ERPi capability within FDMEE is the elimination of an integration point, as data can now move directly from an ERP source into FDMEE. On the navigation front, the tool now sits in Hyperion Workspace, so users don’t have to navigate between ERPi in Workspace and FDM on a separate webpage when making changes or running a job. Scheduling of jobs is also internal to FDMEE with a built-in automation process. FDMEE lets you source metadata from a source system and load it into target applications such as Planning; this can be automated and results in less maintenance for administrators. One of the highlights of FDMEE continues to be its drill-back capability down to the detail record level of the source system. For those interested in integrating multiple source systems into Hyperion, FDMEE is a strong consideration. Here’s an example of the FDMEE interface:

[Image: FDMEE interface]

Instead of a traditional Extract, Transform, and Load (ETL) methodology for interfacing and transformation, Oracle Data Integrator (ODI) utilizes an “ELT” approach in which the traditional transformation step happens last: data is transformed in the target database rather than prior to loading. This is a rethinking of traditional ETL for efficiency of code and data integration.
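As a minimal sketch of the ELT idea, the following example uses Python with SQLite standing in for the target database: raw rows are loaded into a staging table first, and the transformation then runs inside the target as SQL. The table and column names are assumptions, and this illustrates the pattern rather than ODI itself.

```python
# Conceptual ELT sketch (not ODI): raw rows are first landed in a staging
# table inside the target database, and the transformation is then applied
# there with SQL, rather than transforming before the load as in classic ETL.

import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for the target database
cur = conn.cursor()

# 1. Extract + Load: land source rows untouched in a staging (temp) table
cur.execute("CREATE TEMP TABLE stg_sales (region TEXT, amount TEXT)")
source_rows = [("east", "100.50"), ("west", "200.00"), ("east", "50.25")]
cur.executemany("INSERT INTO stg_sales VALUES (?, ?)", source_rows)

# 2. Transform inside the target: casting and aggregation happen in-database
cur.execute("CREATE TABLE fact_sales (region TEXT, total_amount REAL)")
cur.execute("""
    INSERT INTO fact_sales (region, total_amount)
    SELECT region, SUM(CAST(amount AS REAL))
    FROM stg_sales
    GROUP BY region
""")
conn.commit()

print(cur.execute("SELECT * FROM fact_sales ORDER BY region").fetchall())
# [('east', 150.75), ('west', 200.0)]
```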

Similar to the older HAL and DIM tools, ODI is workflow-based and uses an interface-driven method of creating ELT logic rather than relying on coding. ODI focuses on business rule-based logic that is broken down into process steps such as mappings, filters, and constraints that need to be set up during implementation. Like HAL or DIM, ODI also uses adapters for connecting to data sources; interestingly, it includes a Hadoop adapter for connecting to non-traditional data sources. ODI leverages the target database to maintain a staging area with temporary tables, which is where the process step code is applied. ODI is an IT-driven tool and provides a developer-friendly interface for data transformation:

[Image: ODI developer interface]

The GoldenGate technology was acquired by Oracle in 2009 and is designed to support real-time data capture and transformation. Oracle and GoldenGate state that this technology is “the fastest and most scalable real-time data integration across heterogeneous systems” and that it is “the most scalable heterogeneous E-LT data integration”. The solution also keeps systems available even during migration and upgrade maintenance windows, and provides a modular, scalable framework that satisfies availability needs. This is valuable to customers in situations where critical systems must remain available at all times.

In my opinion, the real benefit of GoldenGate is that it meets business requirements in situations where high availability and data capture are needed without interruption. It accomplishes this in real time by providing continuous availability and synchronization of database transactions across the enterprise. GoldenGate also supports data backup and recovery in real time. In general, GoldenGate eliminates downtime and provides constant access to mission-critical database systems. This diagram from Oracle demonstrates how the architecture is separated into Capture, Trail Files, and Delivery so that each task can function independently and efficiency is optimized:

[Diagram: Oracle GoldenGate architecture separated into Capture, Trail Files, and Delivery]
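To illustrate why separating Capture, Trail Files, and Delivery is useful, here is a small, hypothetical Python sketch of the general pattern: a capture step appends changes to a trail file, and an independent delivery step reads the trail and applies the changes to a target. The file format, operations, and function names are assumptions and do not reflect GoldenGate’s actual implementation.

```python
# Conceptual sketch of capture -> trail file -> delivery (not GoldenGate's
# real mechanics): capture appends committed changes to a durable trail, and
# delivery reads the trail independently and applies changes to a target, so
# either side can pause or restart without blocking the other.

import json, os, tempfile

trail_path = os.path.join(tempfile.gettempdir(), "demo_trail.jsonl")  # hypothetical trail

def capture(changes):
    """Append committed source changes to the trail file."""
    with open(trail_path, "a") as trail:
        for change in changes:
            trail.write(json.dumps(change) + "\n")

def deliver(target):
    """Read the trail and apply each change to the target store."""
    with open(trail_path) as trail:
        for line in trail:
            change = json.loads(line)
            if change["op"] == "insert":
                target[change["key"]] = change["value"]
            elif change["op"] == "delete":
                target.pop(change["key"], None)

if __name__ == "__main__":
    open(trail_path, "w").close()  # start with an empty trail
    capture([{"op": "insert", "key": "cust-1", "value": {"name": "Acme"}},
             {"op": "delete", "key": "cust-9"}])
    replica = {}
    deliver(replica)  # delivery runs independently of capture
    print(replica)    # {'cust-1': {'name': 'Acme'}}
```

Because the delivery step only depends on the trail, maintenance on either the source or the target does not have to interrupt the other side, which is the availability benefit described above.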

We have additional content on our site regarding the topics discussed – please note that a Learning Center account is required in order to download content. For more information, please visit the links below:

DRM:
http://www.performancearchitects.com/lcpreview?showItem=201
http://www.performancearchitects.com/lcpreview?showItem=156
http://www.performancearchitects.com/lcpreview?showItem=167

FDM:
http://www.performancearchitects.com/lcpreview?showItem=296
http://www.performancearchitects.com/lcpreview?showItem=365

EPMA:
http://www.performancearchitects.com/lcpreview?showItem=194
http://www.performancearchitects.com/lcpreview?showItem=354
http://www.performancearchitects.com/lcpreview?showItem=228

ODI:
http://www.performancearchitects.com/lcpreview?showItem=145
http://www.performancearchitects.com/lcpreview?showItem=147

Author: Jason Sawicki, Performance Architects



Enhanced Planning Validations: Part 2 – by Tyler Feddersen (featured on Cameron Lackpour’s blog)

August 21, 2014

Last month, Tyler Feddersen was featured in a blog post by Cameron Lackpour, a well-known veteran in the EPM community with specialties in Hyperion Essbase, Planning, ODI, Financial Reporting, and system automation.

Tyler is featured again in the closing segment of this two-part series on Planning validations – a small snippet is below.

“If you recall from Part 1, I created two validation members: AllocValidate and Validate. AllocValidate was created in the dense Accounts dimension while the Validate member was created within a sparse dimension, to give us block suppression capabilities. For this portion of the validation process, I created an additional Accounts member, LockFlag. This new member will be used in coordination with the previously created member, AllocValidate, to create a locking “flag” that all business rules can use to decide whether the rule should continue to process or not.

Additionally, I added a “NO_XXXX” for each dimension. The flag only needs to be stored at the Entity level, so each dimension outside of Accounts, Period, Scenario, Version, Year (although, I use No Year), and the dimension containing “Validate” will need the “NO_XXXX” member.”

To continue reading, please visit Cameron’s Blog For Essbase Hackers.

We want to thank Cameron again for featuring some of our expertise.

Author: Melanie Mathews, Performance Architects



Data Discovery and Business Intelligence: Are They The Same Thing?

August 6, 2014

Using a sports metaphor, business intelligence and data discovery are akin to blocking and tackling in American football. You need to be able to block and tackle well to stay in the game – these are the bare minimum requirements to avoid getting blown away by the competition. One team that blocked and tackled well, not only on the field but also off it, was the New England Patriots, and this helped them win several Super Bowls in the last decade.

The Patriots organization mined data to help them prepare for games against the opposition. They used disparate sets of data: stats on players and competitors; statistical probabilities of various game outcomes; weather impact on players and teams; and reviews and feedback on social media when scouting for talent. In a nutshell, they used a plethora of structured and unstructured data and connected seemingly disconnected data sets to make decisions.

You are probably wondering, “Are business intelligence and data discovery the same thing?” The right answer is that they are two sides of the same coin: data discovery solutions focus on discovering correlations amongst disparate elements, while BI solutions offer a platform to report on and analyze causal patterns in data. Business intelligence and data discovery have the same goal, however – to help end users make better decisions. To understand the two in a little more detail, let’s now look at some of the specifics of each concept.

Business intelligence is primarily a way of analyzing the transactional (historical) data of an organization through data mining and online analytical processing. Data discovery is an extension of data mining with the intent to discover data patterns. Business intelligence focuses on identifying the data, extracting the data, and validating the data (i.e., finding causation in the data). Data discovery focuses on finding correlations in any available data set without the need to validate it via IT. Data discovery allows you to explore the data and discover new questions that you did not think of, or to discover answers to questions that you may not have asked.

In today’s world, there has been a tremendous increase in the volume and types of data. Add to this the proliferation of social networks, and businesses now need to track the pulse of their markets from new angles as well. Data does not come from within the supply chain alone (products, customers, suppliers, etc.). Data discovery efforts allow businesses to see correlations in various types of data outside transactional supply chain data sources.

Data discovery is a more interactive and iterative process than the analysis and reporting performed in a business intelligence platform. There is no need to start with a question during the analysis/discovery phase, to source the data, and then to report and summarize the data like in a typical BI solution. With data discovery solutions, you can instead use any source of data, immerse yourself in an iterative discovery process, and arrive at new questions or answers.

With traditional BI applications, data governance processes typically ensure the cleanliness of the data. With data discovery, business users can discover patterns in any data source, even if the data lives on their local desktop as a text or an Excel file. Data found in obscure sources has historically been mined sparingly for a myriad of reasons; it usually gets lost and is never revisited. Data discovery allows users to mine this data and detect patterns without relying heavily on IT resources. However, this also poses a challenge for data discovery applications, as the data has to be self-governed.
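As a simple illustration of what finding correlations in an ungoverned, local data set might look like, here is a short, hypothetical sketch using Python and pandas; the file name and its columns are assumptions, and it is not meant to represent OEID or any particular data discovery product.

```python
# Hypothetical sketch: a business user explores a local file for correlations
# without a governed BI pipeline. The file name and columns are assumptions
# for illustration only.

import pandas as pd

# e.g., an ad-hoc extract someone saved to their desktop
df = pd.read_csv("ticket_sales.csv")  # hypothetical local file

# Pairwise correlations across all numeric columns, strongest first
corr = df.corr(numeric_only=True)
pairs = corr.stack().rename("correlation").reset_index()
pairs = pairs[pairs["level_0"] < pairs["level_1"]]  # drop self/duplicate pairs
print(pairs.sort_values("correlation", key=abs, ascending=False).head(10))
```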

Failures of business intelligence efforts, due to reasons like heavy reliance on IT and capital- and resource-intensive projects (to name a few), led to the entry of data discovery into the business intelligence landscape. Data discovery is not a new concept; it has existed for some time, but it historically required a lot of technical expertise from the individual along with a heavy reliance on technology solutions. Those who could successfully mine data and discover patterns did so sparingly, because business needs kept changing and institutionalizing the process was an arduous task. Discovering data patterns quickly and easily, without a heavy reliance on technology, became the need of the hour. Connecting the dots and simplifying analytics in a disconnected world is a prime requisite for most successful organizations today, and a product like Oracle’s Endeca Information Discovery (OEID) fits these data discovery needs.

Data discovery empowers all end users – not just the data/information hoarders. Business people like the fact that they do not have to rely on IT teams to find data correlations, so they can make faster business decisions. With a data discovery solution, business users can focus more on analyzing the data and building a story out of the data available. This enables a quicker turnaround when analyzing near real-time data than traditional BI solutions provide. Business people typically have short memories – so it’s great to create a story around data that is not only relevant but also timely.

One thing that will probably never change is that business decision makers will never be satisfied with the systems and data in any BI or data discovery solution.  Change is the only constant. Therefore, data discovery will not replace BI. The two concepts will coexist at least into the foreseeable future to satisfy the ever-growing data needs of a business community that increasingly favors self-service and faster decision-making cycles.

Author: Sreekanth Kumar, Performance Architects

© Performance Architects, Inc. and Performance Architects Blog, 2006 - present. Unauthorized use and/or duplication of this material without express and written permission from this blog's author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Performance Architects, Inc. and Performance Architects Blog with appropriate and specific direction to the original content.