Monthly Archives: July 2015

Oracle BI Cloud Service (BICS): Data Sync Utility Overview

July 29, 2015

Author: Michael Bender, Performance Architects

Late last year, Oracle released its Software-as-a-Service (SaaS) business intelligence offering, called Business Intelligence Cloud Service, or BICS. One of the requirements for using this tool is that all relational data must first be uploaded to a cloud version of the Oracle database.


The next obvious question is, “What are the ways in which I can upload data to my cloud Oracle database?” On initial release, BICS offered a few options such as:

  1. Data Loader: An Oracle Application Express (APEX) application that allows for uploading data via a flat-file export from a relational database. It is limited in scope, however, and does not allow automation.
  2. SQL Developer Carts: An interactive method of creating a cloud connection within SQL Developer. Automation and scheduling are more cumbersome for novice users.
  3. REST API: An advanced method that requires API programming knowledge.

More recently, Oracle introduced a new tool for uploading data to BICS: a client utility called “Data Sync.” Data Sync is a free utility that supports loading on-premises data from both relational databases and flat-file sources, and can be downloaded from the Oracle Technology Network (OTN).

Installation is a simple unzip operation; there is no installer to run. For security reasons, the tool should be downloaded to a server environment, and it is supported on both Windows and Linux/Unix platforms. A few quick notes on prerequisites for using Data Sync:

  • Java Development Kit (JDK) 1.7 or later is required – a JRE alone will not suffice
  • Prior to running the utility, update the config.bat file to point to your JDK (a minimal sketch follows this list)
  • Databases supported:
    • Oracle
    • MS SQL Server
    • DB2
    • Teradata
    • MySQL
    • Oracle TimesTen
  • JDBC Drivers: Data Sync uses JDBC drivers to extract data from source databases. For databases other than Oracle, it may be necessary to copy the .jar drivers from the source database install to the \lib directory in the Data Sync folder
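
As a minimal sketch of the JDK and JDBC driver setup on Windows (the paths, JDK version, and driver file name below are examples only, and the JAVA_HOME variable name should be verified against the config.bat shipped with your download):

  rem In the Data Sync folder, edit config.bat so it points to a full JDK (a JRE is not enough)
  set JAVA_HOME=D:\Java\jdk1.7.0_79

  rem For non-Oracle sources, copy the source database's JDBC driver .jar into Data Sync's \lib directory
  rem (example shown for Microsoft SQL Server)
  copy "C:\sqljdbc\sqljdbc4.jar" "D:\datasync\lib\"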

The first time Data Sync starts, it prompts for environment configuration, a repository name, and a login password. From there, the utility has three main components:

  1. Connections: Defining source databases / files and target (BICS) databases.
  2. Project: Defining source and target database tables.
  3. Jobs: Defining groups of tables to upload.

[Screenshot: Connections, source definition]

[Screenshot: Connections, target (BICS) definition]

[Screenshot: Project, database tables and load strategy definition]

[Screenshot: Job, scheduling capabilities]

In addition to standard logging, another useful feature of the tool is email notification of job success or failure.


For more advanced functionality, such as integration with upstream or downstream ETL processes, Data Sync also includes a command-line option (a rough sketch of calling it appears after the list below). The most recent version of Data Sync is v1.1, which contains a few enhancements:

  • Automatic retry: In the event a connection fails during a data upload job, Data Sync will retry up to 10 times. In order to support this, three additional columns are added to each target table to accommodate the reload process. In the case of failed data loads, the utility will automatically clean up any partially loaded records.
  • Additional delimiter support for file-based data sources (pipe and custom single-character delimiters).
  • Support for table and column names using mixed case and Oracle reserved words.
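
As a rough illustration of how an upstream or downstream ETL process might call Data Sync, a scheduler could run a small batch wrapper along these lines. The script name, job name, and paths are placeholders rather than details from the original post; check the Data Sync documentation for the exact command-line syntax in your version:

  @echo off
  rem Hypothetical wrapper an ETL scheduler could call once its own load completes.
  rem The Data Sync home and command-line script name are assumptions; verify them in your install.
  set DATASYNC_HOME=D:\datasync
  cd /d %DATASYNC_HOME%

  rem Kick off the Data Sync job defined in the client (job name is a placeholder)
  call datasyncCmdline.bat BICS_NIGHTLY_LOAD

  if errorlevel 1 (
    echo Data Sync job failed - see the Data Sync logs for details
    exit /b 1
  )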

In our experience at Performance Architects, Data Sync is a lightweight, reliable, and easy-to-use tool for batching and scheduling data loads into a BICS environment. Contact the Performance Architects team at sales@performancearchitects.com for more info.



Updating Data Forms in Oracle Hyperion Planning with the FormDefUtil.cmd Utility

July 21, 2015

Author: Ben Snawder, Performance Architects

Making changes to a single data form in Oracle Hyperion Workspace (Workspace) is easy. When you have to make changes to ten data forms, the process starts to become tedious. What if you need to make changes to fifty or even one hundred data forms? That’s essentially torture, especially if you’re using version 11.1.1.3.0 of Oracle Hyperion Planning or earlier.

To make this process easier, the FormDefUtil utility allows you to export data forms to XML files and make the changes in a text editor (Notepad++ is recommended). In this way, you can update any number of data forms and then import them back into your application in a fraction of the time it would take to make the changes manually in Workspace.

Let’s look at a simple example:

In the example data form, we have three months of actuals and eleven months of forecasting. Let’s say you want to expand this form to forecast twelve additional months. Instead of editing the form manually to add these columns, you can export the data form into an XML file using the FormDefUtil utility, make the changes you need, and import the form back into Planning.


In most installations, the FormDefUtil utility is located, along with the other Planning utilities, in D:\Hyperion\products\planning\bin.

To launch the FormDefUtil utility, open a command prompt and change directories to the folder where the utility is located.


Entering “FormDefUtil.cmd ?” will provide you with the syntax needed to run the utility.

[-f:passwordFile] can be used if you have an encrypted password file set up. If you have one, enter the full file path and name in place of “passwordFile”; if you don’t, omit this parameter.

After indicating whether you want to import or export, you can specify the name of the form of interest, or you can export all data forms in the application.

Once you provide the server, username, and application name, the utility will prompt you for your password.
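
For example, assuming a server named localhost, a user named admin, and an application named PLANAPP (all placeholders), exporting every data form and later importing a single edited form might look like the following; the exact argument order can be confirmed against the output of “FormDefUtil.cmd ?”:

  rem Export all data forms from the PLANAPP application to XML files in the current folder
  FormDefUtil.cmd export -all localhost admin PLANAPP

  rem Import a single edited form definition back into the application
  FormDefUtil.cmd import MyForm.xml localhost admin PLANAPP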

After the utility runs, the XML file will appear in the same folder where the FormDefUtil utility is located.

Data forms in their XML state can seem unintelligible at first glance, but after taking some time to notice the patterns, it becomes easy to see where columns and rows are encoded.


A new column can be made by simply copying an existing column, pasting it after the last column in the form, and changing the substitution variables for the year and month accordingly.


This process can be repeated until you have the number of new columns that you need.


Now when you run the FormDefUtil utility again, this time choosing to import, the data form will reflect the changes that were made.


This utility comes in extremely handy if, for example, you need to add an additional year of forecasting to all 150 data forms in a Planning application.

Using this method, you can create the new columns in one form, copy and paste them into all the others, import the forms back into Planning, and be done in minutes rather than hours.
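
As a rough sketch (again using placeholder server, user, and application names), a small batch file run from the bin directory could import every edited XML file in one pass rather than importing each form by hand:

  @echo off
  rem Import each edited data form XML in the current folder back into the PLANAPP application.
  rem Verify the argument order against the output of FormDefUtil.cmd ? before running.
  for %%f in (*.xml) do call FormDefUtil.cmd import "%%f" localhost admin PLANAPP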



Business Intelligence and Institutional Research: A Shotgun Wedding, or Marriage Made in Heaven?

July 8, 2015

By Hank Childers, University of Arizona

At The University of Arizona we decided to combine our Business Intelligence (BI) team (which includes ETL, Data Warehouse, and Reporting) with our Institutional Research (IR) team (which includes Ad Hoc Requests along with External Reporting).  This made good sense to us, and we chose to act on it.  But it isn’t all that common, and that made me curious to better understand why not.  So I set out to interview a number of other individuals and institutions to see what their thinking was.  I presented the findings at the Higher Education Data Warehousing (HEDW) 2015 Conference in Normal, Illinois, in a session with the title:  “BI+IR: Shotgun Wedding, or Marriage Made in Heaven?”  These slides are available to HEDW members on the HEDW web site.

In the interviews, I started off by asking this question:

If your institution set out to combine IR and BI, how would you characterize it?

  • Marriage made in heaven?
  • Arranged marriage?
  • Shotgun wedding?
  • Las Vegas wedding?

Results spread across all the categories, with “arranged marriage” the most common response.  But the actual comments were especially interesting.


While there was certainly not a uniform reaction, the question did seem to touch a nerve.  I started to feel less like a researcher and more like a journalist…looking for the story.  And I think there is one, with three parts to it.

  1. Why combine BI and IR?
  2. What makes it hard?
  3. How can we approach it?

Why combine BI and IR?

Most agree that IR and BI need to work cooperatively when necessary, but also see their missions as distinct.  In a typical situation, IR focuses on external reporting of official numbers, especially student- and faculty-related, while BI focuses on operational and management-level reporting of ERP systems data, including research, employee and financial data.  Different data.  Different users.  Different uses.  But these differences blur when seen by those who want to make expanded use of data in decision-making.  A dean, for example, needs information of all kinds, and that information needs to be integrated.  The dean rightly asks:  “Where should I go to get information?  What should I do when the results are different?”  To the dean, the distinction between BI and IR is less and less clear, and in any case not that interesting.

One question of interest is where financial data comes into play.  In the majority of institutions the IR function is organizationally located with or close to the Provost.  However, in a significant number of cases, the IR function is located within the Planning & Budgeting function.  Thus there is not a consistent pattern in where the financial data strength is housed.  Sometimes it’s in IR and sometimes it’s in BI.  But in either case, the need for integration of financial data with other data drives us to bring these groups into significant alignment.

Secondly, the pressures to move towards a culture of data-informed decision-making, and the general pressures on the system of higher education, are enormous.  If we in IR and BI are not directly involved in wrestling with these issues, then we will be decisively less relevant to our institutions.  There are pressures as well to greatly expand our use of predictive analytics, most often around student success.  This requires integration of data that is typically spread across multiple systems, and in many cases it is data that changes from day to day.  This requires both BI skills and IR skills, working together.

In addition, there are the seemingly never-ending pressures to do more with less.  We need to streamline the routine work whenever we can in order to wring out some capacity to do the value-added non-routine work.  This, too, requires the skills of IR and BI, working together.

These pressures are substantial, but they also represent opportunity.  Institutions generally have not yet figured out how to do all this.  Maybe we can, and that’s an exciting prospect.

What makes it hard?

BI and IR both work with data, and in many cases, pretty much the same data.  Shouldn’t this bring the two functions together?  Perhaps it’s a little like the statement that England and America are “two nations divided by a common language.”[1]  They both work with data, but historically they look at it very differently.  I see current forces pushing them together, but in the 18 interviews I conducted I found there are real differences between them, including their perceptions of each other.  Generally speaking they are skeptical of each other’s methods and motives.  And even in those situations where the groups are working well together now, it hasn’t been easy to get there.

My hypothesis was that these differences are in part due to structural factors, such as assigned responsibility, location within the organization, etc., and in part due to cultural factors, such as background, values, etc.  The specifics of a current situation, including organizational history and the people themselves, are likely predominant in any given instance.  However, some pretty clear patterns also emerged.

Structurally, these two functions report up differently in their institutions.  While their responsibilities are now starting to overlap significantly, this has not been the case in the past.  They have tended to work with different data for different purposes that serve different constituencies.  Also, notably, IR is a much older and more established function.  As more than one IR person put it, “We were doing BI before there was BI!”

The structural differences are pretty significant, but the cultural ones may be even more telling.  This gets into the areas of identity and values, and can lead to conflict that is harder to root out.  For example, the two functions’ educational backgrounds and levels differ significantly.  IR staff very commonly hold master’s degrees and PhDs.  They also have different career backgrounds and career prospects.  BI has opportunities in the private sector, for example.  BI seeks to make data available to broad groups of people and may not always understand the meaning of the data.  IR is very sensitive to how data are interpreted or misinterpreted, and past experience has taught them to be cautious.  IR people say that the “data are useful” while BI people say that the “data is useful” (both are correct, by the way, but be aware of what kind of bar you’ve walked into before ordering your data drink!).  BI people think that IR people manipulate data, while IR people know that official reporting requires conformance to official reporting definitional rules, which transactional systems may not always honor.

It goes on.  I see discernible differences in about 20 different structural and cultural dimensions.  But the most important and certainly hopeful fact is that these differences all make sense when you step outside.  In higher education we greatly value diversity of all kinds, and here we see an opportunity to practice that value.

How can we approach collaboration?

I think there is no magic here.  This is hard work, but certainly not impossible.  A few institutions have taken fairly dramatic steps; these have clearly been very hard on the institutions, and nuanced understanding has likely been sacrificed.  Perhaps that’s the best way at certain times.  At Arizona we are taking a more evolutionary approach, and attempting to consciously move in the direction of bringing these groups together both organizationally and in terms of work practices and relationships.  We have a vision of where we want to be in three years, and the steps to get there are largely practical steps.

The foundational step is awareness.  It’s awareness of these structural and cultural differences, and the reasons for these differences.  But it’s also awareness of the forces at work that are pushing us to work together.  People can sometimes lose sight of the bigger forces at play, and genuinely fail to see why yesterday’s approach may not work tomorrow.  It’s our job to help people see this.

One of the most effective tactics (and a reason why we want to do this anyway) is to work together successfully on common projects.  It has to be real.  It has to be projects where both BI and IR are needed, and where both get appropriate credit.  As we know project work is difficult.  Projects need to be managed carefully, and be strongly and visibly supported by senior administration.  There has to be a reason for people to work together that they can see and feel.  But the shared experience of working together successfully on something big is powerful.  When we finish we get to say, “We did that!”

There are other things we need to do that contribute greatly to coming together.  In our case, where we have combined the groups, we need to have a common physical work space, common processes (when appropriate), common office infrastructure, and common data space.  As much as we can we need to get there in a way that doesn’t support the perception of there being winners and losers.  This is difficult, and we are rightly suspicious when we hear that A and B are merging—is it a merger, or a takeover?

Conclusion

Indeed this is difficult work, but we need to do it, and we can do it.  One final thought—there should be a human touch on all of this, with a sense of humor and a sense of balance!  As a friend likes to say, “There’s a lot of stuff, but after all it’s just stuff.”

[1] Variously attributed to George Bernard Shaw, Oscar Wilde, and even Winston Churchill!

© Performance Architects, Inc. and Performance Architects Blog, 2006 - present. Unauthorized use and/or duplication of this material without express and written permission from this blog's author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Performance Architects, Inc. and Performance Architects Blog with appropriate and specific direction to the original content.