Monthly Archives: January 2017

Getting Started with The Oracle Node.js Connector

January 25, 2017

Author: Tony Tauro, Performance Architects

In an earlier blog post, we offered a brief introduction to Node.js and a few use cases. Now, let’s get our paws dirty and use Node.js to run SQL against an Oracle database.

Setup Development Environment

First, the prerequisites. The official GitHub site has a complete list, but here’s the gist (for a Windows machine):

  1. Install Python 2.7 (it needs to be 2.7, not 3.5)
  2. Install a C Compiler with support for C++ 11 (this comes standard with Visual Studio 2012, but with 2015 you’d need to install the Visual C++ Template via NuGet)
  3. Install the small and free Oracle Instant Client. You don’t need the Instant Client if you have the Full Client or the database installed on your machine, but it should be version 11.2 or greater.
  4. Finally, you also need Node.js; download it from here. To save yourself some CLASSPATH jiu-jitsu later, follow this instruction: “Make sure the option to add the Node and npm directories to the path is selected.” Choose 32-bit or 64-bit to match your Oracle client.

And with that, we are done with the downloads and installs. Well, almost! All of that was to set up your development environment; we’ll install the Oracle Node.js connector after we start our project.

Create A Project Folder

Create a folder called “node” anywhere you want, though I recommend not putting it somewhere that is automatically backed up. In a Node project, we install npm modules, which can take up space. There’s some debate as to whether these should be backed up with your source code or not; for a practice exercise like this, I’d say don’t back them up. So, to keep things simple, let’s do this in the command line:

cd \

mkdir node

cd node

mkdir oracledb01

cd oracledb01

The very last thing we need to install is the oracledb module. Since we’ve installed Node already, just type in:

npm install oracledb

That will take some time to run, so go get a coffee or have a light saber fight. Better still, do both. Once that is complete, we are ready to write some code.

Writing the Code

First, let’s get a reference to the oracledb module we just installed (during the light-saber fight). This object is our link to the Oracle database:

var oracledb = require('oracledb');

Then let’s put our database credentials into an object:

var dbConfig = {
  user: 'testuser',                // TODO: Replace with actual user
  password: 'testpassword',        // TODO: Replace with actual password
  connectString: 'localhost/orcl'  // TODO: Replace ... you know the drill
};

Next, add a simple select statement (with a placeholder for the actual department id):

var selectStatement = "SELECT department_id, department_name FROM departments WHERE department_id = :did";

Now, we are going to open a connection using the “getConnection()” method, execute the SQL, and get the results. One thing about Node that regularly trips up newcomers is that it is asynchronous: if you simply called the getConnection() method and tried to access the connection on the next line, it would fail. Since getConnection() is asynchronous, if you want to ensure code is executed only after the connection is established, you have to pass that code as a callback to the getConnection() method. The method looks like this:

oracledb.getConnection(<credentials object>, <callback function>)

Notice that the callback function is the second parameter to the getConnection() method. Thus, this is how we code our getConnection:

oracledb.getConnection(
  dbConfig,
  // This function is called once getConnection returns
  function(err, connection)
  {
    // Check the error object. If it's not null, then...
    if (err) {
      // ...connection failed. Write to console and exit
      console.error(err.message);
      return;
    }
    // Now use the connection object you got to execute your SQL
    connection.execute(
      selectStatement,  // The select statement
      [180],            // The department id we want to retrieve
      // This function is called once connection.execute returns
      function(err, result)
      {
        // Check the error object. If it's not null, then...
        if (err) {
          // ...SQL execution failed. Write error message to console...
          console.error(err.message);
          // ...RELEASE the connection and then exit
          doRelease(connection);
          return;
        }
        // SQL execution was successful. Write the meta-data and data to console
        console.log(result.metaData);
        console.log(result.rows);
        // RELEASE the connection
        doRelease(connection);
      });
  });

The code above should be self-explanatory thanks to the comments, but the “RELEASE” lines need a follow-up. To release a connection, we just call “release()” on the connection object, so we need one last function to complete our simple program:

function doRelease(connection)
{
  connection.release(
    function(err) {
      if (err) {
        console.error(err.message);
      }
    });
}

Save this as “select01.js” in the “oracledb01” folder. To execute it, simply type in:

node select01.js

You should see two lines of output:

[ { name: 'DEPARTMENT_ID' }, { name: 'DEPARTMENT_NAME' } ]

[ [ 180, 'Construction' ] ]

That’s just the first step, of course. This entire example is based on the “select1.js” example from the GitHub download (it should be in the examples folder under “oracledb01”); I have just simplified it a little and added comments. Note also that this is meant to be quick and dirty. In a real application, you won’t put credentials in your source code (it’s better to keep them in Node environment variables or a config file), and you won’t hard-code your selection criteria. You can also recycle your connections instead of opening and closing them each time, because the Node connector supports connection pooling, and you can use “Promises” to streamline the async calls. Most importantly, you can plug this right into an enterprise-grade framework like Express.js or Sails.js and build an enterprise application on an Oracle back-end.
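
As a taste of the connection pooling mentioned above, here is a minimal sketch using the same oracledb module. The pool sizes are illustrative (not recommendations), the credentials are placeholders just like before, and doRelease() is the helper we defined earlier:

var oracledb = require('oracledb');

// Create a pool of reusable connections (sizes here are just illustrative)
oracledb.createPool(
  {
    user: 'testuser',                // TODO: Replace with actual user
    password: 'testpassword',        // TODO: Replace with actual password
    connectString: 'localhost/orcl', // TODO: Replace ... you know the drill
    poolMin: 1,                      // Keep at least one connection open
    poolMax: 4                       // Never hold more than four
  },
  // This function is called once the pool is created
  function(err, pool) {
    if (err) {
      console.error(err.message);
      return;
    }
    // Borrow a connection from the pool instead of opening a fresh one
    pool.getConnection(function(err, connection) {
      if (err) {
        console.error(err.message);
        return;
      }
      connection.execute(
        'SELECT department_id, department_name FROM departments WHERE department_id = :did',
        [180],
        function(err, result) {
          if (err) {
            console.error(err.message);
          } else {
            console.log(result.rows);
          }
          // With a pool, release() hands the connection back for reuse
          // instead of closing it
          doRelease(connection);
        });
    });
  });

The first pool.getConnection() call still pays the cost of opening a physical connection, but later calls reuse it, which is exactly what you want in an application that runs many small queries.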



How to Implement Constant Currency (FX Impact) in Oracle Hyperion Planning

January 18, 2017

Author: Vahram Gabrielyan, Performance Architects

As a financial analyst for a multinational company, it was my job to come up with variance explanations for our monthly, quarterly and annual financial reports. At the end of 2015, the strong U.S. dollar started causing major fluctuations in our financials. Foreign exchange (FX) fluctuations became a constant talking point in our monthly business reviews.

Even though we had all of our data loaded in Oracle Hyperion Planning, I found myself using multiple Excel tabs to manually calculate the FX rate fluctuations in our financials. During a month-end close, I found myself ruminating over my manual calculations. Two thoughts came to mind: 1) “Dang, boy, that looks good.” and 2) “There has to be a better way to do this.”

Fast forward to my current position as a consultant at Performance Architects working on Oracle EPM (Hyperion) implementations. At my current client, we were tasked with creating a Hyperion Planning module that would calculate constant currency (CC), or FX impact. All of the Excel tabs that I used to come up with a single currency fluctuation would now be replaced by a business rule that retrieves that one number with one click. Here are some key points to consider when looking to create your own CC capability.

First, decide what kind of conversion your company needs. Choose between the following options:

  • Convert current actuals using previous year actual rates
  • Convert previous year actuals using current year actual rates

Both of the options are valid; choosing one simply depends on your organization’s needs. My client decided to use previous year rates to convert the latest data.
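
To make the first option concrete, here is a quick sketch of the math with made-up numbers (the amounts and rates below are purely illustrative, not from any client): translate this year’s local-currency actuals once at this year’s average rate and once at last year’s, and the difference between the two results is the FX impact.

// All figures here are made up for illustration
var localActuals = 100;   // Current-year actuals in local currency (say, EUR)
var currentRate  = 1.05;  // Current-year average EUR-to-USD rate
var priorRate    = 1.20;  // Prior-year average EUR-to-USD rate

var reportedUSD = localActuals * currentRate;        // 105: what the books show
var constantCurrencyUSD = localActuals * priorRate;  // 120: same activity at last year's rates

// FX impact: the slice of the variance explained purely by the rate move
var fxImpact = reportedUSD - constantCurrencyUSD;    // -15
console.log('FX impact: ' + fxImpact);

In a Hyperion Planning implementation this arithmetic would live in a business rule rather than a script, but the calculation it performs is the same.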

Next, decide on CC scenarios. Some CC scenario examples include, but are not limited to, these options:

  • Current forecast at budget rate
  • 2015 actuals at 2014 actual rates
  • Current forecast at previous forecast rates

Based on your reporting needs, the number of CC scenarios may vary. We created eight scenarios at my current client. Keep in mind, a larger number of scenarios will give you reporting flexibility, but it may increase the maintenance time.

Lastly, I recommend creating a separate set of substitution variables (system variables) for CC calculations.  Most often, CC calculations would occur after the books are closed and substitution variables are updated. However, you may still need the previous month/year data for your calculations.

To conclude, FX fluctuations have become more common in financial variances, and analysts spend more time manually calculating them. It may be a good idea to talk to your Oracle Hyperion Planning team or partner to build a CC capability in your Oracle Hyperion Planning or Oracle Essbase environment.

Need help figuring out FX impacts in your planning environment? Contact us at sales@performancearchitects.com and we can help you out!



Countdown to BIWA Summit 2017…The Performance Architects Inside Scoop on Conference Events and Presentations

January 11, 2017

Author: Kirby Lunger, Performance Architects

The annual Oracle Business Intelligence, Warehousing and Analytics (BIWA) Summit is taking place from January 31 – February 2, 2017 at the Oracle Conference Center in Redwood Shores, CA, and the Performance Architects team wants to make sure that you’re up to speed on what we think are some of the “can’t miss” events and sessions at the conference!

The event is hosted by the Independent Oracle Users Group (IOUG), which was founded in 1993 as “the voice for Oracle technology and database professionals.”  Three groups within IOUG oversee this event: the BIWA Special Interest Group (SIG); the Oracle Spatial & Graph Special Interest Group (SIG); and the Northern California Oracle Users Group (NoCOUG).

As a result, the conference is billed as “THE Big Data + Analytics + Spatial + Cloud + IoT + Everything Cool User Conference” and doesn’t disappoint, with ~85 sessions spread across eight tracks: Advanced Analytics; Big Data; Business Intelligence & Visualization; Cloud Computing; Data Warehousing & SQL; General; Internet of Things; and Spatial & Graph.  We strongly recommend attending the Oracle product roadmap sessions, as these provide an important preview about what’s happening in the BI/business analytics world in between the highlights generally provided at OpenWorld in the fall and COLLABORATE and Kscope later in the spring.

I’m giving two presentations during the conference.  The first is on January 31st at 4:55 PM for the Business Intelligence & Visualization track, entitled, “Leadership Essentials in Successful Business Intelligence (BI) Programs” [Room 203: 839662].  The second is on February 2nd at 12:00 PM for the Cloud Computing track, entitled, “Avoid Frankenstein: Optimize Business Analytics Environments for Cloud & On-Prem” [Room 104: 839660].  Please come by and heckle me…or at least say “Hello!”  I know that many of us these days are accustomed to working remotely, but it would be great to finally connect names and voices with faces…and to have some fun together in person!

Speaking of having fun in person, the Oracle Developer Tools User Group (ODTUG) is hosting a meetup for the ODTUG BI community after-hours at this event on February 1st at St. James Gate Pub and Restaurant.  Sign up here.  Should be a good time.

Is there anything you’re looking forward to at this event?  Leave us a note and let us know!



Oracle News: Removal of Support for Financial Reporting Studio Desktop Client

January 9, 2017

Author: Melanie Mathews, Performance Architects

If you are running Oracle Planning and Budgeting Cloud Service (PBCS), the Performance Architects team wants to highlight an important recent update that you should know about. Oracle is committed to providing a simple and intuitive experience to the users of the service. To achieve this goal, Financial Reporting Web Studio, a component available in the service since March 2016, will be established as the only tool for designing and building reports. As a result, support for the Financial Reporting Studio desktop client is tentatively scheduled to end with the May 2017 update, and Oracle will no longer distribute the desktop client starting with that update.

For more information on how to transition to Financial Reporting Web Studio, please email us!

 



What is the Source Cube in Oracle PBCS…and What Does HSP_NOLINK UDA Do?

January 4, 2017

Author: Andy Tauro, Performance Architects

The “Source Cube,” or “Source Plan Type,” is a feature of Oracle Planning and Budgeting Cloud Service (PBCS) that determines the flow of data between the models or plan types of an application.

The idea behind this is that the plan for a model is authored in one plan type, and the others use that as a source, automatically, without the need to move data. This is driven by the dimension tagged as type “Account.”

When building a planning application with more than one model, the account dimension is built with separate hierarchies for each model – the relevant hierarchies are enabled only for the specific plan type / model.

However, it is not unusual to have members that are common across models. In such cases, the BSO plan types that are not the ‘source’ for a member get a member formula with an XRef to the member in the source plan type. This is implemented in the Oracle Essbase back-end, and is not obvious in the web interface.

Why can this be a problem?

The XRef member formulae allow for dynamically retrieving data on-demand, without the need for any other process. While this is theoretically a good thing, there are a few situations where this may not be the best way to go. As the size of the dimension grows, the number of such ‘dynamic’ members grows as well. And since data pulls via XRef get data one cell at a time, performance will slow as the size of the block increases. In other words, as the dense ‘Account’ dimension grows, the negative performance impact is twofold.

Additionally, if a dense restructure of the cube is triggered, every member formula is validated. And this can increase the restructure time. Imagine a retrieve being performed on the entire Account dimension! Such a dense restructure is not difficult to encounter in a multi-model planning application, since PBCS performs a “Refresh Database” on all plan types, every time.

With the advent of ASO plan types, it can be tempting to load actuals data into the ASO cube, and use it as a source for the BSO cubes. While this can be done, this method suffers from the same drawbacks as the two previous situations because the BSO plan types will all have XRef member formulas pointing to the ASO source.

How to get out of this?

There are two ways to get out of this, and the first one is more of a best practice: avoid making the ASO plan type the source. Since ASO cubes do not use XRef member formulae, setting a BSO cube as the source eliminates one such data flow path. The second option is to use the HSP_NOLINK UDA.

Applying the HSP_NOLINK UDA to the Accounts dimension tells PBCS’s underlying database, Oracle Essbase, to disregard the “Source Cube” feature and not link the cubes via XRef member formulae. The drawback is that data is no longer automatically sourced on-demand between the plan types. However, this is not that difficult to overcome: the “Map Reporting / Data Maps” feature moves sets of data between plan types, and the “Smart Push” capability allows such moves to be triggered from data entry web forms.

How to apply this?

While analyzing your application to determine which approach to take, start by making a list of account members that are unique to each plan type (cube). These are a no-brainer, as the cube they live in is their source. For the members that are shared, if only one of the cubes involved is BSO, make that the source. It gets trickier when a member is shared between multiple BSO cubes; in such a case, unless the design of the application calls for data flow between the cubes via the implicit XRef pulls, use the HSP_NOLINK UDA and save yourself a world of pain.

Hopefully you have found this helpful in figuring out one (albeit important) aspect of application design when using Oracle PBCS. Check back every so often for more such tidbits that we share from lessons we learn using these tools every day of our working lives.



How to Provide Secure Access to Dashboards in Oracle Business Intelligence Cloud Service (BICS)

January 1, 2017

Author: Linda Stewart, Performance Architects

Oracle Business Intelligence Cloud Service (BICS) is part of Oracle’s Platform as a Service (PaaS) offerings.  Enterprise IT teams can use Oracle cloud solutions to rapidly build and deploy applications without setting up expensive infrastructure.  That said, your IT team still needs to assist with security setup and configuration.  This blog post details important focus areas based on the Performance Architects team’s experiences with BICS security.

Getting Started

When your business signs up for an Oracle Cloud account, Oracle Cloud creates an identity domain specific to your company. When users log in to an Oracle Cloud service through the service account, cloud identity management controls user authentication and determines which features of the service users can access, using Oracle Single Sign-On (SSO).  SSO may be federated between on-premise SSO and Oracle SSO.  Oracle Cloud uses LDAP schemas for storing identities.

User and Role Management

There are two parts to the security in BICS:  Oracle Cloud Identity Domain Users and Roles and Oracle BICS Application Users and Roles.

First, we add the user to Oracle Cloud Identity Management using the Identity Management administrator credentials.  Click Users; then on the following form, click Add:

Typically, we use the email address as the user name.

We now can set up a role in the service for each user.  If the new user will just view reports, click on the Service drop-down and select BICS (Business Intelligence), then click the button with the two “greater than” signs, which pushes the roles to the Selected Roles box.  If the user is not a reports or dashboard author, then uncheck “bics BI Cloud Service Advanced Content Authors.”

Click Add to save the user.

As the administrator, open BICS to complete the user creation.  Open Console and select Users and Roles.

In BICS, there are five predefined application roles.  We do not have to add our new user to any of the predefined application roles.

Let’s say we want to secure our data by sales region.  We have a global sales manager who should be able to see all sales regions, and in our fact table, we have the name of the sales region on each data row.  We want to restrict the dashboard to members of the sales department, and then filter the data so that each salesperson sees only their own regional data while the sales manager sees all regions.  Let’s also say there are marketing departmental dashboards in BICS; marketing should not be able to see the sales dashboard, and sales should not be able to see the marketing dashboard.

To do this, we will establish two application roles to manage the dashboard level.  Add Sales Dashboard. Click Save.  Repeat for Marketing Dashboard.

Add members (users) to Sales by selecting the button at the right end of the Sales Application Role, then select Manage Members:

Search for “John Doe” and add this user to the Sales Dashboard Role by clicking the user name in the left box. Next, click the single arrow to place the user in the Selected Users panel.  Click “OK” to save.

Then, create the Marketing Dashboard role and add users to that role.

After this, we will create the roles to secure the data in the Sales fact. Open Application Role and add the following roles:

  • Global Sales Manager Data
  • Eastern Sales Data
  • Western Sales Data
  • Marketing Data

Use the same process as we did for creating the Sales Application Role and add a member to each role.

We can divide the sales regions in as many ways as we want, as long as the sales region data in our fact table supports the way we want to slice the data.  To simplify the example, we will have just two sales regions plus the sales manager, who may see all regions.

Once the application roles are complete, we navigate to the BICS Modeler to secure the data and then to the BICS Catalog to secure the dashboards.

In the Modeler, we will need administrator rights so we can filter the data by role.  To do this:

  • Open Data Model in the left panel
  • Select Fact-Sales
  • Select Lock to Edit button
  • Select Data Filters tab
  • Click Add
  • Select the Role as Eastern Sales Data
  • Click the FX button and select to filter the fact data on Region=’eastern’
  • Repeat for the Western region

This secures our data.

Lastly, we will secure the dashboards by opening the Catalog:

  • Under our Company Shared Folder, we have Sales and Marketing folders
  • Add both the Sales and Marketing dashboard roles to each folder and apply permissions recursively

Secure sales as follows:

  • Select the Sales folder
  • In the Tasks pane, select Permissions
  • Click the ‘+’ button to add new application roles
  • Select the Sales and Marketing dashboard roles
  • Set as Custom Permissions

Use the pencil icon on the Marketing role and uncheck all of the boxes to change the permissions to No Access.

Repeat the process for the Marketing dashboard, but set the Marketing role to Full Control and the Sales role to No Access.

Do you have detailed questions on how to configure security in BICS?  Contact Performance Architects here and we’ll be happy to discuss in more detail with you.


© Performance Architects, Inc. and Performance Architects Blog, 2006 - present. Unauthorized use and/or duplication of this material without express and written permission from this blog's author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Performance Architects, Inc. and Performance Architects Blog with appropriate and specific direction to the original content.