Monday, 12 June 2017

EPM Cloud – Managing applications with Smart View

One of the recent new features in EPM Cloud is the ability to manage applications with Smart View. This is not to be confused with managing metadata using the Planning Admin Extension in Smart View, which has been around for a while and which I wrote a blog on back in 2013.

The announcements and new features document for June 2017 has the following information on the new functionality.

“Service Administrators can now use Microsoft Excel to manage applications in Oracle Smart View for Office. Using a downloadable Excel workbook template, you can quickly create Planning applications in Smart View. By editing worksheets within the application template workbook, you define application properties, dimension and member properties, security (access permissions), and substitution variables. You can also load data from the template. Additionally, Service Administrators can edit and delete the application from Smart View.”

So basically, with the use of an Excel template and Smart View, you can create, update or delete an application.

The Excel template allows you to manage:
  • Application properties
  • Dimension and member properties
  • Attribute dimensions
  • Data loads
  • Access permissions
  • Substitution Variables
At the moment this only looks to be available for PBCS and EPBCS; there is no mention of FCCS.

As I have already covered metadata management in the past, I thought I would look into this alternative method to manage an application. Currently this functionality only exists in EPM Cloud and who knows if it will ever be pushed down to on-premise; it is obviously there hidden in the latest versions of Smart View, but it would require an update to Planning to bring it to life.

It is worth mentioning that there is no need to have the Planning Admin Extension installed in Smart View to use the application management functionality.

For managing an application it is all about the Excel workbook template; Smart View is only really sending the Excel file to planning. To get up and running you will first need the template, and this can be obtained in a couple of ways.

The first way to obtain the templates is through the downloads page, which can be accessed by clicking your user name once logged into an instance through the web.

Under Planning Extension there is an option to select “Download Application Templates”

This will download a zip file named “PlanningApplicationTemplates” which contains three Excel workbooks.


The zip file contains a template for PBCS, EPBCS and a version of the Vision sample application for PBCS.

The alternative method for downloading a single template Excel workbook is from the Smart View panel: once logged into the cloud instance there should be an option “Download Template” in the bottom right corner, or it is available by selecting EPM Cloud and then right clicking your instance name.

If no application exists there will also be the option for “Create New Application” which I will cover later.

Selecting “Download Template” will download and open “ApplicationTemplateFile.xlsx”.


The template has the following worksheet tabs, which are enough to get you going with managing an application.

The template does not include worksheets for attribute dimensions or data, so for this post I am going to switch to the sample template; it is based on the Vision application so should be familiar to many.

The sample template includes data and attribute worksheets.

Let us run quickly through each of the worksheets in the template to get an idea of the format; to be honest there is nothing complex about them and they shouldn’t need much explaining.

Starting with the definition worksheet which defines the application properties and cubes.

There are no list boxes or any type of validation in the template. There are some validations when the file is loaded to planning, but not for all areas, so you need to make sure you enter the property values correctly; the documentation contains the default and valid values available for each property.

In order to create an application, the only properties that are actually required are the application name and description; if any of the other properties are left blank, the default value will be used.

Under the cubes section if the type is left blank then BSO will be taken as the default.

The definition sheet is just a replication of the details screen when creating an application through a browser.

The advantage of using the template is that you can define all the cubes, instead of the two which are available when creating through a browser.

Moving on to the dimension sheets, I will only cover one of them because they are self-explanatory; the naming convention for the sheets should be Dim.<name>, for example

The actual dimension name is picked up from the worksheet so in theory you can put whatever you like after “Dim.” but it makes sense to keep them aligned.

Not all member properties are included as column headers in the template so if you need to include additional properties then you will have to manually add them.

Also, to confuse matters, instead of “Parent” there is “Roll up Member”, so there are slight differences between the template, Smart View metadata management and metadata import files.

Using Smart View metadata management it is named “Parent Member”.

In a metadata import file it is defined as “Parent”.

I tested updating the template and changed “Roll up Member” to “Parent” and it still worked so it looks like using either should be ok.

If you are looking to find out what a property name should be then have a look at a metadata import/export file, as they should match.

If an invalid member property is included as a header then no error will be generated and the column will be ignored.

If an invalid property value is included like:

An error will be generated when loading.
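Since an unrecognised header is silently ignored, a quick sanity check of the column headers before loading can save some head scratching. Here is a rough sketch of the idea in Python; the property names in the set are just examples, and the real list should be built from a metadata export file for your own application:

```python
# Sketch: warn about column headers Planning would silently ignore.
# KNOWN_PROPERTIES is illustrative - build the real list from a
# metadata export file for your own application.
KNOWN_PROPERTIES = {
    "Roll up Member", "Parent", "Alias: Default", "Data Storage",
    "Data Type", "Operation",
}

def check_headers(headers):
    """Return any headers that are not recognised member properties."""
    return [h for h in headers if h not in KNOWN_PROPERTIES]

unknown = check_headers(["Roll up Member", "Data Storage", "Data Storge"])
print(unknown)  # -> ['Data Storge'] - this typo column would be silently ignored
```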

What does not exist in the Excel templates or the documentation is Smart Lists, so I am not yet sure if you can create them; if you can’t, this would mean you couldn’t assign them to members until they have been created.

Once the Smart Lists do exist it is possible to assign them to members using the template, as I have successfully tested.

Next on to attribute dimensions, where the format for the worksheet tab should be Attribute.<name>, for example

Once again the dimension name is picked up from the worksheet so you can put anything you like after “Attribute.”.

On to the advanced settings worksheet which should be named the same:

During my initial testing I couldn’t get the advanced settings to be picked up and the default values were used, I am not sure if this is a bug.

Anyway, the worksheet allows you to define the evaluation order settings for each cube in the application.

The equivalent settings in the planning applications are:

The sheet has a section to define the dense/sparse settings; for some reason the template includes populated values for ASO cubes, which doesn’t make sense.

The equivalent settings in the planning application are:

The dimension settings can also be defined in the worksheet.

It is also possible to define which dimensions are valid for each cube in the application.

Now on to substitution variables where the sheet should be named “Variables”.

I don’t think I need to explain the contents of this worksheet as hopefully it should be clear.

On to access permissions where the worksheet should be named Security.

This is quite useful as it has a simple to use format, like the on-premise planning importsecurity utility, instead of the forced XML format in migrations (LCM).

If you are looking for the best place to understand all the possible values for the worksheet, check out the on-premise importsecurity documentation as currently the cloud documentation does not provide all the detail.

Finally on to the data worksheet, which should be named with the prefix “Data.”; it is up to you what you put after that, for example.

The cube name is defined in the worksheet in cell B3.
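To recap the naming conventions covered so far, here is a small Python sketch of how the worksheet tabs could be routed by their prefix; the routing logic is purely illustrative, and the exact reserved names such as “Definition” and “Advanced Settings” are my assumption based on the template:

```python
# Sketch: classify template worksheet tabs by the naming conventions
# described above. The fixed sheet names are assumptions from the template.
def classify_sheet(name):
    if name.startswith("Dim."):
        return "dimension"
    if name.startswith("Attribute."):
        return "attribute dimension"
    if name.startswith("Data."):
        return "data load"
    if name in ("Definition", "Variables", "Security", "Advanced Settings"):
        return name.lower()
    return "ignored"

for tab in ["Dim.Account", "Attribute.Region", "Data.Benefits", "Variables"]:
    print(tab, "->", classify_sheet(tab))
```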

The column header containing the members should be set as “Dimension”; I did try to set it to the dimension names, but when I did the data was not updated in the application.

I did notice a slight issue if there are blank cells in the data, if I load the above data set then the data is fine.

If I remove the value for “BegBalance” so the cell is blank.

After loading the data you can see the periods after the blank cell have shifted.

To get around this any blank cells should be populated with #missing.

The data will then be correct.
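If the data file is being generated from another process, a simple pre-processing step can plug the gaps with #missing first; a minimal Python sketch of the idea, where a list stands in for the cells of a data row:

```python
# Sketch: replace blank data cells with #missing so the columns do not
# shift when the data is loaded. A list stands in for a worksheet row.
def pad_missing(row):
    return ["#missing" if cell in ("", None) else cell for cell in row]

row = ["Grade 1", 100, "", 250, None]
print(pad_missing(row))  # -> ['Grade 1', 100, '#missing', 250, '#missing']
```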

It is also worth pointing out that only up to 1,000 rows of data can be loaded; anything more than that will be ignored. For example, I tried to load the following, which had more than 1,000 rows of data.

No error was returned, but a retrieve shows that the data after 1,000 rows was ignored.

It looks like the data is loaded directly to Essbase so only numeric values can be included; the default load method will overwrite any existing values in the database with the values from the file.
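If you do have more than 1,000 rows to load, one workaround would be to split the data into batches of 1,000 rows and load each batch separately; a quick Python sketch of the splitting, under the assumption that each chunk is then written out as its own data file:

```python
# Sketch: split a data set into chunks of 1,000 rows, since anything past
# row 1,000 on a data worksheet is silently ignored. Each chunk would then
# be loaded as a separate file.
MAX_ROWS = 1000

def chunk_rows(rows, size=MAX_ROWS):
    return [rows[i:i + size] for i in range(0, len(rows), size)]

rows = list(range(2500))          # stand-in for 2,500 data rows
chunks = chunk_rows(rows)
print([len(c) for c in chunks])   # -> [1000, 1000, 500]
```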

That covers the worksheets in the template so let us move on to creating a new application.

After connecting to a cloud instance through Smart View, if no application already exists there should be the option to “Create New Application” in the bottom right panel.

Once selected, the Excel file will be uploaded and the application creation process will be initiated; be patient, as creating an application can take a while and Excel will look like it is not responding until the process is complete.

If the creation was successful you should see the application, dimensions (if you have the Planning Admin Extension enabled) and cubes in the Smart View panel.

After creating the application you can update the application using the Excel templates, there should be the “Update Application” option in the bottom right panel.

Alternatively you can right click the application and select “Update Application”.

The application can be updated with any of the sheets from the template file, so for example if I want to create a new substitution variable I can create an Excel file with a single worksheet named “Variables”.

The only requirement is that the worksheet sticks to the same standard format as the template file.

The advantage of only including the sheets where you want to update the application is that the update process is much faster.

Using my example, a new substitution variable has been created in the application.

The functionality is similar to LCM in that it will not delete anything; only creates and updates are carried out.

If any members are updated then a planning refresh will automatically be performed.

Not that it will be used that often, but there is also the option to delete an application, which can be done from the bottom right panel or by right clicking the application.

You will need to confirm that the application should be deleted.

Once deleted a confirmation message will be displayed.

I am going to leave it there for this post, if I find the time I will create a follow up post to go through in more detail the process behind the application creation, update, deletion and replicate this outside of Smart View.

Friday, 26 May 2017

FDMEE/Data Management – All data types in Planning File Format – Part 2

Moving swiftly on to the second part, where I am going to look at the “all data types in planning file format” load type in Data Management. The functionality is basically the same as what I covered in the last part with the on-premise Outline Load Utility, but it has now been built into Data Management; I am hoping that the examples in the last post will make it easier to follow when setting up the integration in Data Management.

Currently the functionality only exists in EPM Cloud, but I would expect it to be pushed down to on-premise FDMEE in a future release; I will update this post once it is available.

Once again I will start out with the same employee benefits source data but this time the file can be kept much simpler as Data Management will handle the rest.

Just like with on-premise the data load settings need to be applied and these can be accessed through the navigator under integration.

It is a shame that these settings cannot be dynamically generated or be defined in Data Management instead of having to set them in planning.

On to Data Management and creating the import format, the file type is set to “Multi Column – All Data Type”.

In the import format mappings I have basically fixed the members to load to by entering them into the expression field; this replicates the example in the last part using the OLU and fixing the POV in the file.

For the account dimension I could have entered any value in the expression field as it will be mapped using the line item flag in the data load mappings.

The Data dimension will be defined by selecting add expression and choosing Driver, I explained this method in detail in a previous blog on loading non-numeric data.

Basically the driver dimension is selected, which in my example is the Property dimension; the first row is the header row and contains the driver members, and the data is across five columns.

The mapping expression window provides examples if you are unsure of the format and the expression field will be updated with the values entered.

The data rule is created in the same way as any other integration.

The difference comes when setting up the target options, the load method this time will be “All data types in Planning File Format”.

There are also properties to define the Data load and Driver dimensions, these will match what has been set in the Data Load Settings in planning.

It seems a bit excessive having to set the driver dimension in planning, in the import format and in the data load rule; it would be nice if all these settings could be applied in one place in Data Management.

There is only one difference with the data load mappings, and that is for the data load dimension the LINEITEM format must be used.

The target value will need to be manually entered with the data load dimension parent member, but after going through my example with the OLU it should be clearer why it is required.

On to the data load and the data columns in the file will be converted into rows in the workbench.

In my source file there are five columns and four rows of data so a total of twenty records are displayed in the workbench.
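The column-to-row conversion can be pictured with a small Python sketch; the driver member names are based on my example, and the logic is just an illustration of what the workbench displays, not how Data Management actually implements it:

```python
# Sketch: how data columns become workbench rows. With five driver columns
# and four source rows, the unpivot yields twenty records. Driver names
# are illustrative, based on the employee benefits example.
drivers = ["Grade", "Benefit Type", "Start Date", "Value", "Active"]

def unpivot(rows):
    records = []
    for row in rows:
        for driver, value in zip(drivers, row):
            records.append((driver, value))
    return records

source = [[f"r{r}c{c}" for c in range(5)] for r in range(4)]  # 4 rows x 5 cols
print(len(unpivot(source)))  # -> 20
```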

The final step is to export and load the data into the planning application.

All good but a quick check of the data form in planning and something is not right.

Only the equivalent of one row of data has been loaded and the data that has been loaded is not correct.

The process log confirms that the outline load utility is definitely being used to load the data just like with the earlier example I went through, though in this case only one row has been processed and loaded.

13:10:36 UTC 2017]Outline data store load process finished. 1 data record was read, 1 data record was processed, 1 was accepted for loading (verify actual load with Essbase log files), 0 were rejected.

13:10:36,825 INFO  [AIF]: Number of rows loaded: 1, Number of rows rejected: 0

I checked the file that Data Management had generated before loading with the OLU and even though the format is correct there was only one record of incorrect data in the file.

The file should have been generated like:

The file is generated by converting rows into columns by using the Oracle database pivot query and outputting driver members and values as XML.

13:10:29,528 DEBUG [AIF]: SELECT * FROM ( SELECT ACCOUNT,UD4,DATA,'"'||ENTITY||','||UD1||','||UD2||','||UD3||','||SCENARIO||','||YEAR||','||PERIOD||'"' "Point-of-View"
                      ,'Plan1' "Data Load Cube Name" FROM AIF_HS_BALANCES WHERE LOADID = 598 )

I replicated the data load in on-premise FDMEE, ran the same SQL query and only one row was returned.

The query returns the driver members and values as XML which then must be converted into columns when generating the output file.
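To give an idea of what that conversion involves, here is a hedged Python sketch that parses pivot-style XML back into driver columns; the XML snippet is a simplified stand-in for the real query output, not a capture of it:

```python
# Sketch: turning pivot-style XML back into driver-member columns.
# The XML below is an illustrative stand-in for what the pivot query
# against AIF_HS_BALANCES might return for a single POV.
import xml.etree.ElementTree as ET

pivot_xml = """<PivotSet>
  <item><column name="DRIVER">Grade</column><column name="DATA">Grade 1</column></item>
  <item><column name="DRIVER">Active</column><column name="DATA">Yes</column></item>
</PivotSet>"""

def xml_to_columns(xml_text):
    """Return a {driver member: value} mapping from the pivot XML."""
    columns = {}
    for item in ET.fromstring(xml_text).findall("item"):
        cols = item.findall("column")
        columns[cols[0].text] = cols[1].text
    return columns

print(xml_to_columns(pivot_xml))  # -> {'Grade': 'Grade 1', 'Active': 'Yes'}
```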

At this point I thought it might be a bug, but thanks to Francisco for helping keep my sanity: I was missing a vital link which was not documented. I am sure the documentation will get updated at some point to include the missing information.

If you have records that are against the same POV then you need a way of making the data unique so that when the SQL query is run all rows are returned, this is achieved by adding a lookup dimension and identifying a driver member that will make the data unique.

If you take the data set I am loading the driver member “Grade” values are unique so this can be defined as a lookup dimension.
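The effect of the missing key can be simulated with a few lines of Python; the record layout is purely illustrative, but it shows why rows sharing a POV collapse into a single record until a unique lookup value is added to the key:

```python
# Sketch: why the lookup dimension is needed. The pivot effectively groups
# records by POV; without a distinguishing key, rows sharing a POV collapse
# into one. Field names are illustrative.
def pivot_group(records, key_fields):
    groups = {}
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        groups[key] = rec  # later records overwrite earlier ones
    return groups

records = [
    {"pov": "Jan,FY17", "grade": "Grade 1", "value": 100},
    {"pov": "Jan,FY17", "grade": "Grade 2", "value": 200},
    {"pov": "Jan,FY17", "grade": "Grade 3", "value": 300},
    {"pov": "Jan,FY17", "grade": "Grade 4", "value": 400},
]

print(len(pivot_group(records, ["pov"])))           # POV only -> 1 row survives
print(len(pivot_group(records, ["pov", "grade"])))  # with lookup -> all 4 rows
```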

To do this you first add a new lookup dimension to the target application.

The lookup dimension name must start with “LineItemKey” and depending on the data that is being loaded you may need multiple lookup dimensions to make the records unique.

Next in the import format mappings the dimension should be mapped to a column containing the driver member.

The “Grade” member is in the first column in my source file so I map the lookup dimension to that.

After adding a like for like data load mapping for the lookup dimension the full load process can be run again.

The workbench includes the lookup dimension and is mapped to the driver member Grade.

The SQL statement to generate the file now includes the lookup dimension, which was defined as column UD5 in the target application dimension details.

17:16:36,836 DEBUG [AIF]: SELECT * FROM ( SELECT ACCOUNT,UD4,DATA,'"'||ENTITY||','||UD1||','||UD2||','||UD3||','||SCENARIO||','||YEAR||','||PERIOD||'"' "Point-of-View"
                      ,'Plan1' "Data Load Cube Name" ,UD5 FROM AIF_HS_BALANCES WHERE LOADID = 634 )

17:16:36,980 INFO  [AIF]: Data file creation complete

Once again I replicated in on-premise and the query correctly returns four records.

Even though the query results include the lookup dimension this will be excluded when the output file is created.

This time the process log shows that four records have been loaded using the OLU.

17:16:49 UTC 2017]Outline data store load process finished. 4 data records were read, 4 data records were processed, 4 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

17:16:49,266 INFO  [AIF]: Number of rows loaded: 4, Number of rows rejected: 0

The planning form also confirms the data has been successfully loaded and is correct.

Now that I have the integration working I can test out the rest of the functionality; I am going to load a new set of data, but where data already exists for the unique identifier driver members.

The unique identifier members are “Grade” and “Benefit Type”, data already exists under “Total Benefits” for “Grade1” and “Health Insurance” so the data being loaded should replace the existing data.

The data has been overwritten, as the value for Active has been changed from “Yes” to “No”.

Now let us load a new set of data where there is no matching data for the unique identifiers.

Before the load there was no data for “Grade 3” so the data should be loaded to the next available child member of “Total Benefits” where no data exists for the given POV.

The data has been loaded against the next available member, which is “Benefit 5”, as no data previously existed for the given POV.

Next I tested what happens when loading a data set with no matching driver member identifiers now that all child members of the data load dimension parent are already populated.

The export fails, and the process log contains the same error as was shown when testing the OLU in the last post.

13:21:07 UTC 2017]com.hyperion.planning.HspRuntimeException: There is no uniquely identifying child member available for this member defined in Data Load Dimension Parent. Add more child members if needed.

13:21:07 UTC 2017]Outline data store load process finished. 1 data record was read, 1 data record was processed, 0 were accepted for loading (verify actual load with Essbase log files), 1 was rejected.

As the log suggests in order for the export to succeed additional members would need to be added under the data load dimension parent.

Since adding the lookup dimension all the data values have been unique for the “Grade” member, so there have been no problems; if I try to load a new set of data where the values are no longer unique, you can probably imagine what is going to happen.

The above data set contains “Grade 1” twice, so now the lookup dimension is not unique; even though the load is successful, we are back to where we were earlier with one record of incorrect data being loaded.

This means another lookup dimension is required to make the data unique again so I added a new lookup dimension, mapped it to the “Benefit Type” column in the import format, created a new data load mapping for the new dimension and ran the process again.

In the workbench, there are now two lookup dimensions present which should make the data unique when creating the export file.

Much better, the data loaded to planning is as expected.

On the whole, the functionality in Data Management acts in the same way as the on-premise Outline Load Utility. I do feel the setup process could be made slicker, and you really need to understand the data: if you don’t define the lookup dimensions to handle the uniqueness correctly, you could end up with invalid data being loaded to planning.