
EPM Data Integration – Pipeline

 

Case Study

A client needed an interface that would allow non-technical professionals to run daily batches. These batches could include tasks such as pulling Actuals from the source system, updating the Chart of Accounts (CoA), refreshing the Cost Centre structure, and running business rules, among others.

While seeking a solution, we explored numerous alternatives within Data Integration. The challenge was that each involved several intricate steps, requiring users to have a certain level of technical understanding of the Data Integration tool.

Solution

Exciting developments ensued when Oracle introduced a new feature known as “Pipeline.”

 


Pipeline in Data Integration

This innovative addition empowers users to seamlessly orchestrate a sequence of jobs as a unified process. Moreover, the Pipeline feature facilitates the orchestration of Oracle Enterprise Performance Management Cloud jobs across instances, all from a single centralized location.

By leveraging the Pipeline, you gain enhanced control and visibility over the entire data integration process, encompassing preprocessing, data loading, and post-processing tasks.

Yet this merely scratches the surface: the Pipeline introduces a multitude of powerful benefits and capabilities. In this post, we take an in-depth look at the new feature to uncover its potential to transform your data integration process.


Note the following Pipeline considerations:

  • Only administrators can create and run a Pipeline definition.
  • Pipeline replaces the batch functionality in Data Management; existing batches can be migrated automatically to the Pipeline feature in Data Integration.
  • For file-based integrations to a remote server, when a file name is specified in the Pipeline job parameters, the system automatically copies the files from the local host to the same directory on the remote server.

This feature applies to the following Oracle solutions:

  • Financial Consolidation and Close
  • Enterprise Profitability and Cost Management
  • Planning
  • Planning Modules
  • Tax Reporting


Proof of Concept

  The EPM batches to run sequentially are as follows (a scripted sketch follows the list):

Stage 1 – Load Metadata
  1. Load Account Dimension
  2. Load Entity Dimension
  3. Load Custom Dimension
  4. Clear current-month Actuals (to remove any erroneous numbers, if present)
Stage 2 – Load Data
  1. Load Trial Balance from the source
Stage 3 – Run Business Rule
  1. Run the business rule to perform aggregations and calculations.
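As a hedged sketch of how this sequence could be scripted, the following Python uses the EPM Cloud Data Management REST API pattern of submitting a job and polling its status. The instance URL, credentials, rule names, and periods are placeholders, and the exact payload fields should be confirmed against the REST API documentation for your release; the clear step and the Stage 3 business rule would be launched separately (for example, through the Planning REST API or EPM Automate).

    import time
    import requests

    BASE = "https://<instance>.oraclecloud.com/aif/rest/V1"   # placeholder URL
    AUTH = ("service.admin@example.com", "Password1")          # placeholder credentials

    def run_job(payload):
        # Submit a Data Management job, then poll until it leaves the
        # "in process" state (status -1); see the REST API docs for the
        # full list of status codes.
        resp = requests.post(f"{BASE}/jobs", json=payload, auth=AUTH)
        resp.raise_for_status()
        job_id = resp.json()["jobId"]
        while True:
            status = requests.get(f"{BASE}/jobs/{job_id}", auth=AUTH).json()
            if status["status"] != -1:
                return status
            time.sleep(15)

    # Stages 1 and 2: metadata and data loads, each modeled as an
    # integration (data load rule). The rule names are hypothetical.
    for rule in ["LOAD_ACCOUNT", "LOAD_ENTITY", "LOAD_CUSTOM", "LOAD_TRIAL_BALANCE"]:
        result = run_job({
            "jobType": "DATARULE",
            "jobName": rule,
            "startPeriod": "Jan-24",
            "endPeriod": "Jan-24",
            "importMode": "REPLACE",
            "exportMode": "STORE_DATA",
        })
        print(rule, "finished with status", result["status"])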


The workflow for creating and running a Pipeline process is as follows:

  1. Define the Pipeline:

  1. Set the Pipeline Name, Pipeline Code, and Maximum Parallel Jobs.
  2. Use the Variables page to set the out-of-box (global) values for the Pipeline, which you can override with parameters at runtime. Variables can be pre-defined types such as “Period” and “Import Mode”, as sketched below.
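Purely as an illustration of what a definition captures (this is not a Data Integration API object, just a sketch; all names and values are hypothetical):

    # Illustrative only: the kind of information captured on the
    # Pipeline definition page. All names and values are hypothetical.
    pipeline_definition = {
        "pipelineName": "Monthly Close",
        "pipelineCode": "MONTHLY_CLOSE",   # code used when invoking the Pipeline
        "maxParallelJobs": 4,              # jobs allowed to run concurrently
        "variables": {
            "STARTPERIOD": "Jan-24",       # pre-defined "Period" type
            "ENDPERIOD": "Jan-24",
            "IMPORTMODE": "Replace",       # pre-defined "Import Mode" type
        },
    }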

 

  2. Use Stages in the Pipeline editor to cluster similar or interdependent jobs from various applications within a single unified interface. Administrators can efficiently establish a comprehensive end-to-end automation routine, ready to be executed on demand as part of the close process.

Pipeline Stages act as containers for multiple jobs, as shown below:

Stages & Jobs example

 

New stages can be added simply by using the Plus card located at the end of the current card sequence.

 

  3. On the Run Pipeline page, complete the variable runtime prompts and then click Run, as shown below:

 

 

Variable Prompts

 

When the Pipeline is running, you can click the status icon to download the log. Customers can also see the status of the Pipeline in Process Details. Each individual job in the Pipeline is submitted separately and creates a separate job log in Process Details.

Users can also schedule the Pipeline using the Job Scheduler.
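For scheduling outside the EPM Job Scheduler, EPM Automate provides a runPipeline command that a cron or Task Scheduler script can call. The sketch below assumes epmautomate is installed and on the PATH; the pipeline code, credentials, and variable syntax are placeholders, so confirm the exact runPipeline parameters against the EPM Automate documentation for your release.

    import subprocess

    def epmautomate(*args):
        # Thin wrapper that fails loudly if any EPM Automate command errors out.
        subprocess.run(["epmautomate", *args], check=True)

    epmautomate("login", "service.admin@example.com", "password.epw",
                "https://<instance>.oraclecloud.com")       # placeholder URL/creds
    epmautomate("runPipeline", "MONTHLY_CLOSE",             # hypothetical code
                "STARTPERIOD=Jan-24", "ENDPERIOD=Jan-24")   # runtime variables
    epmautomate("logout")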


Review

 


Amir Kalawant

Enterprise Data Management Series – Part 2

In the first part, we presented an overview of EDMCS. In this part, we will discuss what EDMCS has to offer and how. Unlike DRM, where we have versions, hierarchies, and nodes, Oracle has introduced Views, Viewpoints, and Data chains. Let us go through the basic structure of EDMCS.

 

Figure 1 EDM Model

 

Enterprise data within each application is grouped into multiple dimensions, and each dimension has its own data chain. Registering a new application results in the creation of various objects and associated dimensions. An application consists of connected views, dimensions, and associated viewpoints:

  • The View is a collection of Viewpoints.
  • Viewpoints are where users view and work with application data.
  • Each dimension contains a series of related data objects called a data chain, which consists of node types, hierarchy sets, node sets, and viewpoints; a toy model of this nesting follows the list.
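To make the nesting concrete, here is a toy model of the data chain in Python; it is not an EDMCS API, only a way to visualize how node types, hierarchy sets, node sets, and viewpoints build on one another.

    from dataclasses import dataclass, field

    @dataclass
    class NodeType:            # nodes sharing a common business purpose
        name: str
        properties: list = field(default_factory=list)

    @dataclass
    class HierarchySet:        # parent/child relationships between node types
        name: str
        node_types: list

    @dataclass
    class NodeSet:             # the group of nodes a viewpoint makes available
        name: str
        hierarchy_set: HierarchySet

    @dataclass
    class Viewpoint:           # where users view and work with the data
        name: str
        node_set: NodeSet

    product = NodeType("Product", ["Name", "Description", "Cost"])
    chain = Viewpoint("Products by Line",
                      NodeSet("All Products",
                              HierarchySet("Product Rollup", [product])))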

 

These objects are the building blocks of EDMCS, as shown and explained below.

 


Figure 2 Information Model

 

Application

  • EDMCS models each connected system as an application. Click Register to create a new application.

 


Figure 3 Application

 

Dimension

  • Enterprise data is grouped as dimensions such as Account, Entity, and Movement.

Figure 4 Dimension

 

Figure 5 Data Chain Flow

 

Node Type

  • A collection of nodes that share a common business purpose, such as Departments or Entities.
  • Defines properties for the associated nodes. For example, a Product node type can include properties such as Name, Description, and Cost.

Figure 6 Node Type

 

Hierarchy Set

  • A hierarchy set defines parent-child relationships for nodes, for example, Employees rolling up to Departments or Vehicles rolling up to Automobiles.
  • You can define your own hierarchy sets using different relationships between node types.

Figure 7 Hierarchy Set

 

Node Set

  • Defines the group of nodes available in a viewpoint and consists of hierarchies or lists, for example, a hierarchy of Cost Centres or a list of country codes.
  • A node set exposes only the required portion of a hierarchy set in a viewpoint. Consider the figure below, where only the Marketing and Finance nodes are included and the rest of the hierarchy is excluded.

Figure 8 Node Set

 

Viewpoint

  • Viewpoints are used for managing data, such as comparing, sharing/mapping, and maintaining a dimension across applications, for example, viewing a list of accounts, managing a product hierarchy, or exporting an entity structure.
  • Viewpoints are organized into one or more views. Each viewpoint uses a node set and controls how users work with the data in that node set within a specific view.

Figure 9 Viewpoint

 

View

  • A view is a group of viewpoints used, for example, to manage data for a dimension across applications or to integrate data from and to an external system.
  • Users can define additional views of their own to view and manage data for specific business purposes.

 

Figure 10 View Dashboard

 

 

Integration Benefits

Oracle has taken a major leap in improving integration with EDMCS. In DRM, integration with other Hyperion modules was only possible through tables, flat files, or the API, or by developing custom code. EDMCS introduces adapters for various components, such as PBCS, FCCS, and EBS, to connect directly to the respective component. Note: adapters for some components have yet to be released by Oracle; however, you can always integrate using the standard flat-file export.
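Where no adapter exists yet, the flat-file route usually means post-processing an export before the target load. A minimal sketch, assuming a CSV export with hypothetical Name and Parent columns:

    import csv

    # File name and column layout are hypothetical -- match them to the
    # export format of your registered application.
    with open("Entity_export.csv", newline="") as fh:
        rows = list(csv.DictReader(fh))

    # Basic sanity checks before handing the file to the downstream load.
    assert all(row["Name"] for row in rows), "every node needs a name"
    top_level = [r["Name"] for r in rows if not r.get("Parent")]
    print(f"{len(rows)} nodes exported, {len(top_level)} top-level nodes")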

 

Migration made simple

Existing on-premise Data Relationship Management applications can be migrated to EDMCS. The administrator registers the DRM application in EDMCS as a custom application and then imports the dimensional structure. Note: Data Relationship Management 11.1.2.4.330 or higher is supported for on-premise-to-cloud migration.

 

Governance at a different level

Previously, on-premise DRM had a separate Data Relationship Governance (DRG) interface, but EDMCS includes governance as part of the application. In EDMCS, organizations use request workflows to exercise positive control over the processes and methods their data stewards and data custodians use to create and maintain high-quality enterprise data assets. Workflow stages include Submit, Approve, and Commit. Finally, before committing changes, users can visualize the changes and their effect on the hierarchy.

 

Enterprise Data Management Series – Part 1

Welcome to our initial post in a series about the world of metadata management. Enterprise Data Management provides a new way to manage your data assets. You can manage application artefacts that include Master Data (members that represent data, including dimensions and hierarchies), Reference Data (such as page drop-downs for ease of filtering in the front end), and Mappings (master data member relationships). Using these pre-built functions, you will be able to track master data changes with ease.

If your business is restructured to align entities such as Accounts, Products, Cost Centres, and Sales across multiple organizational units, you can create model scenarios, rationalize multiple systems, compare within a system, and more. You can maintain alternate hierarchies for reporting structures that differ from the current ERP system structure. When migrating an application to the cloud, you can define target structures and data mappings to accelerate the process. EDMCS also provides the ability to sync applications from on-premise to the cloud or across cloud applications. The process involves the four C’s below:

  • Collate: The first process involves collecting and combining data sources through application registration, data import, and automation.
    • The registration process establishes application connections with new data source(s).
    • The import process loads data into the application view(s).
    • Automation is built using the REST API features (sketched after this list).
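As a sketch of the automation idea only: the endpoint path, payload, and names below are placeholders rather than the documented EDM API, so consult the Enterprise Data Management REST API reference for the actual import calls.

    import requests

    BASE = "https://<edm-instance>.oraclecloud.com"    # placeholder host
    AUTH = ("service.admin@example.com", "Password1")  # placeholder credentials

    resp = requests.post(
        f"{BASE}/epm/rest/v1/dimensions/imports",      # hypothetical endpoint
        json={
            "application": "CorporatePlanning",        # hypothetical application
            "dimension": "Entity",
            "fileName": "entity_import.csv",
        },
        auth=AUTH,
    )
    resp.raise_for_status()
    print("Import request accepted:", resp.json())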

Figure 1 Collate Process: Register

 

Figure 2 Collate Process: Import

 

  • Curate: Curate helps organize the source data through a request mechanism used to load or change existing data. The request mechanism involves four steps: Record, Visualize, Validate, and Submit.
    • Record: All appropriate actions are recorded together in the Request pane, with the primary action listed first.
    • Visualize: This allows you to visualize all the changes prior to committing them. You can view model changes per request, study their impact, and make any final modifications to the request. Changes made to a view appear in distinct colours and icons so that you can see which parts of the hierarchy or list were changed and what areas may be affected by the change.
    • Validate: Maintain the integrity of data during data entry with real-time validation that checks for duplicates, shared nodes, and data type or lookup violations. You can also run validation during the Record session (a toy example of these checks follows this list).
    • Submit: Once the changes are validated, you can commit them.
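As a toy illustration of the kinds of checks Validate performs (the records and lookup table here are made up):

    # Toy version of two Validate-style checks -- duplicate names and
    # lookup violations -- over a list of (name, parent) node records.
    records = [("1000", "Assets"), ("1010", "Assets"), ("1000", "Liabilities")]

    names = [name for name, _ in records]
    duplicates = {name for name in names if names.count(name) > 1}

    valid_parents = {"Assets", "Liabilities", "Equity"}   # lookup table
    lookup_violations = [(n, p) for n, p in records if p not in valid_parents]

    print("duplicates:", duplicates)                # {'1000'}
    print("lookup violations:", lookup_violations)  # []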

Figure 3 Request Process

 

  • Conform: Conform standardizes rules and policies through the Validate, Compare, and Map process tasks.
    • You can run validations between and across viewpoints, and share application data within and across applications to create consistency and build alignment.
    • Compare viewpoints side by side and synchronize them using simple drag and drop between and across applications.
    • Map nodes across viewpoints to build data maps, and construct an alternate hierarchy across viewpoints using the copy dimension feature. What is a viewpoint? A viewpoint is a subset of the nodes for you to work with.

 

Figure 4 Viewpoint and Various Actions

 

  • Consume: Consume defines how you move the changes to a target application. You can achieve this through download, export, and automation.
    • Download a viewpoint to make changes offline or to make bulk updates.
    • Export moves changes to other application(s) or syncs updated data to external target applications.
    • Using EPM Automate, you can load and extract dimensions to and from the EPM Cloud application, as sketched below.
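A minimal sketch of that consume step using EPM Automate, assuming an EDMCS export file and a metadata import job already defined in the target Planning application; the names and URL are placeholders:

    import subprocess

    def epmautomate(*args):
        # Run one EPM Automate command and stop on the first failure.
        subprocess.run(["epmautomate", *args], check=True)

    epmautomate("login", "service.admin@example.com", "password.epw",
                "https://<planning-instance>.oraclecloud.com")  # placeholders
    epmautomate("uploadFile", "Entity_export.zip")      # file exported from EDMCS
    epmautomate("importMetadata", "IMPORT_ENTITY_JOB")  # pre-defined import job
    epmautomate("logout")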

 

Figure 5 EPM Automate