Tag Archive for: Oracle Cloud

EPM Data Integration – Pipeline

 

Case Study

A client expressed the need for an interface that would allow non-technical professionals to run daily batches. These batches could include tasks such as pulling Actuals from the source system, updating the Chart of Accounts (CoA), refreshing the Cost Centre structure, and running business rules, among others.

While seeking a solution, we explored numerous alternatives within Data Integration. However, each alternative involved several intricate steps and required a certain level of technical understanding of the Data Integration tool.

Solution

Exciting developments ensued when Oracle introduced a new feature known as “Pipeline.”

 


Pipeline in Data Integration

This innovative addition empowers users to seamlessly orchestrate a sequence of jobs as a unified process. Moreover, the Pipeline feature facilitates the orchestration of Oracle Enterprise Performance Management Cloud jobs across instances, all from a single centralized location.

By leveraging the power of the Pipeline, you can gain enhanced control and visibility throughout the entire data integration process, encompassing preprocessing, data loading, and post-processing tasks.

Yet, this merely scratches the surface. The Pipeline introduces a multitude of potent benefits and functionalities. We’re delving into an in-depth exploration of this novel feature to uncover its potential in revolutionizing your data integration process.


Note the following Pipeline considerations:

  • Only administrators can create and run a Pipeline definition.
  • Pipeline replaces the batch functionality in Data Management; existing batches can be migrated automatically to the Pipeline feature in Data Integration.
  • For file-based integrations to a remote server, when a file name is specified in the Pipeline job parameters, the system automatically copies the files from the local host to the remote server under the same directory.

This function applies to the following Oracle solutions:

  • Financial Consolidation and Close
  • Enterprise Profitability and Cost Management
  • Planning
  • Planning Modules
  • Tax Reporting


Proof of Concept

The EPM batches to run sequentially are:

Stage 1 – Load Metadata
  1. Load Account Dimension
  2. Load Entity Dimension
  3. Load Custom Dimension
  4. Clear current month Actuals (to remove any erroneous numbers)
Stage 2 – Load Data
  1. Load Trial Balance from the source
Stage 3 – Run Business Rule
  1. Run the business rule to perform aggregations and calculations.


The workflow for creating and running a Pipeline process is as follows:

  1. Define the Pipeline:

     1. Set the Pipeline Name, Pipeline Code, and the maximum number of parallel jobs.
     2. Use the Variables page to set the out-of-the-box (global) values for the Pipeline; these can be supplied as parameters at runtime. Variables can be of pre-defined types such as “Period” and “Import Mode”.

 

  2. Use Stages in the Pipeline editor to cluster similar or interdependent jobs from various applications together within a single, unified interface. Administrators can efficiently establish a comprehensive end-to-end automation routine, ready to be executed on demand as part of the closing process.

Pipeline stages act as containers for multiple jobs, as shown below:

Stages & Jobs example

 

New stages can be added simply by using the Plus card located at the end of the current card sequence.

 

  3. On the Run Pipeline page, complete the variable runtime prompts and then run the Pipeline, as shown below:

 

 

Variable Prompts

 

When the Pipeline is running, you can click the status icon to download the log. Customers can also see the status of the Pipeline in Process Details. Each individual job in the Pipeline is submitted separately and creates a separate job log in Process Details.

Users can also schedule the Pipeline with the help of Job Scheduler.
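
Beyond the Job Scheduler, a Pipeline can also be triggered programmatically, which is handy for chaining it into an external scheduler. The sketch below is a minimal example assuming the Data Integration REST jobs endpoint; the instance URL, credentials, Pipeline code, and variable names are placeholders to adapt to your own Pipeline definition.

    import requests
    from requests.auth import HTTPBasicAuth

    # Placeholders: the instance URL, credentials, Pipeline code, and variable
    # names all come from your own environment and Pipeline definition.
    BASE_URL = "https://your-instance.oraclecloud.com"
    AUTH = HTTPBasicAuth("service.user", "password")

    payload = {
        "jobType": "pipeline",      # submit a Pipeline job
        "jobName": "DAILYBATCH",    # the Pipeline code defined in Data Integration
        "variables": {              # runtime values for the Pipeline variables
            "STARTPERIOD": "Jan-24",
            "ENDPERIOD": "Jan-24",
            "IMPORTMODE": "Replace",
            "EXPORTMODE": "Merge",
        },
    }

    # Data Integration jobs endpoint (the REST API version may differ by release).
    resp = requests.post(f"{BASE_URL}/aif/rest/V1/jobs", json=payload, auth=AUTH)
    resp.raise_for_status()
    job = resp.json()
    print("Pipeline submitted:", job.get("jobId"), job.get("status"))

Each job submitted this way appears in Process Details with its own log, just as it does when run from the UI.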


 


Amir Kalawant

Oracle Fusion Cloud EPM – 23.08 Update

EPM Cloud August Update

Test Environments: Oracle will apply this monthly update during the first daily maintenance that occurs at or after 22:00 UTC on Friday, August 4, 2023.


Production Environments: Oracle will apply this monthly update during the first daily maintenance that occurs at or after 22:00 UTC on Friday, August 18, 2023.


RELEASE HIGHLIGHTS


HELPFUL INFORMATION

The Oracle Help Center provides access to updated documentation. The updates will be available in the Help Center on Friday, August 4, 2023.

NOTE: Some of the links to new feature documentation included in this readiness document will not work until after the Oracle Help Center update is complete.

Updated documentation is published on the Oracle Help Center on the first Friday of each month, coinciding with the monthly updates to Test environments. Because there is a one-week lag between the publishing of the readiness documents (What’s New and New Feature Summary) and Oracle Help Center updates, some links included in the readiness documents will not work until the Oracle Help Center update is complete.

https://docs.oracle.com/en/cloud/saas/epm-cloud/index.html


FIXED ISSUES AND CONSIDERATIONS

Software issues addressed each month and considerations are posted to a knowledge article on My Oracle Support. Click here to review. You must have a My Oracle Support login to access the article.

NOTE: Fixed issues for EPM Cloud Common components (Smart View for Office, EPM Automate, REST API, Migration, Access Control, Data Management/Data Integration, Reports, Financial Reporting, and Calculation Manager) are available in a separate document on the My Oracle Support “Release Highlights” page.

The full Oracle advisory note can be found here.

Enterprise Data Management Series – Part 2

In the first part, we gave an overview of EDMCS. In this part, we will discuss more about what EDMCS has to offer and how. Unlike DRM, which has versions, hierarchies, and nodes, Oracle has introduced Views, Viewpoints, and Data Chains. Let us go through the basic structure of EDMCS.

 

Figure 1 EDM Model

 

Enterprise data within each application is grouped into multiple dimensions, and each dimension has its own data chain. Registering a new application results in the creation of various objects and associated dimensions. An application consists of connected views, dimensions, and associated viewpoints:

  • The View is a collection of Viewpoints.
  • Viewpoints are where users view and work with application data.
  • Each dimension contains a series of related data objects called a data chain, which consists of node types, hierarchy sets, node sets, and viewpoints.

 

The above objects are the building blocks of EDMCS, as shown and explained below.

 

Information Model

Figure 2 Information Model

 

Application

  • EDMCS models each connected system as an application. You can click Register to create a new application.

 

Figure 3 Application

 

Dimension

  • Enterprise data is grouped into dimensions such as Account, Entity, and Movement.

Figure 4 Dimension

 

Figure 5 Data Chain Flow

 

Node Type

  • A collection of nodes that share a common business purpose, such as Departments or Entities.
  • Defines the properties for associated nodes. For example, a Product node type can include properties like Name, Description, and Cost.

Figure 6 Node Type

 

Hierarchy Set

  • A hierarchy set defines parent-child relationships for nodes; for example, Employees roll up to Departments, or Vehicles roll up to Automobiles.
  • You can define your own hierarchy sets using different relationships between node types.

Figure 7 Hierarchy Set

 

Node Set

  • Defines the group of nodes available in viewpoints and consists of hierarchies or lists, for example a Cost Centre hierarchy or a list of country codes.
  • A node set exposes only the portion of a hierarchy set required in a viewpoint. Consider the figure below, where only the Marketing and Finance branches are included and the rest of the hierarchy is excluded.

Figure 8 Node Set

 

Viewpoint

  • Viewpoints are used for managing data, such as comparing, sharing/mapping, and maintaining a dimension across applications, for example viewing a list of accounts, managing a product hierarchy, or exporting an entity structure.
  • Viewpoints are organized into one or more views. Each viewpoint uses a node set and controls how users work with the data in that node set in a specific view.

Figure 9 Viewpoint

 

View

  • A view is a group of viewpoints used for purposes such as managing data for a dimension across applications or integrating data from and to an external system.
  • Users can define additional views of their own to view and manage data for specific business purposes.

 

Figure 10 View Dashboard
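
To make the relationships among these building blocks concrete, here is a minimal conceptual sketch in Python. It is purely illustrative, not an EDMCS API; the class and property names simply mirror the objects described above.

    from dataclasses import dataclass, field

    # Purely conceptual model of the EDMCS data chain; not an Oracle API.

    @dataclass
    class NodeType:
        name: str                    # e.g. "Account"
        properties: list[str]        # e.g. ["Name", "Description", "Cost"]

    @dataclass
    class HierarchySet:
        node_type: NodeType          # nodes whose parent/child links it defines
        parent_child: dict[str, str] # child -> parent relationships

    @dataclass
    class NodeSet:
        hierarchy_set: HierarchySet  # the hierarchy (or list) it exposes
        top_nodes: list[str]         # which branches are included

    @dataclass
    class Viewpoint:
        name: str
        node_set: NodeSet            # each viewpoint works with one node set

    @dataclass
    class View:
        name: str
        viewpoints: list[Viewpoint] = field(default_factory=list)

    # Example: an Account dimension's data chain surfaced in a Finance view.
    account = NodeType("Account", ["Name", "Description"])
    hierarchy = HierarchySet(account, {"4000": "Revenue", "Revenue": "P&L"})
    node_set = NodeSet(hierarchy, top_nodes=["P&L"])
    view = View("Finance", [Viewpoint("Accounts", node_set)])
    print(view)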

 

 

Integration Benefits

Oracle has taken a major leap in improving integration in EDMCS. In DRM, integration with other Hyperion modules was possible only through table, flat-file, or API integration, or required custom code development. EDMCS introduces adapters for various components, such as PBCS, FCCS, and EBS, to connect directly to the respective component. Note: adapters for some components are yet to be released by Oracle; however, you can always integrate using the standard flat-file export.

 

Migration made simple

An existing on-premise Data Relationship Management installation can be migrated to EDMCS. The administrator needs to register the DRM application in EDMCS as a custom application and then import the dimensional structure. Note: Data Relationship Management 11.1.2.4.330 or higher is supported for on-premise-to-cloud migration.

 

Governance at a different level

Previously, on-premise DRM had a separate Data Relationship Governance (DRG) interface, but EDMCS includes governance as part of the application. In EDMCS, organizations use request workflows to exercise positive control over the processes and methods used by their data stewards and data custodians to create and maintain high-quality enterprise data assets. The workflow stages are Submit, Approve, and Commit. Finally, before committing changes, users can visualize the changes and their effect on the hierarchy.

 

Oracle Data Integrator Cloud Service (ODICS) – PART 1

Oracle is one of the prominent leaders in providing comprehensive data integration solutions, which include Oracle Data Integrator Cloud Service (ODICS), Oracle Data Integration Platform Cloud, Oracle GoldenGate, Oracle Enterprise Data Quality, Oracle Enterprise Metadata Management, and Oracle Stream Analytics. ODICS provides continuous access to timely, reliable, and heterogeneous data from both on-premises and cloud solutions to support analytical and operational business needs.

ODICS Overview:

  • ODICS provides high-performance data transformation capabilities with its transparent E-LT architecture and extended support for cloud and big data applications.
  • ODICS supports all the features included in Oracle Data Integrator Enterprise Edition within its heterogeneous cloud service.
  • ODICS provides an easy-to-use interface to improve productivity, reduce development costs, and decrease the total cost of ownership.
  • Oracle Data Integrator Cloud Platform is fully integrated with Oracle Platform as a Service (PaaS) offerings, such as Oracle Database Cloud Service, Oracle Database Exadata Cloud Service, and Oracle Big Data Cloud Service, to deliver on data needs.
  • ODICS can work with third-party systems as well as Oracle solutions, as shown in the screenshot below.

ODI On-Premises Integration with Cloud Services

 

Cloud E-LT Architecture for High Performance vs Traditional ETL Approach:

  • Traditional ETL software is based on proprietary engines that execute row-by-row data transformations, which limits performance.
  • By implementing an E-LT architecture based on your existing RDBMS engines and SQL, data transformations can be executed on the target server.
  • The E-LT architecture gathers data from different sources, loads it into the target, and performs transformations using the power of the database.
  • While utilizing your existing data infrastructure, Oracle Data Integrator delivers flexibility by using the target server for data transformations, thereby minimizing network traffic.
  • The E-LT architecture ensures the highest possible performance (a toy sketch contrasting the two approaches follows the figure below).

ODICS ELT vs ETL Architecture Differences
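
To make the contrast concrete, here is a toy sketch that performs the same aggregation twice: once row by row in application code (the traditional ETL engine pattern) and once as a single set-based statement executed by the target database (the E-LT pattern). SQLite stands in for the target RDBMS purely for illustration.

    import sqlite3

    # SQLite stands in for the target RDBMS; the point is where the
    # transformation runs, not the specific engine.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE src (account TEXT, amount REAL)")
    con.execute("CREATE TABLE tgt (account TEXT, amount REAL)")
    con.executemany("INSERT INTO src VALUES (?, ?)",
                    [("4000", 100.0), ("4000", 50.0), ("5000", 25.0)])

    # ETL-style: pull every row into the tool, transform in application code,
    # then write back row by row -- the proprietary engine becomes the bottleneck.
    totals = {}
    for account, amount in con.execute("SELECT account, amount FROM src"):
        totals[account] = totals.get(account, 0.0) + amount
    con.executemany("INSERT INTO tgt VALUES (?, ?)", list(totals.items()))

    con.execute("DELETE FROM tgt")  # reset for the second demonstration

    # E-LT-style: load first, then push one set-based statement to the target
    # so the database engine performs the aggregation close to the data.
    con.execute("""
        INSERT INTO tgt (account, amount)
        SELECT account, SUM(amount) FROM src GROUP BY account
    """)
    print(con.execute("SELECT * FROM tgt ORDER BY account").fetchall())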

 

Oracle Data Integrator Architecture Components:

The Oracle Data Integrator (ODI) architecture includes the following feature sets:

 

  • ODI SDK: A Java-based API for runtime and scheduling operations.
  • ODI Studio: The designer's studio used to manage connections, interface designs, development, and automation, including scheduling.
  • ODI Standalone Agent: Can be configured in a standalone domain and managed by the WebLogic Management Framework.
  • ODI J2EE: The Java EE agent, based on the Java EE framework, that runs on a Managed Server configured in a WebLogic domain. This feature set comes only with the Enterprise installation.
  • ODI Standalone Agent Template: Domain files required when Oracle WebLogic Server is not handling your Oracle Data Integrator installation. This feature set is available only with the Standalone installation type.
  • ODI Console: A web-based console available to assigned users as an alternative to certain features of ODI Studio.
  • FMW Upgrade: The upgrade assistant used to upgrade Oracle Data Integrator from 11g to 12c.
  • Repository Creation Utility: The Repository Creation Utility (RCU) is used to create database schemas and is included with the Standalone installation type. The Enterprise installation does not include RCU; RCU is instead included with the Oracle Fusion Middleware Infrastructure distribution.

 

ODICS Architecture

 

New / Enhanced Big Data and Cloud Features within ODICS:

ODICS continues to evolve with technological advancements, adding Big Data and Cloud Knowledge Modules for better transformations.

Big Data Features:

  • Spark Knowledge Modules (KM) Improvement: The emphasis is on producing high-performance, easy-to-read Spark code instead of handwritten scripts. Spark KMs now leverage the latest features, such as DataFrames from Apache Spark 2.x, to speed up ODI processes.
  • Spark KMs support in Knowledge Module Editor: The Spark KMs are now fully supported and can be customized to specific needs.
  • Hadoop Complex Types Enhancements: ODI enhances its support for Apache HDFS and the Kafka architecture.
  • Big Data Configuration Wizard: The Big Data Configuration Wizard is now updated with new templates for the current Cloudera distribution.

Spark KMs In Knowledge Module Editor

 

Cloud Features:

  • RESTful Service Support: ODICS can invoke RESTful services; Topology configurations include RESTful service connectivity, resource URIs, methods, and parameters.
  • Business Intelligence Cloud Service (BICS) Knowledge Modules: BICS is now supported out of the box in ODICS.
  • Connectivity with Salesforce: ODICS is fully certified with Salesforce.com and now includes a JDBC driver for this technology out of the box.

ODI Integration With Salesforce

 

In the next part, we will focus on more key feature highlights within ODICS.

 

Enterprise Data Management Series – Part 1

Welcome to our first post in a series about the world of metadata management. Enterprise Data Management provides a new way to manage your data assets. You can manage application artefacts that include Master Data (members that represent data, including dimensions and hierarchies), Reference Data (such as page drop-downs for ease of filtering in the front end), and Mappings (master data member relationships). Using these pre-built functions, you will be able to track master data changes with ease.

If your business is restructured to align entities such as Accounts, Products, Cost Centres, and Sales across multiple organizational units, you can create model scenarios, rationalize multiple systems, compare within a system, and more. You can maintain alternate hierarchies for reporting structures that differ from the current ERP system structure. When migrating an application to the cloud, you can define target structures and data mappings to accelerate the process. EDMCS also provides the ability to sync applications from on-premise to cloud or across cloud applications. The process involves the four C's below:

  • Collate: The first process involves collecting and combining data sources through application registration, data import, and automation.
    • Registration refers to establishing application connections with new data source(s).
    • Import refers to loading data into the application view(s).
    • Automation is built using the REST API features (see the sketch after this list).
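
As an illustration of such automation, the sketch below submits an import request over REST. Everything here (the endpoint path, payload fields, and names) is a hypothetical placeholder rather than the documented contract; consult the Enterprise Data Management REST API reference for your release.

    import requests
    from requests.auth import HTTPBasicAuth

    # All values below (URL, endpoint path, payload fields) are illustrative
    # placeholders; check the EDM REST API reference for the real contract.
    BASE_URL = "https://your-edm-instance.oraclecloud.com"
    AUTH = HTTPBasicAuth("service.user", "password")

    # Hypothetical request: trigger a dimension import into a registered application.
    payload = {
        "application": "Corporate Planning",
        "dimension": "Account",
        "fileName": "account_master.csv",
    }
    resp = requests.post(f"{BASE_URL}/epm/rest/v1/imports", json=payload, auth=AUTH)
    resp.raise_for_status()
    print("Import request accepted:", resp.json())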

Figure 1 Collate Process: Register

 

Figure 2 Collate Process: Import

 

  • Curate: Curate organizes the source data through a Request mechanism that helps load new data or change existing data. The Request mechanism involves four steps: Record, Visualize, Validate, and Submit.
    • Record: All appropriate actions are recorded together under the Request pane, with the primary action listed first.
    • Visualize: This allows you to visualize all the changes prior to committing them. This enables us to view model changes per request, study their impact, and make any final modifications to the request. Changes made to a view are shown in unique colours and icons so that you can see which parts of the hierarchy or list were changed and which areas may be affected by the change.
    • Validate: Maintain the integrity of data during data entry with real-time validation that checks for duplicates, shared nodes, and data type or lookup violations. You can also run validation during the Record session.
    • Submit: Once validated, you can commit the changes.

Figure 3 Request Process

 

  • Conform: Conform is the process of standardizing rules and policies through the Validate, Compare, and Map process tasks.
    • You can run validations within and across viewpoints. Share application data within and across applications to create consistency and build alignment.
    • Compare viewpoints side by side and synchronize them using simple drag and drop within and across applications.
    • Map nodes across viewpoints to build data maps. Construct an alternate hierarchy across viewpoints using the copy dimension feature. Now the question arises: what is a viewpoint? A viewpoint is a subset of the nodes for you to work with.

 

Figure 4 Viewpoint and Various Actions

 

  • Consume: Consume defines where you move the changes to a target application. You can achieve this through download, export, and automation.
    • Download a viewpoint to make changes offline or to make bulk updates.
    • Export moves changes to other application(s) or syncs updated data to external target applications.
    • Using EPM Automate, you can load and extract dimensions from and within the EPM Cloud application.

 

Figure 5 EPMAutomate

 

 

How Account Reconciliation Cloud Service (ARCS) from Oracle can improve the efficiency of finance users

 

In this world of cloud computing and mobile applications, we have software applications to increase the efficiency of our personal and official tasks. Software applications are used for grocery lists, tracking heart rate, energy burned, and so on. It is interesting, then, that more than 60% of organizations are still performing manual reconciliations between systems and subsystems. The approval process is managed using emails, and there is limited visibility into accountability. Lack of governance in the account reconciliation process causes problems in audits. Accountants are not leveraging technology to decrease the burden of the monthly close process. With new IFRS standards and new requirements from regulators, this reconciliation burden is only going to increase. Account reconciliation is one of the major bottlenecks in the close process.

The good news is that Account Reconciliation Cloud Service (ARCS) from Oracle can help finance users perform automatic account reconciliations, manage the process efficiently, and avoid security risks. Using ARCS, finance users can close their books earlier and spend time on analytical activities. Since this is a cloud-based solution, there is no capital expenditure required. Once ARCS is implemented, it is deployed on the Oracle cloud and the IT department does not need to maintain it.

 

This best-in-class reconciliation solution has the Reconciliation Compliance and Transaction Matching modules in a single instance.

  1. Reconciliation Compliance: This module helps manage reconciliation processes and ensures that compliance requirements are met.
  2. Transaction Matching: This module increases efficiency and saves cost by automatically matching bulk, labour-intensive transactions.

The reconciliation process is executed in five steps in the Reconciliation Compliance module:

  1. Load Balances: Balances from source systems and subsystems can be loaded directly into ARCS, and the loading can be scheduled as per the client's requirements. Once data is loaded into ARCS, it can be drilled back to the source system. ARCS has prebuilt integrations with EBS, JDE, PeopleSoft, and other market-leading ERP systems.
  2. Automation Executed: Once data from source systems and subsystems is loaded, ARCS performs auto reconciliations based on account analysis, balance comparison, and custom reconciliation criteria. Custom attributes are assigned based on the rules defined, amortization is calculated automatically, and email notifications are sent to preparers and reviewers.
  3. Prepare Reconciliations: Preparers work on the assigned reconciliations by providing account explanations (Prepaid, Accruals) and performing balance-to-balance comparisons (AP/AR/Inventory/Bank Reconciliations/Depreciation). ARCS comes with 20 prebuilt common reconciliation templates, and the administrator can create new templates as per the organization's requirements.
  4. Review Reconciliations: Users can add comments and attachments to each reconciliation and must sign off on reviewed reconciliations. ARCS supports multi-level approval and rejection workflows.
  5. Monitor the compliance process: ARCS gives instant visibility into the progress and status of the reconciliation life cycle. It shows status charts for reconciliations pending with the preparer, pending with the reviewer, and closed. The ageing analysis dashboard gives information on the ageing of all transactions and the ageing buckets. Each reconciliation (including attachments and invoice-level details) can be extracted into a report binder, and these report binders can be shared with auditors.

In this way, ARCS efficiently manages the reconciliation process with visibility and automation for improved accuracy, security, and reduced risk.

If your organization has a manual account reconciliation process and wants insight into the real-time status of the reconciliation and close process, ARCS can help streamline it.

Transaction Matching: This component helps with the automatic matching of high-volume, effort-intensive transactions across one or more data sources. Examples: POS to bank, POS to cash and credit card, and bulk intercompany (AP/AR) transaction matching.

With highly flexible custom rules, the majority of transactions are auto-matched. The ARCS intelligence engine provides suggested matches for the remaining unmatched transactions.

Transaction matching has four high-level steps:

  1. Load Data: Transaction data from source systems (such as POS and bank) can be uploaded manually or on a schedule using Data Management. ARCS can also accept pre-mapped transaction data directly.
  2. Run Auto Match: User-defined custom rules run and automatically match the transactions between the two systems (see the illustrative sketch after this list).
  3. Confirm Suggested Matches: For unmatched transactions, the ARCS intelligence engine suggests matches. Users can accept or modify the suggestions to match transactions.
  4. Create Manual Matches: Users can manually match transactions by providing a reason code or adding adjustments. The above steps are repeated for every data load.
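
To illustrate the idea behind the auto-match step, here is a plain-Python sketch (not ARCS code) that pairs transactions from two sources on an exact reference, with amount and date tolerances. The fields, data, and tolerances are made up for the example.

    from datetime import date

    # Toy data standing in for two sources, e.g. POS and bank transactions.
    pos = [
        {"ref": "TXN-1", "amount": 100.00, "date": date(2024, 1, 2)},
        {"ref": "TXN-2", "amount": 55.25, "date": date(2024, 1, 3)},
    ]
    bank = [
        {"ref": "TXN-1", "amount": 100.00, "date": date(2024, 1, 3)},
        {"ref": "TXN-9", "amount": 17.80, "date": date(2024, 1, 4)},
    ]

    def auto_match(src, tgt, amount_tol=0.01, day_tol=2):
        """Match on exact reference, amount within tolerance, date within N days."""
        matched, unmatched = [], []
        remaining = list(tgt)
        for s in src:
            hit = next(
                (t for t in remaining
                 if t["ref"] == s["ref"]
                 and abs(t["amount"] - s["amount"]) <= amount_tol
                 and abs((t["date"] - s["date"]).days) <= day_tol),
                None,
            )
            if hit:
                matched.append((s, hit))
                remaining.remove(hit)
            else:
                unmatched.append(s)  # candidates for suggested or manual matching
        return matched, unmatched

    matched, unmatched = auto_match(pos, bank)
    print(f"{len(matched)} matched, {len(unmatched)} left for suggested matches")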

 

ARCS provides a reconciliation balancing report for operational controls and has prebuilt reports on open/closed adjustments and supported transactions.

With the above-mentioned capabilities, ARCS can really help decrease the burden on finance users and increase the efficiency of the close process.

We can help implement this best-in-class solution as per the business requirements of your company.