Recently I had a client decide to implement a major architecture change to their production environment. Of course, this change was going to happen during financial close, and on a weekend. As managers of EPM Systems surely know, there is no ‘good time’ to push major system changes. In this case, the change was moving all the EPM components to a new database (DB) server.
While working with a client who reported frequent slowdowns when copying data between schemas, we asked the DBAs, as well as the UNIX team, to investigate. Both teams reported that the individual database servers were performing fine, with plenty of resources available to them. However, after a number of conference calls, we overheard one team member say their HOST server was at or near 100% CPU usage, and had been for some time. As the host was an Exadata 3 server, we had not anticipated being subject to shortages of system resources. Unfortunately, without visibility into the host server, we had to rely on reports from the teams maintaining the individual instances. Had we not specifically asked for a screenshot of the host server’s performance statistics, we would never have been able to confirm that the issue was with the database server after all. As always: trust, but verify.
Task Flows in Classic HFM are interesting beasts. If your clients rely on them to run consolidation packages, as ours do, they could be in for some trouble. If there is any drop in communication between HFM and Shared Services, your results will not be what you expect.
We discovered a few interesting things about consolidation tasks and HFM at a recent client engagement. First, there is no “keep-alive” between HFM and the Oracle HTTP Server, so anything relying on a keep-alive will time out and cause issues later. Load balancers would need their keep-alive times adjusted, and the WLIOTimeoutSecs entry in OHS’s mod_wl_ohs.conf file would need to be raised as well.
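As a rough illustration, the WebLogic plug-in timeout lives in a block like the one below. The cluster hosts, ports, and the 3000-second value are placeholders for illustration only; the appropriate timeout depends on how long your longest consolidations actually run.

```apache
# Illustrative mod_wl_ohs.conf fragment -- host names, ports, and the
# timeout value are examples, not recommendations.
<IfModule weblogic_module>
    WebLogicCluster hfmhost1:7363,hfmhost2:7363
    # Raise the plug-in's wait-for-response timeout so long-running
    # HFM requests are not severed mid-consolidation.
    WLIOTimeoutSecs 3000
</IfModule>
```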
When comparing two roughly equivalent environments, UAT and Staging, we kept running into a performance discrepancy we were unable to account for. We had the same data, rules, etc. in each HFM application, the same tuning parameters applied, and similar server host capabilities. Yet we could not run a complex Task Scheduler instruction set in the same amount of time in each environment: UAT would take 90 minutes longer than Staging to complete the few dozen steps in the Task.
This blog illustrates and walks you through the process of creating a CSV extract file from your Hyperion applications. For this example, I’m using Oracle’s COMMA4DIM sample application. This is also useful for those who wish to use their Hyperion applications as a data source.
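If your extract arrives as a tab- or space-delimited column export rather than a true CSV, a short script can reshape it. This is a minimal sketch, assuming a tab-delimited export with one record per line; the sample rows and column layout are hypothetical, not from COMMA4DIM itself.

```python
import csv
import io

def export_to_csv(export_text, delimiter="\t"):
    """Convert a delimited column export into CSV text.

    Assumes one record per line with a fixed delimiter; the csv module
    handles quoting of any member names that contain commas.
    """
    reader = csv.reader(io.StringIO(export_text), delimiter=delimiter)
    out = io.StringIO()
    writer = csv.writer(out)
    for row in reader:
        # Trim stray padding that fixed-width exports often leave behind.
        writer.writerow([field.strip() for field in row])
    return out.getvalue()

# Hypothetical two-record export: Entity, Account, Period, Amount
sample = "East\tSales\tJan\t100.0\nWest\tSales\tJan\t250.5\n"
print(export_to_csv(sample))
```

From there the CSV text can be written to a file and handed to whatever downstream system is treating Hyperion as a data source.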
Oracle bundles an installer for WebLogic with their Oracle EPM suite of products, which ‘phones home’ to Oracle for more information. Many of us deal with clients whose firewalls block all attempts to communicate from the installation server to Oracle.com or other outside Internet addresses. In these cases, the installer for Oracle EPM informs us of the communication error and asks us to provide proxy information so it can reach Oracle.com. Refusing to provide the proxy information results in another nag screen, after which we can move on with our installation.
An interesting issue appeared for me recently. What happens when, after a server crash and restart, Essbase refuses to start? The service shows as Running, but Essbase isn’t listening and doesn’t respond. The start attempt produces no errors and no logs; it simply does not start.
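A quick first check in this situation is whether anything is actually accepting connections on the Essbase agent port (1423 by default). Here is a minimal sketch of a generic TCP probe; the host name is a placeholder, and your agent port may differ if it was changed at install time.

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The Essbase agent listens on TCP 1423 by default. If the Windows
# service shows Running but this probe returns False, the agent
# process never actually came up -- which matches the symptom above.
# print(is_listening("essbase-host", 1423))
```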
In Part 1 of this blog, we discussed using the “runbatch” command to process files for three Locations. In Part 2, we will discuss using the “rundatarule” command to accomplish the same thing.
As with many solutions… it depends. “What a surprise,” you say. Your choice will probably be based on your needs, your preferences, and how comfortable/proficient you are with writing batch scripts. If you need to execute a Data Load Rule (DLR) for one location as part of a batch script, the answer will probably be “rundatarule.” If you need to execute DLRs for multiple locations, such as part of a batch script to process monthly Trial Balances, then the answer becomes more subjective.
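For the multiple-location case, one option is a small wrapper that builds a “rundatarule” command line per location and runs them in sequence. The sketch below is an assumption-heavy illustration: the rule names and password-file path are hypothetical, and the exact argument order for rundatarule varies by release, so verify it against your FDMEE admin guide before using anything like this.

```python
import subprocess

def build_rundatarule_cmd(rule, start_period, end_period,
                          user="admin", password_file="pwd.txt",
                          import_flag="Y", export_flag="Y",
                          exec_mode="SYNC", script="rundatarule.bat"):
    """Build one rundatarule command line.

    The argument order here is an assumption for illustration; check
    your release's documentation for the exact expected sequence.
    """
    return [script, user, f"-f:{password_file}", rule,
            import_flag, export_flag, start_period, end_period, exec_mode]

# One DLR per location -- rule names are hypothetical placeholders.
rules = ["LOC1_TB_Load", "LOC2_TB_Load", "LOC3_TB_Load"]
cmds = [build_rundatarule_cmd(r, "Jan-2015", "Jan-2015") for r in rules]
for cmd in cmds:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually execute
```

Running the rules SYNC (one finishing before the next starts) keeps the load sequential, much like the “runbatch” serial behavior from Part 1; whether that trade-off is worth the extra scripting is exactly the subjective part.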