As an update to CheckPoint’s May 10th blog post on Oracle’s latest EPM release, we are posting our latest findings. The testing and subsequent results come from our lab environment, using simple test applications, and may or may not be indicative of what we would expect to find in a “real-world” example. Your mileage may vary!
A challenge that is commonly overlooked when upgrading EPM is keeping your upgraded application in sync with your Legacy application after the initial upgrade, prior to go-live. The challenge comes from the requirement to continue day-to-day operations in the Legacy environment during the upgrade project timeline. At some point, the Legacy application must be "upgraded." The upgraded application then becomes a snapshot, or "point in time," of the Legacy application; any changes made in the Legacy environment beyond that point must be accounted for in some form or fashion in the new, upgraded environment and application.
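In practice, accounting for those post-snapshot changes starts with computing a delta between the point-in-time snapshot and the current Legacy metadata. As a minimal sketch (the member names are made up for illustration, and real dimension extracts would come from your outline export tool):

```python
# Sketch: compute the metadata delta between a point-in-time snapshot of the
# Legacy application and its current state, so the changes can be replayed
# in the upgraded environment. Member lists are simplified stand-ins for
# real dimension extracts.

def metadata_delta(snapshot, current):
    """Return (added, removed) members since the snapshot was taken."""
    added = sorted(set(current) - set(snapshot))
    removed = sorted(set(snapshot) - set(current))
    return added, removed

# Example: the Entity dimension at upgrade time vs. today.
snapshot = ["E100", "E200", "E300"]
current = ["E100", "E200", "E400", "E500"]  # E300 retired, two new entities

added, removed = metadata_delta(snapshot, current)
print("Apply to upgraded app -> add:", added, "remove:", removed)
```

The same comparison can be run per dimension on a schedule, giving you a running punch list of changes to replay before go-live.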
Disaster Recovery is a vast topic and one that gets a lot of press. It is also often misunderstood and many times inadequately implemented. Application owners want a Disaster Recovery solution that can be invoked at their whim, requires virtually zero downtime, results in zero data loss, and can be rolled back just as easily. IT Administrators want a system that can be fully replicated using in-house technology, costs nothing to implement or maintain, and can be failed over with almost no direct involvement. Neither of these is reasonable!
When EPMA is in the mix, it is critical to follow a strict change-control process when updating metadata. A recent client failed to heed this advice and has run into numerous challenges along the way.
I recently set up a Disaster Recovery strategy for an EPM client. This is one of many possible solutions.
CheckPoint is introducing an Infrastructure Assessment to add to its current list of Infrastructure Offerings.
Welcome back! As I discussed in my last blog, I have been helping a customer migrate to a cloud provider for EPM. The migration itself has proceeded well, but networking has been quite the ordeal. Ideally, in the cloud world you would establish a fairly flexible tunnel that supports routing of a sufficiently wide subnet to allow for flexibility in architecture. One of the "features" of the cloud is the flexibility to scale capacity on demand. The cloud provider wanted to scope subnets that allowed for the hosts that exist today, plus hosts they may have to "add" in an on-demand scenario, without having to reconfigure the VPN tunnel. The network provider was not prepared to handle this; they have not become "cloud centric" in their thinking. Their standards still required a 1:1 mapping between any cloud provider host and an IP address on the client side of the tunnel. This requires DNS for the client-side addresses to be managed at the client. The cloud provider has foo.cloudserver.net (10.0.1.2), the client side has foo.myclientserver.net (192.168.1.2, for example), and network address translation (NAT) is used to manage the mapping between the networks. Each time a server is added on the cloud side, the client must also add an IP address, firewall exceptions, routing changes, and DNS updates on their side. Not a flexible model.
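To make the overhead of that 1:1 model concrete, here is a small sketch that enumerates the client-side work triggered by a single new cloud host. The hostnames, addresses, and naming convention are invented for illustration:

```python
# Sketch of the 1:1 NAT model described above: every new cloud host forces a
# matching set of client-side changes. Hostnames and addresses are made up.

def client_side_changes(cloud_host, cloud_ip, client_ip):
    """Enumerate the client-side work triggered by one new cloud host."""
    # Assumed convention: same short name, client-owned DNS zone.
    client_host = cloud_host.split(".")[0] + ".myclientserver.net"
    return [
        f"allocate client IP {client_ip}",
        f"NAT rule: {client_ip} <-> {cloud_ip}",
        f"firewall exception for {client_ip}",
        f"route {client_ip} into the VPN tunnel",
        f"DNS A record: {client_host} -> {client_ip}",
    ]

# Adding a single on-demand server ripples into five separate changes:
for change in client_side_changes("bar.cloudserver.net", "10.0.1.3", "192.168.1.3"):
    print(change)
```

Contrast that with routing a whole spare subnet through the tunnel up front, where adding a cloud host requires no client-side change at all.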
To make the most of your and your implementation team’s time in preparation for your new EPM System installation, here is a quick list of the items we have found are most commonly overlooked. This is specific to an EPM installation on Windows 2008 and Windows 2008 R2:
The latest EPM release has been out for a few months now, and we’ve worked with several clients to perform upgrades-in-place of their existing environments. Thus far, the process has been far less problematic than previous upgrades. It appears Oracle has done a great deal of work making the process more seamless and less perilous.
The upgrade process for environments with HFM, FDM, and Financial Reporting does have an interesting change from previous installations. The installTool will not allow you to upgrade HFM or FR if your default DCOM settings are not correct; however, it does not explicitly point out that it is not upgrading those pieces. You must expand and verify every item on your product list that says ‘Installed’. If they are not all checked, you should see an explanation of why an item is not checked when you select it. In our case, the installer warned that the default DCOM settings were not correct. Cancel out of the installer and change the default DCOM authentication level to “Connect”. Our environment seemed to work properly before the upgrade with the DCOM default set to “None”; however, the new release seems to want a non-granular DCOM security scheme before allowing you to install.
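The default authentication level is normally changed through dcomcnfg (Component Services > My Computer > Properties > Default Properties), but it is ultimately stored in the registry under HKLM\SOFTWARE\Microsoft\Ole as LegacyAuthenticationLevel. As a sketch, a small helper can build the equivalent reg.exe command; “Connect” corresponds to level 2 in the documented RPC_C_AUTHN_LEVEL_* constants (verify against your own system before scripting this):

```python
# Sketch: build the "reg add" command that sets the machine-wide DCOM default
# authentication level. Values follow the documented RPC_C_AUTHN_LEVEL_*
# constants; dcomcnfg remains the supported way to change this.

DCOM_AUTHN_LEVELS = {
    "None": 1,              # RPC_C_AUTHN_LEVEL_NONE
    "Connect": 2,           # RPC_C_AUTHN_LEVEL_CONNECT
    "Call": 3,              # RPC_C_AUTHN_LEVEL_CALL
    "Packet": 4,            # RPC_C_AUTHN_LEVEL_PKT
    "Packet Integrity": 5,  # RPC_C_AUTHN_LEVEL_PKT_INTEGRITY
    "Packet Privacy": 6,    # RPC_C_AUTHN_LEVEL_PKT_PRIVACY
}

def reg_command(level_name):
    """Return the reg.exe command for the requested default DCOM auth level."""
    level = DCOM_AUTHN_LEVELS[level_name]
    return (
        r'reg add "HKLM\SOFTWARE\Microsoft\Ole" '
        f'/v LegacyAuthenticationLevel /t REG_DWORD /d {level} /f'
    )

print(reg_command("Connect"))
```

A reboot (or at least a restart of the affected services) is typically needed before COM picks up the new default.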
Once your installation and configuration steps are complete, FDM may need some additional work before it will successfully allow the Load Balance Configuration to proceed. In our case, we received the error “Unable to create Load Balance Manager object”. We found that the domain username/password needed to be added or modified for the FDM DCOM objects. Due to a peculiarity in the way DCOM picks up changes to these objects, it was necessary to remove the credentials from the objects, including clearing the “dots” from the password fields, set the identity to “The Launching User”, and close the object. We were then able to re-open it, add the correct ID/password, and get the Load Balance Configuration to proceed to the next step.
Users of previous versions of FDM will not be surprised by the need to remove/change DCOM users and permissions, but it is a good reminder to be vigilant, as Oracle has changed its FDM installation procedures to better fit its overall Hyperion installation framework. Review the documentation from the previous installation, as well as the READMEs from the current release’s documentation, in case there are other surprises, or issues that may have been resolved but forgotten since the last installation.
Lifecycle Management is most commonly used to promote changes from one environment to another. However, there’s no reason that you can’t use this same tool to recover back into the same environment. With that in mind, LCM can become a terrific backup solution.
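A simple way to operationalize this is to save a migration definition from Shared Services (“Define Migration” > save to file system) and schedule the LCM command-line utility to run it nightly. The utility and definition paths below are assumptions for a typical 11.1.2.x layout; adjust them for your environment:

```python
# Sketch: using the LCM command-line utility as a scheduled backup.
# The paths below are assumed examples, not defaults you can rely on.

from datetime import date

LCM_UTILITY = r"E:\Oracle\Middleware\user_projects\epmsystem1\bin\Utility.bat"  # assumed path
MIGRATION_DEF = r"E:\lcm\planning_backup.xml"  # definition saved from Shared Services

def lcm_backup_command(utility=LCM_UTILITY, definition=MIGRATION_DEF):
    """The utility takes the migration definition file as its argument."""
    return [utility, definition]

def run_nightly_backup():
    # A scheduler (Task Scheduler, cron) would call this nightly; the export
    # landing folder can then be archived under the day's date.
    print(f"{date.today():%Y-%m-%d}: running", " ".join(lcm_backup_command()))
    # On a real server: subprocess.run(lcm_backup_command(), check=True)

run_nightly_backup()
```

Archiving each night’s export folder with a date stamp gives you point-in-time artifacts that can be imported back into the same environment for recovery.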