If you’ve stumbled across this blog first, don’t forget to check out Parts I and II. The purpose of this series is to offer insight into the mapping process and to explain why a dimension’s mappings can grow to an abnormal size. Part III details the causes of large-volume mappings within a dimension, as well as efficient ways to deal with mappings that have grown disproportionately.
In Part II, you saw a diagram of the various causes that lead to a large mapping volume. Let’s review some of those possible causes. Below, I’ve highlighted a few sample questions for the major points in each section of the diagram.
Inefficient Mapping Design
- Is there too much standardization in the current mapping process?
  - For example, is there an overuse of master maps? Is the mapping process effective, or has standardization across maps led to the large mapping volume that exists?
- Is the mapping design utilized efficiently?
  - Do you have implicit rules? Does your current process have people in place to create effective implicit rules that catch members which would otherwise require explicit mappings? Is there overlap among the explicit mappings?
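To make the explicit-versus-implicit question concrete, the sketch below uses Python to show how a single wildcard ("Like"-style) rule can cover many explicit mappings, making those explicit entries redundant. This is an illustration only, not FDM syntax; the account codes, targets, and the `redundant_explicits` helper are all hypothetical.

```python
from fnmatch import fnmatch

# Hypothetical sample data: explicit source-account -> target mappings,
# plus one wildcard ("Like"-style) implicit rule. Codes are made up.
explicit_maps = {
    "10100": "Cash",
    "10110": "Cash",
    "10120": "Cash",
    "40100": "Revenue",
}
implicit_rules = [("101*", "Cash")]  # (pattern, target)

def redundant_explicits(explicit_maps, implicit_rules):
    """Return explicit entries already covered by a wildcard rule
    mapping to the same target -- candidates for cleanup."""
    redundant = []
    for source, target in explicit_maps.items():
        for pattern, rule_target in implicit_rules:
            if fnmatch(source, pattern) and target == rule_target:
                redundant.append(source)
                break
    return redundant

print(sorted(redundant_explicits(explicit_maps, implicit_rules)))
# The three 101xx accounts are covered by the "101*" rule
```

In this toy case, one implicit rule replaces three explicit mappings; scaled across thousands of accounts, that is exactly the kind of overlap review that keeps mapping volume in check.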
Complex Mapping Process
- Are mappings frequently added, changed, and deleted from month to month?
- How does the mapping period affect the close process, and is the mapping period used effectively so as not to result in redundant mappings?
- What feedback do the “owners” of the general ledgers receive? Are there any issues related to the use of the import source string? Are any HFM codes unused, or are source keys incorrectly mapped to them?
- Is each “owner” responsible for his or her own mapping requests? Is there a breakdown in communication between the general ledger owner and the individual processing the requests?
Uncategorized GL Data
- How are the general ledgers governed when it comes to mappings?
- When a mapping kicks out, is there a process to correctly fix the mapping and ensure proper data validation, or is the mapping simply assigned a default member?
Missing TBO Training
- Are there enough Hyperion Support Team (HST) resources available to maintain mappings?
- Is there consistent communication between the “owners” and those who govern the master maps? What issues could exist?
- Does training exist on the use of explicit and implicit mappings and the general process that the Trial Balance Owners (TBOs) go through when handling mappings? What is the feedback from the training? Do the TBOs or “owners” understand the mapping process?
Frequent Metadata Changes
- Are frequent changes in an undeveloped system causing the large mapping volume?
- How are such changes handled?
- If Data Relationship Management is used, is it utilized to manage mapping integrations? Is there a check before requests are processed?
Undeveloped Close Process
- Is there a maintenance or review process in place to ensure data integrity during the close process?
The purpose of this blog is not to give concrete solutions to the issue of a large mapping volume; rather, it aims to offer perspective on how you view disproportionate member mappings and to assist your review of all the potential areas that can lead to huge growth in mappings. Do not look at the issue only at face value (i.e., explicit, In, Between, Like, and multi-dimensional mappings); instead, dive deeper into understanding and identifying the source of the issue.
If you are considering a migration to FDMEE, you may benefit from this upcoming webinar. Register today for Denis Gray’s webinar, Making the Move from FDM Classic to FDMEE.
Disclaimer: This article is intended to be a resource only and is not intended to be nor does it constitute legal product advice. Any recommendations are based on our direct experience in this environment during the time in which this was posted.