To fully understand whether their costs are "reasonable," plan sponsors must have reliable and accurate data upon which to base their conclusions. A benchmarking report released Sunday by PlanTools, a technology platform that provides risk management solutions to retirement plan service providers and investment professionals, offers analysis based on real data, rather than hypothetical information from survey respondents.
David Witz, AIF, managing director of PlanTools LLC, argued in the report that benchmark statistics compiled from standard fee schedules overstate the fees actually charged, because standard schedules do not reflect the discounts providers negotiate.
"The resulting survey data creates an artificially higher average expense scenario for comparison by the buyer, making it easier for a service provider to beat the surveyed mean or median reported expense," he wrote in the report.
PlanTools benchmarks fees and compares them to other providers of similar size without attempting to provide a value statement for providers. Users can determine value for themselves by comparing the number of services offered; evaluating the complexity of services offered; determining which services have the highest degree of difficulty or risk; and ranking which services are most important to the plan.
Witz compared relying on a third party's subjective value analysis to buying a cancer cure that doesn't have approval from the FDA.
"Relying on value assessments structured around a third party’s subjective scoring system that lacks industry vetting and acceptance is similar to buying a cure for cancer that does not have FDA approval," he wrote. "Plan sponsors in the crosshairs of a plaintiff attorney for unreasonable fees are particularly susceptible to relying on flawed survey methodologies."
Witz identified four challenges to effectively evaluating benchmark statistics.
Sample size – One challenge in analyzing any statistics is knowing whether the sample size is large enough to produce reliable results. The report notes that no studies address sample size for benchmarking the reasonableness of retirement plan fees, but it points to guidance from the AICPA on auditing retirement plans. Auditors recommend that a small plan with 100 participants use a sample size of 25%, or 25 participants. Plans with between 1,500 and 100,000 participants, though, require a sample of just 60 participants, or between 4% and 0.06%.
Vetting data – The report asks who is vetting data in benchmark reports to confirm its accuracy, adding that it's "unreasonable" to expect recordkeeping systems and "especially an IRS Form 5500" to provide full and accurate data from a wide range of service providers.
Plan size ranges – The report observes that as a database grows, additional plan size ranges need to be added to keep costs in perspective. For example, very small plans are subject to minimum recordkeeping and administrative fees that can inflate the cost as a percentage of assets, according to Witz.
Not enough criteria – Databases that only provide plan size by assets and number of participants offer a limited view of expenses. A robust database should include the SIC industry code, region and average balance size, in addition to the assets and participants in a plan.
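The percentages in the sample-size and plan-size points above come down to simple ratio arithmetic. A minimal sketch of both calculations follows; the flat-fee and asset figures in the second part are assumed for illustration and are not taken from the report:

```python
# 1) Audit sample size as a share of plan participants: a fixed sample of 60
#    is 4% of a 1,500-participant plan but only 0.06% of a 100,000-participant
#    plan (the range cited from the AICPA guidance).
sample = 60
for participants in (1_500, 100_000):
    share = sample / participants
    print(f"{participants:>7,} participants -> sample is {share:.2%} of the plan")

# 2) A flat minimum recordkeeping fee inflates cost as a percentage of assets
#    for very small plans. The $1,500 fee and asset levels are hypothetical.
minimum_fee = 1_500  # assumed flat annual fee in dollars
for assets in (100_000, 10_000_000):
    print(f"${assets:>10,} in assets -> fee equals {minimum_fee / assets:.3%} of assets")
```

The same 60-participant sample spans two orders of magnitude as a percentage, and the same flat fee is two orders of magnitude more expensive, relative to assets, for the small plan, which is why the report argues for more plan size ranges as a database grows.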
The benchmark report is available here.