There are many studies of independent advisory firms, and some of them can be quite helpful. But using them also presents significant challenges to owner-advisors who want to grow or run their firms more successfully.
For starters, few of them are actual “studies,” conducted by researchers who are familiar with their subject firms and who observe how those firms change over time. Instead, most are really surveys: unaudited data voluntarily supplied by owners or employees at the participating firms. This, of course, can lead to questionable data for a number of reasons, including the temptation to make one’s firm look better than it is and the inclination of busy owners to “ballpark” numbers they don’t actually track.
What’s more, the sample size of many of these studies is small relative to the total number of independent firms, which introduces randomness into the data, along with a skew toward more successful firms, since less successful firms are also less willing to participate. Sadly, that skew can have a considerable effect on the most valuable aspect of these studies: tracking changes in the subject firms over a number of years, as a whole or in subsets.
Finally, there are problems with the data themselves. Job descriptions in the independent world vary widely, with little consistency about what firms call the various levels of advisors, from associate to lead to senior, or about how firms divide their multitude of tasks, such as compliance, HR duties, training, rainmaking, supervision, marketing and finance. That makes comparison between any two given jobs difficult at best.
The best of the firms that produce these studies attempt to control or adjust for these sampling and data-gathering problems. Drawing on their experience with and understanding of the workings of advisory firms, they discount data deemed unrealistic or flag it as an outlier. Probably the best advisory industry studies were the old Moss Adams annual reports, which were produced by a dedicated team of professionals who understood the advisory business and attracted well over a thousand participating firms each year. Unfortunately, that team has long since disbanded, but two of its members, Dan Inveen and Eliza De Pardo, have continued conducting studies. While their sample size is quite a bit smaller and their methodology a little different, the annual studies they produce are, in my opinion, the best in the industry today. Their current offering, “The 2013 FA Insight Study of Advisory Firms: People and Pay,” is no exception.
As I mentioned, FA Insight’s survey participation is a bit of a problem: this year, only 211 firms met its standards for complete submission. That’s a small sample, to be sure, but equally important is whether they are the same firms from year to year. A quick look at the breakdown of participants in the 2013 study compared with those in the 2009 study reveals that the share of firms generating $500,000 or less in annual revenues fell seven percentage points (from 28% to 21%), the share in the $500,000 to $1.5 million category held level at 36%, and the share with more than $1.5 million grew seven points (from 36% to 43%). That pattern suggests the same firms did consistently participate, their revenues growing along with the general economic recovery.
The breakdown of the participants’ business models tells a similar story. While the concentration of RIA and “primarily RIA” firms grew from 69% to 80%, the share of brokerage firms shrank from 29% to 13%. This would be consistent with both the growth in revenues and the trend toward asset management, suggesting that the firms in the FA Insight studies over the past five years have, for the most part, been the same or very similar firms. Here’s how the FA Insight folks explained it: “These data represent medians across the respective groups of firms that participated in each study over the past five years. Some variability in these series over the years may simply result from the changing makeup of firms that participated in the study in any given year.”