Executive Summary
Financial advisors have many potential sources of advice on running a practice and serving clients, from fellow advisors to coaches to academic researchers and others. Sometimes, it can be tempting to rely solely on the advice of those with an ‘in-the-trenches’ perspective, as these individuals have actually lived out a similar experience and can appreciate some of the nuanced challenges that an industry outsider might not. For example, in the financial planning context, it can be difficult for those who have not worked with clients to understand the challenges in gathering needed financial data from them. However, there can also be value in insights that come from outside perspectives, and it’s even possible that ‘those who have done’ could be extremely misguided about what they believe works.
The experience of bodybuilders provides an interesting perspective on this dynamic, as they are incentivized to learn as quickly as possible and to do what works to achieve their specific bodybuilding goals. One of the important questions for these athletes is how long they should rest between sets of exercise to promote maximum muscle growth. But while a survey of bodybuilders found that shorter rest periods (30-60 seconds) were considered more advantageous for building muscle, academics using randomized controlled trials determined that longer intervals between sets (3 minutes) actually led to greater muscle growth. This is an example of outside research serving as a useful check on the accepted conventional wisdom among practitioners.
At the same time, bodybuilders benefit from very fast feedback loops (they can generally see the results of changes to their workout routine within days or weeks) and can typically identify the causal reason for those results (e.g., changing the amount of rest time between sets). Success as a financial advisor, by contrast, is far more multifaceted and subject to lots of random noise, and its feedback loops are far longer, since effective business development practices can take months, quarters, or years to provide any real results. For example, if an advisor has a good streak of converting prospects into clients, it could be because of a change they made in their discovery meeting process, or it could just be the result of random noise (e.g., prospects that approached the firm in a given month just happened to be more engaged, while those who approach the firm next month might be less so).
This raises the question for advisors of how to evaluate practice management advice and industry research and which sources of advice to trust. To start, it can be sensible to give greater weight to advice or resources from those who have been in the trenches trying to solve a business issue, though it would be imprudent to cut off the potential to learn from other avenues entirely (e.g., a planning template designed by an outsider but tailored to the needs of a firm’s clients could be more useful than one designed by another planner for a different type of client). And when reviewing academic research, signs advisors can look for to assess the study’s reliability and relevance to their practice include larger sample sizes, participants who are similar to their own clients, and qualified researchers with industry experience.
Ultimately, the key point is that while those with on-the-ground expertise often provide helpful practice management advice, academic researchers and others can also provide valuable perspectives. Because as the example of bodybuilders shows, sometimes commonly accepted wisdom among practitioners can benefit from being challenged by outside research!
There’s a refrain that seems to be increasingly common among some circles of financial advisors that suggests you should only learn from people who have actually done what you want to accomplish. The thought is that while it may be worthwhile to build a relationship with a coach or other professionals who can help you with your goals, you shouldn’t trust anything that comes from academics, consultants, coaches, or anyone else who hasn’t actually already done what you want to accomplish. Some have even gone so far as to say that advisors should pull the Form ADVs of other advisors selling resources like financial planning templates to look at their assets under management and decide whether they are a truly reliable source for such resources.
There is certainly some logic in learning from those who have accomplished what you are striving for and can give a real ‘in-the-trenches’ perspective on the challenges one would face on this journey. If you haven’t actually lived out an experience, it can be easy to fail to appreciate some of the nuanced challenges.
For instance, in my own experience teaching the coursework required for CFP certification, I’ve often found that students who haven’t actually worked as financial advisors will drastically underestimate the challenges associated with the data-gathering step of the financial planning process. In theory, it sounds easy enough to just get all of someone’s financial information, but as I suspect most practicing financial advisors would attest, it’s extremely rare for clients to be organized and prepared enough to just hand over all of the information you need to put together a plan. In contrast to the case studies that students pursuing the CFP mark often encounter in their coursework (which typically lay out all of the data needed to prepare a plan), in practice a great deal of time and effort goes simply into gathering the information you need to start the financial plan.
So there’s a very real logic in learning from ‘those who have done’. However, there can also be value in insights that come from outside perspectives, and it’s even possible that ‘those who have done’ could be extremely misguided about what they believe ‘works’.
Interestingly, bodybuilders provide a fascinating example of one such case where research has helped right some false beliefs that were wildly popular among practitioners.
Bodybuilding: The Ultimate ‘Skin-In-The-Game’ Case Study
Bodybuilding provides an interesting context to explore the errors of practitioners because it had long been thought to be an example of an activity where participants are almost perfectly incentivized to ‘get it right’. Scott Alexander, psychiatrist and blogger, describes these incentive structures well:
…some people hold up (no pun intended) bodybuilders as an almost-perfectly-incentivized “scientific” community. Every bodybuilder has his own skin in the game – based on getting the science right or wrong, he’ll be better or worse at what he does. There’s a quick feedback loop – you can see if you’re gaining muscle or not. And success is easy to observe – check if the person giving you advice has arms that look like tree trunks.
Alexander (summarizing Nassim Nicholas Taleb, author of the book Skin In The Game) then contrasts this with the academic scientific community:
…contrast this with the academic scientific community, whose incentive is to publish papers that have low p-values so they can get tenure – no skin in the game, no incentive to get anything right, no easy way to check success or failure.
Alexander explains that bodybuilders have what we might refer to as “metis” – meaning the practical wisdom that can arise from being part of a community working toward shared goals with good incentives and lots of shared wisdom:
…bodybuilders have metis – the practical wisdom that comes from being a tight-knit community sharing a common goal, watching each other succeed or fail, and passing down lore to the next generation. It’s the same wisdom that lets primitive tribes have almost supernatural knowledge of how to safely prepare plants in their environment or build a bow with exactly the right kind of mountain goat sinew. Way better than the on-paper numerical knowledge that Western science can produce.
In light of these different incentive structures, it is easy to understand why many believed that bodybuilders could outperform academics when it comes to understanding what makes someone a successful bodybuilder. Every day someone enters the gym is an opportunity for real-world experimentation with the potential to see results. Those who deviate from the norms might discover something that works better, and because everyone has a strong incentive to stick with what works, effective practices should be identified and spread quickly.
If anything, we might expect that in an environment like this, academics would mostly be riding the coattails of practitioners and merely be engaging in an exercise of describing why things that are thought to work do, indeed, work. We certainly wouldn’t expect that such a large, bottom-up community – with norms that emerge from the experiences of many individuals – would develop lore that is actually counterproductive to their shared goals.
However, recent research has suggested that this may be exactly the case…
Chasing The Pump: Duped By Physiology?
For those who may not be familiar with bodybuilding, an important decision that a bodybuilder will make when designing a workout is how long to rest between sets (e.g., 30 seconds? 1 minute? 2 minutes? 3 minutes?). Bodybuilders are very interested in figuring out what the ideal resting pattern is since the goal is to build muscle, and an ideal workout design should help them best do that.
If you’ve never lifted weights, it may also be important to understand a concept referred to as ‘feeling a pump’. Generally speaking, the faster you repeat an exercise, the more you are going to experience a fatigued, swelling feeling in your muscles. Bodybuilders refer to this as a ‘pump’ (with the caveat that you can’t lift so fast that you are completely fatigued to the point that you can’t lift anything). Notably, this physiological response feels productive, and that might be, in part, why the lifting practices historically recommended by so many bodybuilders included shorter rest times that would help induce this feeling.
Scott Alexander draws attention to the #1 response (based on user votes) to the question, “How long should you rest between sets for maximum growth?” on Bodybuilding.com, which was that people training for strength should wait 3-5 minutes between sets whereas people training for muscle growth should wait 30-60 seconds. In other words, in a large community of bodybuilders looking to share wisdom with one another, the top recommendation was that shorter rest periods (30-60 seconds) were more advantageous for muscle growth. Furthermore, even organizations such as the American College of Sports Medicine had generally recommended rest periods of 1-2 minutes for promoting muscle growth.
Interestingly, however, the academic research does not support this position that shorter rest periods are better. In fact, if anything, the literature suggests the opposite. Moreover, that’s not because there are only one or two studies on the topic that give bodybuilders little in terms of quality information to rely on. There have even been randomized controlled trials testing 1-minute resting intervals against 3-minute intervals, finding that those who rested 3 minutes experienced greater muscle growth.
In other words, despite having tremendous incentives to get this right, bodybuilders relying on their own collective wisdom were getting this wrong. And academics who admittedly had little incentive to get it right were pointing out a sizeable error in the bodybuilders’ ways. Fitness researcher Menno Henselmans concludes a review of research on resting periods as follows:
In conclusion, your rest interval matters primarily because it affects your training volume. As long as you perform a given amount of total training volume, it normally doesn’t matter how long you rest in between sets. If you don’t enjoy being constantly out of breath and running from machine to machine, it’s fine to take your time in the gym.
The challenge with short rest intervals is that they make it difficult to maintain the total training volume one would achieve with longer rest intervals, and, as a result, those who use shorter intervals tend to see less muscle growth!
The Challenge Of Getting It ‘Right’ In Financial Planning
There are some unique aspects of financial planning that make ‘getting it right’ more difficult than in bodybuilding. For instance, bodybuilders have very fast feedback loops. You make a positive change in your workout routine, and you might see those effects only days or weeks later. Furthermore, those effects tend to be more objective (e.g., you can see and measure muscle growth) and less prone to arise for purely random reasons. While natural fluctuations in testosterone or other physiological factors could certainly confuse some bodybuilders, it is quite rare for someone to just start building muscle unintentionally.
By contrast, success as a financial advisor is far more multifaceted and subject to lots of random noise. Nick Murray, author of several books for financial advisors, talks about this at length in his book, Game of Numbers. He suggests that it is often better for advisors to measure inputs (e.g., how many prospects did I reach out to today?) since there is a lot of random noise associated with metrics that examine outputs (e.g., which prospects became clients?). Feedback loops also typically take much longer for financial advisors. Effective business development practices can take months, quarters, or years to see any real results.
Furthermore, unlike a bodybuilder who may work out daily and can mix up their practices frequently, it is much harder to try and experiment in an advisory context when business development initiatives might take months or years to be fully carried out, and often advisors are changing many other factors in their business at the same time. As a result, compared to bodybuilding, it may be much easier to get ‘faked out’ as an advisor and mistake the true cause of some business success, including the fact that the success may be nothing more than random noise and hitting a particularly good or bad streak of prospective clients.
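To make the ‘random noise’ problem concrete, here is a minimal simulation using entirely hypothetical numbers (a steady 25% close rate and 120 prospects per year, which are illustrative assumptions, not industry figures). It estimates how often pure chance alone produces an apparent ‘hot streak’ – a run of 10 consecutive prospects closing at double the advisor’s true rate – that an advisor might mistakenly attribute to a process change:

```python
import random

def simulate_streaks(n_prospects=120, p_convert=0.25,
                     window=10, trials=2_000, seed=42):
    """Estimate the probability that an advisor with a constant 25%
    close rate sees at least one 10-prospect stretch converting at
    50% or better -- a 'hot streak' caused by chance alone."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Each prospect converts independently with the same probability
        outcomes = [rng.random() < p_convert for _ in range(n_prospects)]
        # Scan every rolling window of 10 prospects for a >=50% run
        if any(sum(outcomes[i:i + window]) >= window * 0.5
               for i in range(n_prospects - window + 1)):
            hits += 1
    return hits / trials

print(f"Chance of at least one 'hot streak' in a year: {simulate_streaks():.0%}")
```

Even though nothing about the underlying process ever changes, such streaks show up in a large share of simulated years, which is exactly why Murray’s advice to measure inputs rather than outputs is so sensible.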
Another unique challenge of evaluating practices within financial planning is how much harder it is to observe and research practices in the first place. In the case of bodybuilding, it’s fairly easy to carry out a 10-week randomized controlled study that can shed insight on questions like whether 1-minute or 3-minute resting periods lead to greater muscle growth. It’s relatively easy to recruit participants (pretty much anyone could participate), it is relatively easy to monitor workout behavior, and it is relatively easy to measure the outcomes that result.
By contrast, imagine the challenges of trying to carry out a study of pretty much any advisor's behavior. Not only is the pool of potential participants much smaller, but it would be harder to measure both the behavior and outcomes of interest. Moreover, advisors would have little incentive to stick with a practice that appeared to be detrimental (or even just less helpful) to their business. You could pay college students enough to stick with a workout regimen for 10 weeks and measure the outcomes even if they thought the regimen might be suboptimal. It would be much harder to compensate an advisor enough to stick with a study if something appeared not to be working well for them.
Studies within firms may be one exception to this. For instance, a firm could A/B test a newsletter design or deliverables they present to clients; larger firms with call centers could even A/B test phone scripts or other tools they use. Unfortunately, however, in this case, firms have little incentive to share their results. If they find that Email A performs better than Email B, it may not be in the firm’s best interest to share that finding with their competitors.
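As an illustration of what such an internal A/B test involves, here is a sketch of a standard two-proportion z-test, the textbook way to check whether Email A really outperformed Email B or whether the gap is within the range chance alone would explain. The click-through counts are hypothetical:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates from two email
    variants. Returns the z statistic and the p-value (the chance of
    seeing a gap this large if the two emails actually perform equally)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value via the normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: Email A got 60 clicks per 1,000 sends, Email B got 45
z, p = two_proportion_z_test(60, 1000, 45, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Notably, with these hypothetical numbers a 6.0% vs 4.5% click-through gap is not statistically significant at the conventional 5% level, underscoring how easy it is to declare a ‘winner’ on a difference that noise alone could produce.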
Of course, the flip side to the difficulties of conducting research is that we also have reason to be skeptical about the perceptions of ‘those who have done’ in the first place. Successful people have been found to have a limited understanding of what actually led to their success, and given all of the difficulties mentioned here, we have even more reason to be skeptical of such self-assessments among advisors.
How To Evaluate Research And Resources As A Financial Planner
A key point here is that if bodybuilders can get it ‘wrong’ for so long in a better-incentivized environment, then financial planners – including those who have ‘done it’ before – can certainly get it wrong in a worse-incentivized one. We all have the potential to learn from one another. That said, we shouldn’t treat all perspectives as equal.
From a Bayesian perspective, it makes sense to give greater weight to those who have been in the trenches trying to solve a business issue. It’s reasonable to give preference to that perspective, but it would be imprudent to cut off the potential to learn from other avenues entirely. As professionals, we should also acknowledge that our professional judgment is crucial in situations like this.
While it would be foolish to outright dismiss a financial planning report template simply because the advisor selling it doesn’t have many clients, that fact is still worth being mindful of. If an advisor, based on their professional judgment, is confident a resource could be used beneficially in their practice, then this can be a good reason to move forward with it. However, if an advisor is torn between two template packages and one has been proven in the market whereas the other hasn’t, then that difference should also factor into the advisor’s decision.
When it comes to research specifically, there are some key factors advisors could focus on. While this list isn’t exhaustive, it does provide some key considerations that may be helpful:
- Does the person carrying out the research have industry experience? This could be helpful in avoiding ‘blind spots’.
- How many participants are in the study? Generally speaking, more is better, but the challenges of doing research in a financial planning context can limit the sample sizes that are feasible.
- How well does the typical participant match your own clientele? The more a study’s sample fits your own target clientele, the better.
- Is the research general in nature or something that could be very specific to a particular type of individual? If there’s no reason to believe that the sample used in a study would differ from your own clientele when it comes to the behavior/outcome of interest, then this would be a positive.
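As a rough illustration of why sample size matters when weighing a study’s reliability, here is a sketch of the standard margin-of-error approximation for a measured proportion (the sample sizes shown are hypothetical, and p = 0.5 is the conservative worst case):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion measured on a
    sample of n participants (p = 0.5 gives the widest, worst-case margin)."""
    return z * math.sqrt(p * (1 - p) / n)

# How much does precision improve as the sample grows?
for n in (30, 100, 400):
    print(f"n = {n:>3}: about +/-{margin_of_error(n):.1%}")
```

A 30-person study can easily be off by nearly 18 percentage points in either direction, while quadrupling a sample only halves the margin of error, which is why small-sample findings in financial planning research deserve extra caution even when they are the best evidence available.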
Ultimately, the main point is that the attitude among some that we can only learn from ‘those who have done’ lacks humility and curiosity, which could ultimately be harmful to achieving one’s goals. Advisors face some unique challenges in objectively measuring and researching business practices, but if even bodybuilders operating in “an almost-perfectly-incentivized ‘scientific’ community” can be led astray and need the input of researchers to help correct widespread misunderstandings, then certainly financial advisors are not immune to learning from the input of researchers and other non-practitioners, either.