Executive Summary
The financial advisory industry has faced many purported technological ‘threats’ over the past several decades. From the introduction of computers to the rise of the internet to the emergence of ‘robo-advisors’, there has been no shortage of innovations that would allegedly reduce the need for consumers to work with (human) financial advisors. But in reality, none of these advances meant the end of the advice industry; rather, they often made financial advisors more productive by increasing their efficiency with back-office tasks, from producing financial planning calculations more quickly and accurately to serving more clients across the country. Yet despite this history of technological advances actually benefiting the financial advisory industry, the emergence of powerful Artificial Intelligence (AI) systems, such as ChatGPT, has raised fresh questions about the future of human-provided financial advice.
While the capabilities of modern AI technology are quite impressive, it is important to recognize that AI systems would have to overcome significant trust hurdles before they would be in any position to replace human advisors. For instance, despite the rise of self-driving cars in recent years, survey data suggests that humans are hesitant about riding in them (or sharing the road with them) for safety reasons. This ‘trust penalty’ implies that self-driving vehicles would have to prove that they are significantly safer than human-driven cars (across a range of challenging driving environments, such as in a snowstorm) to achieve mass adoption. And because offering financial advice, like driving through snowstorms, often involves high risk and complexity, actually trying to replace human advisors with AI technology would be a terribly difficult and impractical place to start.
While AI systems are unlikely to replace human advisors anytime soon, their functionality could still help advisors operate more efficiently. For example, ChatGPT’s AI can be thought of as a form of calculator that takes inputs (e.g., various information or data that it’s fed or that it has ‘ingested’ itself) and turns them into useful outputs (e.g., written responses that conform to how humans typically communicate). In this way, ChatGPT can be used as a tool that helps human advisors convey important financial concepts to clients in writing, faster and more easily. From the human perspective, the reality is that it’s typically far faster to edit something that already exists than to create it from scratch.
For instance, a human advisor could prompt ChatGPT to write an email in response to a client who is concerned about the current state of the market and wants to sell all of their equity holdings. And while it’s unlikely that advisors would simply copy and paste ChatGPT-generated text into a client email without checking its output, prompting ChatGPT and editing its output for accuracy and personalization is still likely to be faster than composing an email response from scratch. Further, beyond producing client emails, advisors may also find ChatGPT useful for summarizing lengthy text (e.g., creating succinct notes from a full client meeting transcript) or drafting social media content to promote content the advisor has already created.
Ultimately, the key point is that, in the long run, the most likely legacy of ChatGPT and AI for financial planning is not to replace financial advisors, but to help them increase their productivity by streamlining more of their middle- and back-office tasks and processes. Which, in turn, will either enhance the profitability of firms or allow them to provide their services at a lower cost for the same profitability, increasing the market of consumers who can be served and further growing the reach of financial planning. Or stated more simply, ChatGPT will not necessarily end up as a threat to financial advisors; instead, it is probably more of a useful tool for advisors that will help to grow the market for financial planning advice services!
Imagine for a moment that it’s a beautiful spring day, and you need to get to a meeting across town. It will be an easy enough drive – it’s midday, so there should be little traffic and the destination is just one stoplight off the main highway exit (which itself is just 2 blocks from the office you’re about to leave). But the weather is so nice that you decide you’d rather enjoy the view and not need to worry about driving at all. So you pull out your phone and, as you have done many times before in getting around town, you hit a button in an app; within a few minutes, a car pulls up to the curb to pick you up for a very affordable drive.
But as you walk up to get into the car to verify with the driver that it’s the right car and the right pick-up, you notice… there’s no one in the driver’s seat. The car is a fully autonomous self-driving car, ‘driven’ by the latest in AI technology to take you across town to get you where you’re going.
Do you still get in the car, or has the revelation that AI will be (literally) driving the experience now exceeded your comfort level?
The Standards We Expect To Really Trust AI
The rise in recent years of autonomous (self-driving) cars has provided an interesting real-world experiment in the extent to which human beings are willing to trust artificial intelligence. And despite nearly a decade of excitement and build-up over the development of self-driving cars, it’s still been difficult for most people to trust AI.
For instance, a study by the highly reputable Pew Research Center last year found that only 37% of American adults say they “definitely” or “probably” would want to ride in a driverless vehicle, and only 21% are “extremely” or “very” comfortable even sharing the road with driverless cars. (Notably, this means we’re actually more afraid of navigating around the driverless cars alongside us on the road than we are of being in one ourselves… a tacit acknowledgment that AI doesn’t just impact those who use it, but the broader environment in which AI systems operate.)
In part, this is simply because the baseline standards that self-driving AI must achieve just to match humans are actually quite high. The U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) data show that there were nearly 43,000 motor vehicle fatalities in the U.S. in 2021 across more than 3.1 trillion miles driven, which means the fatality rate was approximately 1.37 per 100 million vehicle-miles traveled – a human-driver ‘survival rate’ of approximately 99.9999986% per vehicle-mile traveled. Which is, to put it mildly, a high bar. Mathematically, this means that a self-driving car would have to travel nearly 300 million miles without a fatality to be able to claim, with a 95% confidence level, that self-driving cars are at least as reliable as human drivers. Which is not impossible as the technology improves… but it begins to emphasize how remarkably high the hurdle is to demonstrate that the AI is really better at something humans are actually already pretty good at (as in practice, this means that the self-driving car needs to go almost 3X the distance that humans average between fatal accidents, just to ‘prove’ that it’s at least as good as humans and that the result isn’t just due to random chance).
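For the statistically inclined, the ‘nearly 300 million miles’ figure follows from the classic ‘rule of three’: if zero events are observed over n trials, the one-sided 95% upper confidence bound on the true event rate is approximately 3/n. A worked sketch of that arithmetic, using the 2021 NHTSA figures above:

```latex
% Rule of three: observing zero fatalities over n miles bounds the
% per-mile fatality rate at ~3/n with 95% confidence (since
% e^{-\lambda n} \le 0.05 requires \lambda n \ge \ln 20 \approx 3).
\[
\lambda_{\text{human}} \;\approx\; \frac{42{,}939\ \text{fatalities}}{3.14 \times 10^{12}\ \text{miles}}
\;\approx\; 1.37 \times 10^{-8}\ \text{per mile}
\]
\[
\frac{3}{n} \;\le\; \lambda_{\text{human}}
\quad\Longrightarrow\quad
n \;\ge\; \frac{3}{1.37 \times 10^{-8}} \;\approx\; 2.2 \times 10^{8}\ \text{miles}
\]
```

Notably, the widely cited RAND analysis of this question used a slightly lower historical fatality rate (about 1.09 per 100 million miles), and the same arithmetic with that rate yields roughly 275 million fatality-free miles – hence ‘nearly 300 million’.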
The AI Trust Penalty
Beyond the sheer challenge of what AI must do, successfully and with a very high degree of accuracy, just to show that it’s as good as a human being, the reality is that it’s not clear that human beings will be willing to trust AI even when it is as good as a human being… at least in high-stakes situations.
Because as human beings, we appear to be wired with a very strong preference for autonomy. We don’t like it when external events simply happen to us; instead, we have a fundamental desire to ‘control’ outcomes to the extent that we possibly can. That’s why, for instance, so many ancient cultures had some symbol of a ‘god of weather’ to whom they could offer prayers, make sacrifices, or take other steps in an attempt to ‘control’ the weather through their human actions. We do whatever we can to try to influence situations for the better – even when they’re otherwise seemingly beyond our control.
Which makes it especially challenging when it comes to putting our full faith and trust in AI, because even if the AI is ‘just as good as a human’, we are still likely to reject relying on it in any material way – getting the same outcome with less control (control that has been surrendered to the AI) is still terrifying. As a driver, you might die in a car accident; as a passenger, the self-driving car might kill you. Same outcome, but a very different sense of control.
The end result is that while AI might be fine for solving relatively ‘simple’ problems in low-risk scenarios – such as recommending movies or music you might like, or figuring out the best way to navigate to a destination (where being ‘just’ 99% accurate is more than enough to feel good about the outcome, because it’s easy enough to adjust along the way) – the higher the stakes, and the higher our desire for autonomy, the harder it will be for us to accept AI. Because the “AI Trust Penalty” means the technology doesn’t just have to be good (which is hard enough in high-risk scenarios that may demand 99.999% accuracy); it has to be materially better for us to accept the loss of control that comes with relying on the AI.
Financial Advice Is High-Risk And Highly Complex
The fact that our tendency to require AI to prove itself so rigorously – i.e., the AI Trust Penalty – becomes more pronounced in higher-risk situations is especially important when considering how ChatGPT and artificial intelligence may impact the financial advice business. Because the reality is that financial advice – especially the kind that financial advisors typically deliver – is in practice both high-risk and highly complex.
Of course, not all financial advice is complex. The basics – spend less than you make, save and invest prudently, be cautious with debt, etc. – are relatively straightforward concepts to apply once they’re learned and understood, and a wide range of financial recommendations are also fairly simple to engage with (e.g., open a Roth IRA for retirement savings when you’re young, establish a 529 college savings plan to save for a newborn child’s future education, be wary of costs when investing your savings, be prepared for the capital gains impact when selling an appreciated investment, etc.).
But these are typically not the issues that consumers hire a financial advisor to solve – they can be implemented by consumers on their own via any number of financial services firms, there are innumerable books and articles and podcasts for those who don’t yet know and need to learn the financial concepts for the first time, and for near-term issues that need a quick fix, we can simply turn to friends and family for advice as well.
However, when the stakes become high (e.g., the consequences are more substantive and/or more long-term), we start to rely less on friends-and-family advice and on just looking it up ourselves online, and look more to experts (e.g., financial advisors) instead. As a result, we might go online to find an institution to open a Roth IRA, but look to a financial advisor to figure out the optimal amount to Roth convert. We may look to the internet to create a diversified portfolio for long-term growth, but look to a financial advisor when it’s time to retire and rely solely on that pot of money for the rest of our lives (where there’s no ‘do-over’ to build the savings back once we retire for good). We might ask colleagues to recommend a 401(k) plan provider for our medical practice, but look to a financial advisor when it’s time to sell the medical practice, maximize its value, and minimize the tax consequences of a once-in-a-lifetime multi-million-dollar transaction. We may read an article about the best way to set up our spending accounts as a newlywed couple and establish a will with guardianship provisions for the baby, but seek out a financial advisor when it’s time to set up an estate plan to transition the wealth of the family business to 3 children (of which only 2 are actively involved in the business today).
In such situations of high-stakes complexity, not only is it difficult to trust AI-generated recommendations (because the AI Trust Penalty makes the standard of success very high), but in areas of high complexity, it’s also difficult to engage AI because we don’t even know what questions to ask it in the first place. (Have you tried logging into ChatGPT to ask it questions only to find yourself sitting there wondering, “What should I ask an AI chatbot?” Now imagine that feeling again, but this time you have to ask it the right question because your financial life savings are on the line!)
Because the reality is that even if and as AI begins to enter the marketplace and engage general consumer use, it starts out either solving low-stakes problems (coming up with music or movie recommendations, or helping navigate a path to a destination given dynamic traffic conditions), or operating in more controlled environments (which is why self-driving technology is starting out with “Automated Guided Vehicles” (AGVs) in warehouses, driving tours in known fixed spaces, or the trucking/freight industry, where there are no passengers to worry about).
In other words, applying ChatGPT and other AI tools to financial advice right out of the gate is akin to deploying autonomous vehicles to replace snowplow drivers. In practice, that’s a terrible initial use case for self-driving AI: the road conditions are awful (it’s literally snow and ice!), visibility is especially poor, there’s a higher risk of other vehicles not being where they’re supposed to be (e.g., if another car slips or skids on the ice), and the autonomous driver needs to be able to recognize when there might be an important object buried in the snow (is that snow mound a fully buried car? could there be a fire hydrant submerged inside that snowbank?). In addition, the snowplow has to be mindful of where the plowed snow goes (not just where the vehicle itself is driving).
The core point: even if and as ChatGPT and AI begin to come to the world of financial advisors, actually trying to replace human advisors (or snowplow drivers) is a terribly difficult and impractical place to start.
ChatGPT Is More Like A (Communication) Calculator Than A Robo-Advisor
One of the biggest challenges in trying to decipher how ChatGPT (and artificial intelligence more broadly) will impact industries like financial services is simply figuring out what, exactly, ChatGPT is in the first place.
At its core, ChatGPT isn’t really ‘intelligence’ in a manner that most humans envision – where it studies information and ‘learns’ the underlying concepts in order to apply them to novel situations. Instead, ChatGPT ingests huge reams of existing content (into what is known as a “Large Language Model” or LLM), studies the text to understand the patterns of how words and information appear, is ‘trained’ regarding what an appropriate response is (by receiving feedback about whether it wrote an answer correctly or not), and then eventually learns to provide a written response to any question it is asked.
Notably, though, as much information as ChatGPT’s LLM ingests, it still can’t literally read and learn ‘everything’ there is – not only is that a challenging amount of data to ingest, but more importantly, it would be an impossibly large amount of information to study in real time every time a question is asked just to generate an answer. Instead, ChatGPT’s LLM ends up working from what writer Ted Chiang has analogized to “a blurry JPEG of the web” – where a JPEG is a compressed version of an original image that doesn’t retain all of the original information in the picture, just ‘enough’ to be able to reconstruct a reasonable version of the picture when the JPEG is decompressed.
As a result, similar to a JPEG, most of the time people can’t actually tell that the image has lost fine-point detail… at least, until ChatGPT is asked very specific questions about very detailed knowledge domains. In which case ChatGPT still produces a very ‘well-written’ response – the LLM remains excellent at generating a response that sounds like a human speaking intelligently – except the answer isn’t actually fully correct, because the AI doesn’t actually ‘know’ and intuit the answer on the subject; it’s attempting to reconstruct an answer, word by word, based on the word patterns it’s seen in the reams of data it’s previously analyzed (with the attendant risk of losing some of the finer-point resolution/details).
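To make the ‘word patterns’ idea concrete, here is a deliberately tiny, hypothetical illustration in Python. A real LLM uses a neural network trained over billions of tokens rather than simple word counts, but the core generative loop – predict the next word from patterns observed in training text, append it, repeat – is the same in spirit:

```python
import random
from collections import Counter, defaultdict

# A toy 'training corpus' standing in for the reams of text an LLM ingests.
corpus = (
    "diversify your portfolio to manage risk . "
    "diversify your savings to manage taxes . "
    "rebalance your portfolio to manage risk ."
).split()

# Learn the 'patterns': count which word tends to follow each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, max_words: int = 8) -> str:
    """Generate text one word at a time from observed word patterns."""
    words = [start]
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no pattern observed for this word; stop
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Fluent-sounding, but it can splice its sources: "diversify your
# portfolio to manage taxes ." never appeared in the training text.
print(generate("diversify"))
```

Note how even this toy model can blend its sources into fluent-but-wrong output – the ‘blurry JPEG’ effect in miniature.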
In this context, the true innovation of ChatGPT is not literally its ability to correctly answer questions about anything – because ChatGPT doesn’t actually maintain all the information to give all the correct answers – but its ability to generate responses that are at least (usually) mostly correct with relevant information, and more importantly to structure those responses in a coherent form of written communication.
In other words, ChatGPT’s breakthrough as artificial intelligence is not its ability to know all the answers, but its ability to communicate those answers in a reasonably well-written manner. As a result, Professor Erik Brynjolfsson of Stanford’s Institute for Human-Centered AI has analogized that “ChatGPT will be the calculator for writing”.
It's Easier To Edit (ChatGPT) Than To Create
If ChatGPT’s AI can be thought of as a form of calculator that takes inputs (various information or data that it’s fed or that it has ingested itself) and turns them into output (in the form of a written response that conforms to how humans typically communicate), then it starts to become clear that ChatGPT doesn’t actually replace human financial advisors… it creates shortcuts for them to generate written communication regarding important financial concepts for clients faster and easier. From the human perspective, the reality is that it’s typically far faster to edit something that already exists than to create it anew from scratch.
For instance, imagine for a moment that a client reaches out with a request to “sell everything in my portfolio to cash” because he’s concerned that the mass adoption of ChatGPT and other artificial intelligence will cause mass unemployment and soon trigger a stock market crash. So now you’re sitting in front of a computer, staring at a blank email, and need to start writing a response.
But instead of trying to craft that email from scratch, you go to ChatGPT and simply prompt it by asking, “Write an email to calm my investment client who is worried that mass adoption of ChatGPT and other AI tools will cause mass unemployment and trigger a stock market crash in the next few years.” And receive the following response:
Arguably, ChatGPT has generated a remarkably ‘solid’ response here. While perhaps not perfect – either relative to the advisor’s style or speaking to the client’s particular concerns (for which the advisor will have more context than ChatGPT) – it doesn’t have to be, because the advisor doesn’t have to send this text. The advisor can use it as a baseline and edit the text to make it more specific to the client. Because, again, it’s far faster and easier to edit than create.
As a result, subsequent editing of the response from ChatGPT’s baseline might yield the following (blue text represents text that was changed/added):
Jim,
It was great to see you at the Club last Tuesday. Glad that you and Jenny are doing well, and congrats again on the great write-up for Jenny’s restaurant launch!

I understand that you are feeling concerned about the potential impact of mass adoption of AI tools like ChatGPT on employment and the stock market, as I know you see it first-hand living in Seattle, where there are so many tech companies trying to innovate in this space. Yet while it is true that technological advancements can cause disruptions in the job market and financial systems, it's important to take a closer look at the situation and consider the broader picture.
Firstly, it's important to note that technological advancements have been happening for centuries, and while they may displace some jobs, they also create new opportunities and industries that weren't previously possible. In fact, the adoption of AI tools can lead to increased productivity, efficiency, and innovation, which can ultimately drive economic growth and job creation. You’ve lived this technology impact first-hand; the irony is that if it wasn’t for the rise of computers (that were also supposed to cause mass layoffs by replacing human jobs), you wouldn’t have a job today as a Microsoft engineer in the first place!
Secondly, it's worth noting that AI tools like ChatGPT are still in the early stages of development and adoption. While they have shown great promise in certain areas, they are still far from being able to replace human workers entirely. Additionally, there will always be a need for human skills and expertise, particularly in areas that require creativity, critical thinking, and interpersonal communication. (ChatGPT wouldn’t have helped to replace your gas furnace when it broke in January, and it won’t replace any of the line cooks at Jenny’s restaurant.)
Finally, it's important to remember that the stock market is a complex system that is influenced by many factors beyond the adoption of AI tools. While it's true that technological advancements can have an impact on market trends, there are many other factors that also play a role, including government policies, global economic conditions, and geopolitical events. Remember when you also wanted to go to cash when the pandemic was breaking out and you were concerned that it was going to shut down the economy… but we stayed the course and your portfolio actually finished up in 2020?
In conclusion, while it's natural to feel concerned about the potential impact of technological advancements on the job market and the stock market – I’m spending a lot of time thinking about it as well, as ChatGPT and AI could have a big impact on our industry as a financial advisor, too! – it's important to take a broader view and consider the potential benefits and opportunities that AI tools like ChatGPT can bring. And, of course, we continue to broadly diversify your portfolio and invest in a wide range of industries and sectors (some of which may actually benefit from AI with higher profits and rising stock prices, and others that are more service-oriented and won’t be impacted much at all) to mitigate risks and maximize returns.
Let me know if this helps, and please don't hesitate to reach out if you have any further questions or concerns. I’m happy to chat about this further at our meeting next month as well (Sarah will be in touch in the next few days to get it scheduled).
- Michael
And voilà! An email that would have taken many advisors 30–60 minutes to write instead takes 5–10 minutes to prompt ChatGPT, edit once with some client-specific adjustments, and hit Send. Not because ChatGPT ‘gave the answer’ or replaced the advisor, but because it instantly provided a strong initial framework from which the final response could be derived – a calculator-like shortcut for writing!
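For advisors (or their tech teams) who would rather build this prompt-and-edit workflow into their own tooling than use the chat interface, the same request can be scripted against OpenAI’s API. A minimal sketch, assuming the openai Python SDK; the model name here is an assumption (substitute whichever chat-capable model is current):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Write an email to calm my investment client who is worried that mass "
    "adoption of ChatGPT and other AI tools will cause mass unemployment "
    "and trigger a stock market crash in the next few years."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: substitute the current model
    messages=[{"role": "user", "content": prompt}],
)

# The output is a first draft for the advisor to edit - never to send as-is.
print(response.choices[0].message.content)
```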
Calculators Didn’t Replace Financial Advisors; They Spawned Software To Help Them Go Deeper!
When a new wave of technology innovation first hits the scene that is capable of doing what humans have done up until that point, it is often both celebrated by the business community (which sees the potential to replace ‘costly’ humans with cost-efficient technology) and feared by workers (who see the risk that their jobs will be replaced/eliminated by technology).
Yet in practice, technology can only displace jobs so quickly – if only because it still takes a material investment of time and money to identify the tech providers, vet them, and develop and implement a transition plan, which for large companies at scale can take many years to reach an actual launch!
But from a broader perspective, the striking reality is that even as computers have improved by roughly 10,000,000X in computing power over the past several decades, the economy has grown, mass layoffs have not occurred, and there are far more jobs now than there were before computers arrived. Yet over that time, a lot of very manual work and calculations that were once human jobs really have been substantively displaced or eliminated altogether – even as other new jobs arose to layer on top of that technology.
For instance, in the early days of financial planning (which emerged in the early 1970s and pre-dated computers), financial advisors who wanted to project a client’s wealth or savings strategies had to ‘manually’ calculate the result with their financial calculator. (An exercise we still pedantically teach in the CFP curriculum today!) Yet even as consumers can do radically superior analyses of their financial health and trajectory with a spreadsheet or any number of online tools today, the financial advice business has only grown.
Why? Because the rise of computers and more sophisticated projection tools didn’t replace the financial advisors who were using calculators; it simply meant financial advisors could use the more sophisticated tools themselves. And the adoption of financial planning software has only led to deeper conversations with clients about even-more-complex planning issues and opportunities, allowing advisors to charge higher fees for layering advice on top of the advanced technology than they ever could by just doing the math themselves with a yellow pad or a calculator.
In the context of ChatGPT, imagine a world where we could provide prompt and thorough responses to clients on an ever-growing range of financial planning topics in a fraction of the time it takes to generate those responses today, because ChatGPT and similar AI can ‘instantly’ generate an initial draft for us to edit. Ultimately leading to clients who are even more engaged in financial planning, and who reach out to their advisors more often to receive meaningful advice in more areas, because the advisor can generate the advice – and the (often written) communication to deliver that advice – more rapidly than ever, thanks to ChatGPT?
How Financial Advisors Can Leverage ChatGPT AI
If the long-term opportunity of ChatGPT and similar AI tools is not to replace financial advisors but to make them better, then the question naturally arises: how, exactly, can financial advisors leverage ChatGPT today to harness its benefits?
Let ChatGPT Generate Explanatory Communication Advisors Can Edit To Their Style
Given the time savings that come from editing instead of creating, the first clear use case for financial advisors is to leverage ChatGPT as a form of ‘generative AI’ to create an initial first draft of client communication, especially regarding a topic that may be more complex and necessitate a lengthier response.
For instance, imagine your top (very affluent) client reaches out with the question, “I saw in a recent news story that Congress might be cracking down on some estate planning strategies like GRATs that are supposed to be really good for people in my financial situation. But I don’t really know what these are. Can you explain how a GRAT works and whether this is something I should be considering?”
In turn, a quick prompt to ChatGPT generates the following response:
And very quickly, the financial advisor now has a summary of a GRAT that they can use with their client – one they can expand with a lead-in that responds to the client directly, and an adjusted conclusion that makes a recommendation about whether this is or is not a strategy this particular client should consider.
Nerd Note:
Because ChatGPT itself is not ‘perfect’ when it comes to its knowledge – its innovation is how it presents the information in written format, not its technical accuracy – it is advisable to only use ChatGPT to summarize concepts that the advisor is already familiar with, to ensure that the advisor can catch any substantive errors or client-specific nuances that the software may have missed. Which again helps to emphasize that ChatGPT doesn’t replace a financial advisor – just as a calculator can ‘do the math’ but won’t give the right recommendation if the user doesn’t enter the right inputs in the first place, ChatGPT’s communication templates can’t be relied upon in the absence of the advisor’s knowledge. Instead, they simply shortcut the creation of client communication on a subject the advisor already knows and understands (to ensure that the ChatGPT template is actually right!).
Notably, in some cases, a response of this nature may still be too complex and require too much editing for a client who is really not very financially knowledgeable. But no problem – a follow-on prompt to ChatGPT can provide an even simpler template!
To each their own client communication style. (And, of course, once it’s sent as an email, the response will be archived for compliance purposes as well!)
Another version of this approach is for advisors who prefer to create their own explanations, but aren’t necessarily good at typing them out on a keyboard or organizing their flow of thought.
For example, if the client reaches out to ask, “Is it worth even trying to buy a house in the current environment, with interest rates rising and house prices falling?”, the advisor might dictate the following response (e.g., via a dictation tool like Dragon NaturallySpeaking):
I understand your concern. Rising interest rates make it harder to afford a house, and prices dropping makes it really tempting to wait. But I really don’t think you need to. You and Alice, you’ve been talking about buying a house and starting a family for years. You’re been doing so much good work, saving up the downpayment. Even with higher interest rates, even with higher rates, you can afford this. And this is a house you’re going to stay in for a long time. So even if the price goes down a little in the next year or 2, a small loss in the next few years won’t impact your long-term future. It will still almost certainly be a lot more valuable in 20 years when the baby goes off to college and you can move or downsize. And timing the real estate market is really hard to do. If you wait, you may just regret it even more if prices move in the wrong direction. Which shouldn’t matter because again, you can afford it now based on what you’ve saved. There’s also a risk that if interest rates turn around and go down, that will just bring in more buyers, and more competitive bids, and the prices will go up further. So even if you wait for interest rates and they go down, you won’t necessarily be able to afford more. Oh, and remember the bonus you got last year shows as really strong income for underwriting, so it’ll be easier for you to qualify for a mortgage based on your income if you do it this year. Since we don’t know if you’re going to get a similar bonus again. So I really don’t think you should wait.
Of course, when you look at something you’ve dictated written out as text, it never looks great. Most of us don’t perfectly organize our thoughts (which isn’t as noticeable in a conversation, yet becomes really noticeable when you see it written out!). We repeat words and phrases. We don’t always use good grammar.
But not a problem… just grab the transcription, drop it into ChatGPT, and ask ChatGPT to turn it into a well-written email!
The end result is an email that, once again, the advisor can easily adapt based on what might have been 3 minutes of dictation and 5 minutes of edits, instead of needing to write out an entire email from scratch!
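The dictation-to-email step can also be scripted along the same lines as the earlier sketch. The important design choice here is the system instruction, which tells the model to clean up the wording without inventing any facts or advice the dictation didn’t contain (the filename is hypothetical):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical file holding the raw dictation transcript from above.
dictation = open("dictated_response.txt").read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: any chat-capable model
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's dictated notes as a warm, well-organized "
                "client email. Fix grammar and remove repetition, but do not "
                "add any facts, numbers, or advice the notes do not contain."
            ),
        },
        {"role": "user", "content": dictation},
    ],
)
print(response.choices[0].message.content)
```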
Use ChatGPT (Or Other AI) To Summarize Lengthy Text Into Something Shorter (And More Useful)
While there’s a big opportunity for ChatGPT to take ‘short and simple’ prompts and turn them into a full-length client email explanation (that the advisor can then edit and adapt further), an alternative use case for financial advisors is to take a lengthier document and leverage ChatGPT to shorten and summarize it.
For instance, it’s a best practice for financial advisors to keep contemporaneous notes of their client meetings in their CRM (both for other staff to reference and to document what was discussed in the event it’s ever questioned later), but in practice, advisors often struggle to find the time to capture meeting notes, and many either aren’t good at recalling what was discussed or aren’t very expeditious in writing it up.
However, in a world of increasingly virtual meetings, it’s relatively ‘easy’ to record an entire client meeting (with client permission in advance, of course) and then provide it to ChatGPT or a similar AI tool with a prompt to “write a summary of the key discussion points and takeaways of this meeting transcript”, and capture that in the CRM system with a simple copy/paste of the output. Which, notably, doesn’t even require ChatGPT; the AI-driven Fireflies.ai is already built to capture audio recordings of meetings (which can be integrated directly into Zoom) and transcribe them, and then uses AI to create Meeting Notes of what was discussed (that can be inserted into the advisor CRM manually or potentially via a Zapier integration).
In practice, some advisors already use this approach so successfully that they also record their in-person meetings in a similar manner (e.g., with a conference call speakerphone system in the middle of the meeting table to capture all the audio of the meeting).
Nerd Note:
Once the recorded text transcription of the meeting is captured, not only can Fireflies generate meeting notes that are recorded in the CRM, but the advisor can also then turn the full-length transcribed text of the meeting over to ChatGPT. With a prompt to “create a written summary of the meeting transcription, with key action items and next steps, that can be emailed to the client”, the advisor can then use the generative capabilities of ChatGPT to quickly create a post-meeting client email, which again can be more quickly edited once the bulk of the email has been AI-generated.
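For advisors comfortable with a bit of scripting, both outputs can be generated from the same transcript in one pass – a minimal sketch, assuming the transcript has already been saved to a (hypothetical) text file and using the prompts suggested above:

```python
from openai import OpenAI

client = OpenAI()
transcript = open("meeting_transcript.txt").read()  # hypothetical transcript file

def ask(instruction: str) -> str:
    """Run one instruction against the full meeting transcript."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: substitute the current model
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Internal notes for the CRM...
crm_notes = ask("Write a summary of the key discussion points and "
                "takeaways of this meeting transcript.")

# ...and a client-facing follow-up email from the same transcript.
client_email = ask("Create a written summary of the meeting transcription, "
                   "with key action items and next steps, that can be "
                   "emailed to the client.")
```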
Another helpful use case to leverage the ‘longer-to-shorter’ summarizing benefits of ChatGPT is to create summaries of the advisor’s own content creation. For instance, if the advisor writes a blog or hosts a podcast and needs to generate an Executive Summary and/or highlight a brief summary with bullet points to share on social media, ChatGPT can be called in to help! (Example below using this article on advanced estate planning techniques!)
Similarly, even the process of generating a title for an article is itself an exercise in summarizing (all the way down to 1 sentence!), so if an article is otherwise written, ChatGPT can be used to help generate article titles (which in turn might become social media headlines!). (Example below using this article on Strategies to Maximize Tax-Free HSA Withdrawals.)
The key point is that just as ChatGPT can be used to create something longer from an initial prompt (e.g., a 1-sentence request to create content turns into the first draft of a client email), so too can ChatGPT be used to take something longer (e.g., an article) and turn it into something shorter (such as a social media post or an article title/headline!).
Tools That Advisors Can Use To Leverage ChatGPT
Because of what has become a veritable ‘ChatGPT craze’ in the few months since it went into general release, the breadth of available tools is iterating very rapidly, which means any particular tool recommendations will become dated relatively quickly. In the long run, the most common use cases of ChatGPT for financial advisors will likely be integrated directly into advisors’ core systems – for instance, embedding a ChatGPT-driven Meeting Notes summary tool directly into the advisor’s CRM that grabs a transcript and turns it into both a CRM summary for compliance purposes and a follow-up email for communication purposes. Until then, though, some other emerging third-party tools built on top of ChatGPT can help.
Of course, first and foremost, the easiest way for financial advisors to start using ChatGPT is simply to sign up for ChatGPT itself. The standard version of ChatGPT is available directly via the OpenAI website, which, upon first visit, prompts advisors to “Sign Up” for a free account, at which point they can immediately begin to give prompts to ChatGPT and receive responses. OpenAI has also rolled out a ChatGPT Plus option that provides faster responses and priority access (even during popular/peak times when ChatGPT is heavily used), but it doesn’t appear necessary for advisors at this point; notably, all of the examples contained in this article were created using ChatGPT’s free interface!
Notably, though, ChatGPT cannot directly accept document uploads – so while text to summarize can be copied/pasted into ChatGPT (albeit subject to character limits), there’s no easy way to give ChatGPT a file to process. However, Microsoft has already announced the coming launch of a new feature called Copilot, which will embed ChatGPT-style AI directly into Microsoft’s Word, Excel, PowerPoint, and Outlook applications – and should make AI prompts available directly within those documents and systems relatively soon.
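Until file uploads arrive, one common workaround for the character limits is to split a long document into chunks, summarize each chunk, and then summarize the combined summaries. A minimal sketch of that approach (the chunk size is an arbitrary assumption; a real implementation would split on paragraph or sentence boundaries and respect the model’s actual token limits):

```python
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """One summarization call against the model."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any chat-capable model
        messages=[{"role": "user",
                   "content": "Summarize the following text:\n\n" + text}],
    )
    return response.choices[0].message.content

def summarize_long(text: str, chunk_size: int = 8000) -> str:
    """Map-reduce summary: summarize chunks, then summarize the summaries."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partials = [summarize(chunk) for chunk in chunks]
    if len(partials) == 1:
        return partials[0]
    return summarize("\n\n".join(partials))
```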
Another example of emerging ChatGPT tools and add-ons is Merlin, one of many ChatGPT-driven Chrome extensions that have recently emerged that can be added to a browser to have ready access to the ChatGPT interface (e.g., to summarize articles that you’re reading, or perhaps to help generate client emails if you use a browser-based email client like Gmail). Similarly, tools like MailButler have rolled out ChatGPT assistants as an Outlook plugin (for Microsoft users who don’t use Outlook.com).
Because of the newness of ChatGPT and similar AI systems (e.g., Google’s Bard), anticipate that available tools and platforms will iterate rapidly in the next few years. This is akin to the plethora of robo-advisors that appeared in the early days before winnowing down to a few, or more substantively reminiscent of the rise of search engines in the early days of the internet when there were a bevy of competitors (with more than a dozen popular ones before Google even arrived!) that eventually winnowed down to a few leaders.
Technology innovations that are expected to ‘disrupt’ the financial advisor are nothing new. Disruption was predicted with the rise of computers (with a tool like this on every desk of every home, who will need a financial advisor anymore!?), again with the emergence of the internet (with the world’s information accessible at the fingertips of every consumer, what’s the point of a financial advisor!?), and once more with the rise of robo-advisors 10 years ago (which promised to use technology to do everything that financial advisors do for only 1/4th the cost).
Yet ultimately, robo-advisors proved to be no threat at all (instead, ‘robo’ digital onboarding capabilities have largely been integrated into the standard advisor platforms of today), and at most, the lasting impact of robo-advisors was simply to make the back office of financial advisors more efficient – perhaps reducing the number of administrative, support, and trading staff, and in the process augmenting the productivity and profitability of the advisory firms they were supposed to destroy!
Similarly, a natural projection of the trajectory of ChatGPT and similar AI tools suggests challenges for a number of back- and even middle-office jobs in advisory firms. Administrative and client support will become easier (as email-generative tools can be used for support staff as well as advisors themselves, increasing the number of clients and advisors a single administrative staff member can serve), paraplanning and financial plan preparation will likely become expedited over time, and some compliance review functions will also likely be expedited (as ChatGPT and other AI tools become ever more efficient at automating key compliance monitoring functions).
It is also inevitable that some versions of “ChatGPT for financial planning” will become available directly to consumers, rendering answers to at least the relatively simple questions where consumers just need a clear prompt of what to do and don’t have to navigate much complexity. Though again, as with the evolution of self-driving cars, consumer-facing AI will inevitably start by tackling the simplest financial planning problems, while financial advisors have historically concentrated on the most complex challenges that clients face (because it’s the highest-stakes problems, where there’s more money at risk, that drive consumers to pay the fees financial advisors charge). Which means even consumer-facing financial-planning AI isn’t likely to threaten financial advisors – and instead will largely serve consumers that financial advisors haven’t been able to reach effectively and efficiently in the first place.
Which means in the long run, it seems the most likely legacy of ChatGPT and AI for financial planning is not to threaten financial advisors, but to increase their productivity by streamlining more of the middle or back office. Which, in turn, will either enhance the profitability of firms, or allow them to provide similar services at a lower cost for the same profitability, increasing the market of consumers who can be served at a lower price, and even further growing the reach of financial planning.
Or stated more simply, ChatGPT is not a threat to financial advisors; instead, it is a tool that will help to grow the market for financial planning advice services!