Executive Summary
Financial advisors have a fiduciary obligation to act in their clients' best interests, and at the same time are prohibited by state and SEC rules from making misleading statements or omissions about their advisory business. These responsibilities also extend to any technology used in the process of giving advice: A recommendation made with the aid of technology still needs to be in the client's best interests, while the technology also needs to carry out its functions as they're described in the advisor's marketing materials and client communications.
In order to adhere to these regulatory standards of conduct while using technology, however, advisors need to have at least a baseline knowledge of how the technology works. Because on the one hand, advisors can only have a reasonable basis to rely on a tool's output for a recommendation in the client's best interest if they understand how that tool processes and analyzes client information to produce its output. And on the other hand, advisors need to understand what process the technology follows in the first place to ensure that their processes are actually carried out as described in their advertising and communications.
The recent rise of Artificial Intelligence (AI) capabilities embedded within advisor technology throws a wrinkle into how advisors adhere to their fiduciary and compliance obligations when using technology. Because while some AI tools (such as ChatGPT, which produces text responses to an advisor's prompt in a chat box) can be used simply to summarize or restate the advisor's pre-determined recommendations in a client-friendly way, other tools are used to digest the client's data and output their own observations and insights. Given the 'black box' nature of most AI tools, this raises questions about whether advisors are even capable of acting as a fiduciary when giving recommendations generated by an AI tool, since there's no way of vetting the tool's output to ensure it's in the client's best interests. Which also gives rise to the "Catch-22" of using AI as a fiduciary: Even if an AI tool did provide the calculations it used to generate its output, they would likely involve far more data than the advisor could possibly review anyway!
Thankfully, some software tools provide a middle ground between AI used 'just' to communicate the advisor's pre-existing recommendations to clients, and AI used to generate recommendations on its own. An increasing number of tools rely on AI to process client data, but instead of generating and delivering recommendations directly, they produce lists of suggested strategies, which the advisor can then vet and analyze themselves for appropriateness for the client. In essence, such tools can be used as a 'digital analyst' that can review data and scan for planning opportunities faster than the advisor can, leaving the final decision of whether or not to recommend any specific strategy to the advisor themselves.
The key point is that while technology (including AI) can be used to support advisors in many parts of the financial planning process, the obligation of advisors to act in their clients' best interests (and from a regulatory perspective, to 'show their work' in doing so) makes AI tools unlikely to replace the advisor's role in giving financial recommendations. Because ultimately, even as technology becomes ever more sophisticated, the clients who advisors work with remain human beings – which means it takes another human to truly take their best interests to heart!
Financial advisors in the business of giving investment advice are held to a fiduciary standard when advising clients. Specifically, advisors working for a Registered Investment Adviser (RIA) are subject to state and/or SEC rules that generally require advisors' recommendations to be in the client's best interest. This includes the obligation for advisors to, as the 2019 SEC Commission Interpretation regarding standards of conduct for investment advisers states, "adopt the principal's [i.e., the client's] goals, objectives, or ends" – or, put another way, advisors need to treat their recommendations as if their own assets and liabilities were on the line instead of the client's.
Similarly, advisors are also required to adhere to certain standards in advertising their services and in communicating with current and prospective clients. Advisors are prohibited from making misstatements or omissions of material facts about their services and are required to follow specific rules around the advertising of investment performance. If an advisor says that they're providing services in a certain way, then they need to follow through and actually do it that way.
These 2 foundational requirements for advisors – to work in the best interest of clients and to market themselves honestly – can and often do intersect with each other in practice.
For example, if an RIA has a specific process laid out in its Form ADV, Part 2A for researching and selecting investments for a client's portfolio, but its advisors don't follow that process and instead simply pick a portfolio model from their custodian's model marketplace without further due diligence, the firm could be considered to be in violation of both its fiduciary obligations and its requirements against misleading statements.
In this case, the RIA would have failed to follow a proper due diligence process for selecting investments, which would put it in violation of its fiduciary duty (since no reasonable person would consider it prudent to implement a custodian's model without considering the client's specific needs). At the same time, the firm would also be guilty of misleading current or potential clients by outlining an investment selection process in its advertising materials but failing to follow it in practice.
Financial Advisors' Fiduciary Duties Also Apply To Their Technology Use
The standards that apply to advisors in regard to recommendations and communication also extend to any technology they use in the course of making recommendations and communicating with clients. Meaning that if an advisor uses a piece of technology to make a recommendation, the recommendation still needs to be in the client's best interest, just as it would without the use of technology.
Crucially, the use of technology itself doesn't satisfy an advisor's fiduciary obligations: There must be a clear rationale beyond 'just' the output of the technology to substantiate the advisor's belief that the recommendation is in the client's best interest. Again, a reasonable person wouldn't consider "the software said so" to be by itself a good case for making a prudent financial decision, so advisors can't use that line of reasoning in fulfilling their fiduciary duty.
Likewise, the requirement for advisors to accurately describe their services also applies when technology is used to perform any parts of those services. To avoid misrepresenting their services, advisors who use technology to aid in parts of their processes must ensure that their advertisements and communications still reflect those processes accurately. They can do this either by affirming that the technology itself follows the process as advertised, or by updating the advertising to reflect the process as it really happens. Either way, how the advisor talks about their business practices needs to align with what actually happens in real life.
As another example, imagine an advisor who uses rebalancing software to calculate and execute trades in their clients' investment accounts. If the advisor has laid out a process for making trading and rebalancing decisions in their clients' Investment Policy Statements (IPS), then they need to ensure that the rebalancing software actually follows that process (or else update the clients' IPS to reflect the process that the software uses) – otherwise, the discrepancy between what the advisor is telling their clients about how their accounts are being managed and how the advisor is managing them in practice could amount to a material misstatement of facts in the SEC's or state regulator's eyes.
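To make that concrete, here's a minimal sketch (in Python, purely as a hypothetical illustration; the target weights, 5% drift band, and function names are all assumptions, not any particular vendor's logic) of the kind of drift-band rebalancing rule an IPS might describe. The advisor's job is to confirm that whatever rule the rebalancing software actually applies matches the one documented for the client:

```python
# Hypothetical illustration: a 5%-absolute-drift rebalancing rule, as an IPS might describe it.
# The targets, threshold, and holdings below are made up for illustration only.

TARGETS = {"US_STOCK": 0.60, "INTL_STOCK": 0.20, "BONDS": 0.20}  # IPS target weights
DRIFT_THRESHOLD = 0.05  # rebalance when any asset class drifts more than 5 percentage points


def rebalance_trades(holdings: dict[str, float]) -> dict[str, float]:
    """Return the dollar trades needed to restore target weights,
    but only if at least one asset class has drifted past the threshold."""
    total = sum(holdings.values())
    weights = {asset: value / total for asset, value in holdings.items()}

    # Check whether any position has drifted outside the IPS band
    drifted = any(abs(weights.get(a, 0.0) - TARGETS[a]) > DRIFT_THRESHOLD for a in TARGETS)
    if not drifted:
        return {}  # within bands: the documented process says do nothing

    # Trade each position back to its target dollar amount
    return {a: round(TARGETS[a] * total - holdings.get(a, 0.0), 2) for a in TARGETS}


if __name__ == "__main__":
    portfolio = {"US_STOCK": 70_000, "INTL_STOCK": 15_000, "BONDS": 15_000}
    print(rebalance_trades(portfolio))  # US stock is 70% vs. a 60% target, so trades are generated
```

If the software instead rebalanced on a fixed calendar schedule or used different bands, either the IPS language or the software's settings would need to change so that the two stay in sync.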
SEC Enforcement Of RIAs' Obligations When Using Technology
The regulatory risks of failing to use technology in a way that's compliant with fiduciary and anti-fraud rules aren't just hypothetical, as shown by 2 prominent cases from the realm of robo-advisors.
First, Betterment was fined $9 million by the SEC in April 2023 for failing to notify clients and update its advertising when it made changes to its tax-loss harvesting algorithm – specifically, Betterment advertised that it scanned accounts daily for tax-loss harvesting opportunities when in reality it had changed its algorithm to scan only every other day. Betterment also didn't tell clients that certain account strategies would limit its tax-loss harvesting abilities, and when coding errors caused Betterment not to harvest any losses at all for some clients, it didn't do anything to alert them after the fact. All of this, in the SEC's eyes, amounted to an omission of material facts in violation of Section 206 of the Advisers Act, since Betterment had advertised its services in a certain way (i.e., claiming that accounts were scanned for tax-loss harvesting every day, that tax-loss harvesting was available for all strategies, and that its tax-loss harvesting software would function as advertised) that it failed to carry out in reality.
In another, more extreme example, Charles Schwab was fined $187 million in June 2022 for misleading clients about the fee structure and portfolio construction process of its Intelligent Portfolios robo-advisor. According to the complaint issued by the SEC, Schwab had advertised that Intelligent Portfolios had $0 investment management fees, and that they used a portfolio construction process that would seek "optimal returns" for the robo-advisor's clients. In reality, however, Schwab earned revenue on the proprietary cash sweep account that its robo-advisor used, which amounted to a backdoor management fee. And when deciding on the amount of cash to keep in clients' accounts, rather than using the portfolio construction process outlined in its advertisements, Schwab instead based the cash allocation on what would generate a minimum amount of revenue for itself. Which ultimately led to portfolios that were over-allocated to cash, creating revenue for the company while dragging down returns for clients. (Ironically, the returns ultimately realized by Intelligent Portfolios clients were similar to what they would have been if Schwab had charged an explicit, industry-average advisory fee for the service in the first place.)
Schwab's failure to disclose its revenue arrangement, as well as the obvious conflict of interest it presented since the revenue-generating potential of each account was tied directly to how much cash it contained, led the SEC to charge Schwab with engaging in fraudulent practices and making misleading statements in its advertisements and disclosures.
The 2 cases highlighted above differ in several ways: Most significantly, Betterment's misstatements about its tax-loss harvesting program seemed to be due more to a lapse in oversight than to an intent to mislead clients, while Schwab appeared to be more intentional about how it chose to obscure the ways that it earned revenue from its robo-advisor (which also helps to explain why Schwab's fine, at $187 million, was over 20 times larger than Betterment's $9 million). However, both cases show that the SEC is serious about scrutinizing how advisers use technology – and more specifically, how they implement controls and oversight (or don't) to ensure that the technology isn't used, either intentionally or unintentionally, to mislead clients.
Why all this matters from an ethical perspective is that, from the client's point of view, any recommendation that's generated by an advisor's technology also has the advisor's stamp of approval, at least implicitly. Meaning that any reasonable client would expect that technology used by their advisor would follow the same standards that apply to the advisor themselves, and that any recommendation coming from the software would be, just like any other recommendation from the advisor, made with the client's best interest in mind. So it makes sense for regulators to treat any technology that the advisor uses as an extension of the advisor themselves since clients are likely to see it in the same way.
Using Technology As A Fiduciary Means Knowing How It Works
What's implied by regulators' treatment of advisors' technology as an extension of the advisors themselves is that advisors need to know at least something about how their technology works if they're using it to make recommendations. Advisors don't necessarily need to know their software's inner workings down to the level of the source code, but they generally need to know enough about how the technology incorporates all the relevant inputs – client goals, financial data, parameters, assumptions, etc. – into the output it produces to have a reasonable basis for believing that it can be used to make a recommendation that's in the best interest of the client.
For example, let's say an advisor uses a piece of financial planning software to create future cash flow projections and to model how different changes in a client's plan would impact those projections. The advisor would need to be familiar with the inputs for the projection – the client's starting income and expenses, their life expectancy, and the assumptions for inflation and market returns, just to name a few – and understand how the software applies those inputs to the projection if they wanted to be confident enough about the software's output to use it to recommend a strategy.
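To illustrate just how much those assumptions drive the result, here's a deliberately oversimplified projection sketch (hypothetical Python with made-up numbers; real planning software models taxes, account types, and variable returns in far more detail) that compounds a portfolio forward using the same categories of inputs described above:

```python
# Hypothetical, simplified cash flow projection: all inputs and the flat-return assumption
# are illustrative only; real planning software is far more granular.

def project_portfolio(start_balance: float,
                      annual_income: float,
                      annual_expenses: float,
                      years: int,
                      inflation: float = 0.03,
                      nominal_return: float = 0.06) -> list[float]:
    """Project year-end portfolio balances, growing expenses with inflation
    and applying a flat nominal return to the invested balance."""
    balances = []
    balance, expenses = start_balance, annual_expenses
    for _ in range(years):
        net_savings = annual_income - expenses           # surplus (or shortfall) for the year
        balance = (balance + net_savings) * (1 + nominal_return)
        expenses *= (1 + inflation)                       # expenses rise with inflation
        balances.append(round(balance, 2))
    return balances


if __name__ == "__main__":
    # e.g., a $500k portfolio, $120k income, and $90k expenses, projected over 10 years
    print(project_portfolio(500_000, 120_000, 90_000, years=10))
```

Even a toy model like this makes clear that the output hinges on the inflation and return assumptions, which is exactly why an advisor needs to know what assumptions their software uses before relying on its projections.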
Conversely, imagine instead that the advisor has no idea how the software functions and simply types all the client's information into a black box, which then spits out a recommendation. The advisor has no way of knowing how the software produces the recommendation, how changing any of the inputs would affect the results, or even where the software stores all of the client information that the advisor just fed into it. If the advisor went ahead with that recommendation from the black box without scrutinizing it any further, it would be impossible to know whether or not it was accurate – let alone really in the client's best interest – since there's no way to know how the client's interests are even being factored into it (or whether they're being factored in at all!).
What's more, even if the software ended up making the 'right' recommendation by virtue of it ultimately working out well for the client, it could still be argued that the advisor, in this case, didn't fulfill their fiduciary obligation because they had no real basis (other than the software's own output) for believing that the recommendation was in the client's best interest. And if the advisor also made claims in their advertising about their financial planning process that didn't align with their actual practices (because, in reality, they simply handed everything off to the black box and used its output to tell clients what to do), that could also be flagged as a misleading statement in violation of the SEC's marketing rule.
That's obviously an extreme example and, hopefully, no advisors entrust their clients' financial decisions to the proverbial black box – the advisors that I know tend to take care to understand what's going in and coming out of their software to form reliable bases for their recommendations.
However, as the technology used by advisors has grown increasingly complex and sophisticated, the challenge that advisors face in understanding and vetting all their different technology options has grown in kind. Meanwhile, the regulatory pressure for advisory firms to monitor their technology has increased further, most notably with the SEC's introduction of proposed regulations in July of 2023 that would require RIA firms to evaluate their use of certain types of technology for any conflicts of interest that technology may pose (and to subsequently eliminate or neutralize such conflicts) – encompassing Artificial Intelligence (AI) tools and predictive data analytics, along with any other tools that "optimize for, predict, guide, forecast, or direct investment-related behaviors or outcomes".
Though the proposed rules have yet to be finalized, if implemented, they would only increase the need for advisors to familiarize themselves with how their technology functions – to the point where any benefits of the software itself might not be worth the regulatory burden of understanding and vetting all of the software's outputs.
The Risks Of AI For Fiduciary Advisors
It's not a coincidence that the SEC's proposed rules come at a time when technology driven by Artificial Intelligence (AI) is being rapidly integrated into advisors' software tools. As recently as 2022, AI occupied only a relatively small niche of experimental tools. But the last year has seen a proliferation of AI in advisor technology, with brand new AI-focused products appearing on the market alongside existing tools incorporating AI features of their own. As AI gets increasingly embedded within advisor technology, though, it raises questions about AI's role in giving financial advice and whether there is any tension between an advisor's use of AI and their role as a fiduciary.
Using AI To Communicate (Already Formulated) Recommendations
It's worth taking a moment to talk about what exactly is meant by "Artificial Intelligence" (AI). The term AI isn't neatly defined, and it can refer to a number of different types of technology that all purport to use artificial intelligence in some way or another. And while explanations of how AI in its various forms works can quickly become incomprehensible to lay readers, you don't need PhD-level knowledge of computing to understand what these tools do.
Most of the best-known AI applications today are forms of "generative AI". In general, generative AI processes large amounts of pre-existing data, picks up on patterns within that data through various types of advanced 'learning' techniques, and then uses those patterns to generate new output (such as predicting what should come next in a piece of text). In this way, AI is meant to mimic how humans learn by taking in information and then extrapolating from it, although it does so within a defined scope and lacks the full range of human cognition and understanding.
For example, the best-known generative AI tool today, ChatGPT, is a type of Large Language Model (LLM) tool that its developers built by 'training' the program on vast amounts of text from books, articles, and websites so that it produces realistically human-sounding responses to the prompts that users type into its chat box interface. People can use LLM tools like ChatGPT (or others, like Google's Bard) to answer questions; to summarize, restate, or elaborate on existing text; or to simply have a 'conversation' with a human-sounding correspondent. In this sense, ChatGPT and its ilk are analogous to a 'calculator for words', where a computer can take on the task of writing clearly and effectively by doing the hard work of putting the right words into the correct order (which is often harder than it sounds!).
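For readers curious about the underlying idea, here's a toy illustration (hypothetical Python, with a made-up three-sentence 'training set') of the learn-patterns-then-predict-the-next-word concept. It bears no resemblance to the scale or neural-network architecture of an actual LLM like ChatGPT, but it shows the basic spirit of a 'calculator for words':

```python
# Toy illustration only: a tiny bigram "language model" that learns which word tends to
# follow which, then predicts a likely next word. Real LLMs work on a vastly larger scale
# with neural networks, but the core idea (learn patterns from text, then predict what
# comes next) is similar in spirit.
from collections import Counter, defaultdict

TRAINING_TEXT = (
    "the client wants to retire early . "
    "the client wants to save more . "
    "the advisor wants to help the client ."
)


def train_bigrams(text: str) -> dict[str, Counter]:
    """Count how often each word follows each other word."""
    words = text.split()
    counts: dict[str, Counter] = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts


def predict_next(counts: dict[str, Counter], word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return counts[word].most_common(1)[0][0] if counts[word] else "?"


if __name__ == "__main__":
    model = train_bigrams(TRAINING_TEXT)
    print(predict_next(model, "client"))  # -> 'wants' (seen most often after 'client')
    print(predict_next(model, "wants"))   # -> 'to'
```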
Being optimized around the clear communication of information, LLM tools can be useful for advisors in delivering financial advice… if the advisor has already decided on which recommendations they're making. LLM tools are trained to be experts in language, but there's no guarantee about their competency or accuracy in any other subject matter. While it's possible to ask ChatGPT almost any question, and it will almost always respond with an answer, it's been well documented that those answers – particularly those involving highly specific and/or technical subject matter – can often be wildly inaccurate, from misstating basic facts around tax laws to inventing fictional legal citations.
And so, despite LLM tools' utility in helping to shape how financial advice is delivered, they haven't proven anywhere near reliable enough to be a source of individualized financial advice themselves. But for an advisor who has already gone through the process of gathering and analyzing information and developing recommendations, LLM tools can greatly reduce the time needed to communicate those recommendations in a client-friendly way. (Which is why ChatGPT-like tools are ultimately more likely to help financial advisors than replace them since they can streamline client communications and other advisory firm functions without diminishing the value of the advisor in providing the advice itself!)
Using AI To Generate Financial Planning Recommendations
Where using AI gets dicier for advisors around their fiduciary obligations is when they start to involve AI in developing and generating advice itself.
As noted above, general-purpose Large Language Model (LLM) tools like ChatGPT are ill-suited to give individualized financial advice because they're designed to predict words, not to understand specific knowledge domains like taxes, law, and finance. Any recommendation generated from ChatGPT would be highly prone to inaccurate, outdated, or flat-out made-up information, and so there would be plenty of reason to doubt that such a recommendation would be in the client's best interest.
But even if LLM tools were better trained on the specifics of financial planning and could deliver financial advice in a way that was at least free of factual errors, there would still be a significant issue around using such tools to actually generate advice in a fiduciary capacity, as opposed to simply communicating pre-determined recommendations: It would be nearly impossible to vet the methods used to produce that advice.
Simply put, AI technology – LLM tools included – is the closest thing there is to the 'black box' described earlier that takes in information and produces output without showing any kind of work. If an advisor simply typed a client's information into ChatGPT and sent the results along to the client, the advisor would be hard-pressed to show that they had met their fiduciary duty to give advice that they justifiably believed to be in their client's best interest (since a reasonable person probably wouldn't accept advice from a piece of technology without taking steps to verify how the technology actually arrived at that result). Furthermore, with no transparency into the AI's processes for creating recommendations, there's no way to ensure that those processes align with what the advisor describes in their marketing materials, since it's hard to avoid making misstatements or omissions about the processes used in giving advice when the advisor doesn't (and can't) know those processes to begin with!
The reason AI tools are effectively so opaque is that they incorporate a massive amount of information into each operation: for example, the dataset of books and Internet pages used to train ChatGPT totaled around 300 billion words, from which the program built billions of parameters dictating how it would string words together in response to a prompt. Furthermore, many AI programs by their nature continue to evolve as new data is fed into the system, which means that the calculations themselves change while the program is working. If a generative AI tool did show every calculation it was making, then, it would likely be more information than any human could review anyway.
So there's a Catch-22 in play: Advisors can't fulfill their fiduciary duty when giving recommendations generated by AI tools if there's no transparency into how those tools actually produce their output; yet if the AI tools did show all of their work, it would be too much for any human to audit and appropriately vet the results anyway.
Put differently, the characteristic that makes AI such a transformative tool in many regards – its ability to draw from and weave together myriad information sources into a clean, coherent output – will always make it hard to use as a generator of advice since producing that output requires such complex computation on the back end.
Staying Compliant While Using AI Tools
As mentioned earlier, if an advisor first develops their recommendations through verifiable methods (e.g., using traditional financial planning software where the assumptions and calculations are all known to the advisor) and they are confident that their recommendations are in the client's best interest, then there's no fiduciary issue with using an AI tool simply to communicate those recommendations.
For example, if an advisor thinks it would be in a client's best interest to make a Roth conversion based on projections from their financial planning software and then types that recommendation into ChatGPT to turn it into an email to the client, they've already fulfilled their fiduciary duty by determining the recommendation through verifiable (non-AI) methods; the AI is just a tool to communicate the recommendation in a client-friendly way.
Notably, though, advisors who use ChatGPT and other LLM tools to communicate are still subject to rules around advertising and client communications, which the AI might not be 'smart' enough to pick up on. In the example above, the advisor would still want to read over the ChatGPT-generated email to ensure that it accurately captures what the advisor wants to say (and avoids any compliance-related red flags like guaranteeing specific outcomes, which the AI may not be trained to catch), and to make any edits accordingly.
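As a purely hypothetical illustration of what that kind of 'second pass' might look like when partially automated, the sketch below (the phrase list and function names are invented for this example) scans a draft for a few wording red flags. A screen like this only catches phrases someone thought to list in advance, so it's a supplement to, not a substitute for, the advisor actually reading the draft:

```python
# Hypothetical illustration: a crude screen for compliance red flags in an AI-drafted email.
# The phrase list is illustrative only and not a substitute for human and compliance review.

RED_FLAG_PHRASES = [
    "guaranteed return",
    "guarantee that",
    "risk-free",
    "cannot lose",
    "will outperform",
]


def flag_compliance_phrases(draft: str) -> list[str]:
    """Return any red-flag phrases found in the draft text (case-insensitive)."""
    lowered = draft.lower()
    return [phrase for phrase in RED_FLAG_PHRASES if phrase in lowered]


if __name__ == "__main__":
    draft_email = ("Converting $50,000 to a Roth IRA this year is a guaranteed return "
                   "on your future tax savings.")
    print(flag_compliance_phrases(draft_email))  # -> ['guaranteed return']
```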
From the perspective of delivering advice in a fiduciary capacity, however, as long as the advice is in the client's best interest to begin with, there's minimal risk posed by the advisor's use of AI to communicate that advice (so long as it avoids material misstatements, includes appropriate disclosures, and meets any other communications standards required by regulators – all of which the advisor would need to verify in any client communication regardless of whether it was generated by AI).
Using AI In The Financial Planning Process (But Not To Generate Recommendations Themselves)
Although the safest way to use AI in giving financial advice may be as a communications aid after the advisor has developed the recommendations themselves, an increasing number of tools are emerging that seek to insert AI into other parts of the planning process – up to and including generating financial planning recommendations. Which raises questions about how planners can use these tools in a fiduciary manner – or whether it's even possible to do so in the first place.
It's difficult to assess exactly how many tools exist today that use AI to create and deliver financial planning recommendations to clients, but there are at least a few examples. In one article, Vanguard's Chief Information Officer, Nitin Tandon, mentions using AI within the company's Personal Advisor and Digital Advisor services to create financial plans for clients. Additionally, Envestnet/Yodlee refers to its AI FinCheck program as a "virtual financial assistant", describing it as a tool that "holistically gauges consumers' financial health… and suggests proactive actions to automate and improve their finances" on behalf of financial services firms.
Increasingly common, however, are tools that don't quite make the full leap to giving advice, but instead use AI to analyze client data and suggest potential strategies, which the advisor can then review for relevancy and appropriateness for the client. For example, Conquest Planning, which originated in Canada and was recently launched in the U.S., touts a "Strategic Advice Manager" that "performs thousands of complex calculations around every piece of client information" and delivers a list of ranked and prioritized strategies for each client. (Notably, Conquest's website avoids any mention of artificial intelligence – perhaps recognizing the unease that both advisors and consumers may feel around AI-generated financial advice – but their own description of how their product works reads like a straightforward portrayal of an AI-driven tool.)
Similarly, FP Alpha allows advisors to upload clients' tax, estate, and/or insurance documents, then pulls out key information and identifies potential planning strategies that could apply based on the client's personalized data. Again, it doesn't do the advisor's job of delivering recommendations to the client per se – it only suggests strategies that might be worth considering, which the advisor can then assess to decide whether they merit pursuing further.
These tools that use AI in the financial planning process, but that stop short of delivering an AI-generated recommendation themselves, may represent the middle ground between using AI to give its own financial advice and using it simply to communicate the advisor's pre-determined advice. Because although an advisor may not want to leave it to AI to generate recommendations on its own, there could still be value in AI's ability to leverage large quantities of data into suggested actions – as long as, given the issues around transparency and accuracy outlined above, the AI itself doesn't have the final say.
In a way, tools like Conquest, FP Alpha, and similar providers that come along could be thought of as a kind of 'digital analyst', which can process data and come up with a list of potential strategies faster than the advisor can themselves, but which leave the decision of whether or not to recommend a specific strategy up to the advisor.
Such tools could potentially play a valuable support role for advisors, helping them identify planning strategies to consider for clients given their specific circumstances (including perhaps uncommon or unconventional strategies that may not have occurred to the advisor on their own) – in effect, providing a more advanced version of planning checklists such as the ones created by fpPathfinder to help advisors spot potential planning issues for specific client situations. Which means not only could they be used in a fiduciary capacity (provided that the advisor subsequently conducted their own analysis to assess which, if any, of the proposed strategies would be in the client's best interest), but they could also hold the potential to enhance the quality of the advisor's advice, since they could surface a more comprehensive set of possible planning strategies to consider and ultimately narrow down to the few that are best for the client.
The Importance Of Technology Vendor Due Diligence
One of the problems around assessing the role of AI in financial advice is that "AI" isn't a regulated term in any way – any provider can claim that their product is driven by AI in some way regardless of whether it actually uses any technology that's commonly defined as AI; conversely, a provider that does use a common AI method like an LLM tool isn't under any obligation to divulge that fact.
In many respects, the fact that AI is all the rage today means that companies have an incentive to tout their AI capabilities, but it may not always be as marketable a term as it is today – meaning that the number of providers who call their products "AI tools" will likely fluctuate depending on what's most fashionable at the moment. Ultimately, what's important to know about the software isn't whether or not it's branded as an AI product by its provider, but how it actually functions in practice.
For advisors, this means that due diligence in researching software vendors – both potential vendors and those currently used by the advisor – is key for determining how a particular piece of software actually works, rather than relying simply on how vendors market themselves.
Questions to ask vendors during the due diligence process could include:
- Does the software use Artificial Intelligence (AI) technologies, including Machine Learning, Large Language Models (LLMs), Deep Learning, or any other processes commonly understood as AI? (And if so, in which function(s) is the AI used?)
- Is it possible for an advisor to review the underlying calculations for any output produced by the software?
- Is it possible for an advisor to edit or alter the software's output before it is presented to the client?
- Is client data used to 'train' or improve the software? If so, how is the client's privacy protected in doing so?
Given that many software providers roll out updates and new features regularly, it's important to conduct due diligence periodically on existing technology vendors as well as before signing on with new vendors.
Since the beginning of human history, people have sought out ways to perform everyday tasks more effectively and efficiently, which has, in turn, spawned generation after generation of technology designed to perform increasingly complex tasks.
But although technology today has advanced to the point where it's capable of things that would have never been thought possible – from performing brain surgery to launching and landing a rocket to beating Jeopardy! champions – one thing it still hasn't surpassed is the human brain's cognitive capacity to combine knowledge, reason, and empathy. This unique human ability is essential to turn a client's jumble of goals and circumstances into a sound recommendation.
Which is why, regardless of the specifics of regulations around being a fiduciary, technology will always struggle to replace the advisor's role in actually providing recommendations… because, at its core, giving advice in someone's best interests requires knowing them as a person, and that's a gap that technology, at least so far, hasn't yet bridged.
So, while technology can certainly aid an advisor throughout various stages of the process, when it comes time to decide on a recommendation – regardless of how much technology went into formulating that recommendation – the human brain is still the best tool for the job.