
As artificial intelligence becomes mainstream, advisors may find clients presenting them with a financial plan generated by a large language model and asking for a thumbs up.
While such requests pose a potential challenge, financial advisors point to AI's shortcomings in developing an appropriate plan, and to the risks those shortcomings carry, no matter how professional and clean the layout.
Participants on a CFP thread on Reddit recently addressed the issue, with several noting AI errors and one wondering how much longer the technology's limits will persist. ThinkAdvisor asked advisors for their experience with clients and AI.
"Clients sometimes bring a financial plan that they've assembled and want validation for," San Diego advice-only financial planner Michael Anderson of AdviceOnly wrote in an email. "I tell them all the same thing: it's easier to engineer a plan than to reverse-engineer one."
Lately he's seen more and more AI-generated plans.
"Many clients want to check our work against ChatGPT or another LLM but this is usually more time-consuming and expensive for the client," he said.
"LLM calculations are somewhat opaque, even when asking directly for them. In addition, many calculations or projections are based on decisions or priorities of the client. If the client doesn't know to give this information to the model, or if the model isn't trained to ask, it will have made many assumptions along the way which may not be accurate to a client's situation," Anderson said.
Clients haven't brought a comprehensive AI-generated financial plan to Jiayi (Kristy) Xu, founder of Global Wealth Harbor, but some have asked her to double-check aspects of their financial plan that the technology suggested.
"I have had clients ask me about specific financial planning topics or individual aspects of their plan after receiving answers from AI. What I am seeing is that AI can often provide a reasonable starting point on a concept, but clients do not always know what information the AI needs in order to generate an answer that is truly accurate for their situation," she said.
"In financial planning, the quality of the answer depends heavily on the quality and completeness of the inputs," said Xu, who noted AI can miss how interrelated factors can interact in a client's specific situation.
"Many clients may ask a very narrow question like certain financial planning topics they have heard of without including the broader context such as taxes, cash flow, account types, timing, investment structure or other competing goals and that can lead to an answer that sounds correct in theory but is incomplete or misleading in practice," she explained, also citing clients' life goals and personal constraints.
"AI often gives a more linear answer based on what is theoretically correct, without fully accounting for how one recommendation may affect other parts of the plan," she said. Xu tells clients that "AI can be a useful tool for learning about financial planning concepts, but at this point it should not be relied on for actual planning without careful review and professional judgment."
CPA and retirement planner Kurt Supe, posting on X this week, presented a hypothetical scenario to demonstrate AI's limits in financial planning.
A program like ChatGPT may produce a polished 22-page retirement plan with tables, charts and projections to age 90, he noted. But its Roth conversion strategy might ignore pension income, or the plan could ignore life expectancy factors or make a Social Security claiming recommendation that could cost $190,000 in lifetime benefits.
Every section could look exactly right but contain something materially wrong, Supe wrote.
"The most dangerous financial advice isn't obviously bad. It's confident, well formatted, completely personalized to your situation," he said. "And wrong in ways you have no way of knowing."