Artificial intelligence-based note-taking tools took the wealth management industry by storm over the past 12 months. The Oasis Group’s AI WealthTech Map includes 20 note-taking solutions.

The Oasis Group, of which I am CEO, predicts that AI assistants will proliferate over the next 12 months, improving workflows and enabling operations teams and advisors to complete more client service work with less human effort. But, as with any technology innovation, these improvements come with risks.

Chief among those risks are hallucinations, which is why my firm advocates maintaining a Human in the Loop (“HITL”) whenever an AI solution is used. This article examines what hallucinations are, how to spot them, how to confirm that an AI is hallucinating, and how to safeguard your firm against them.

What Are AI Hallucinations?

AI hallucinations refer to instances when artificial intelligence systems produce content that appears authoritative and well-reasoned but is actually incorrect or fabricated. These aren't simple errors but rather convincing fabrications that can be difficult to distinguish from accurate information. For wealth management professionals, recognizing and mitigating these hallucinations is crucial for maintaining client trust and making sound financial decisions.

Common Forms of AI Hallucinations in Wealth Management

In wealth management, hallucinations generally take six forms.

1. Factual Inaccuracies

Definition: Factual inaccuracies occur when AI systems present incorrect information as established facts. This includes citing wrong market statistics, misstating regulatory requirements, or referencing financial products that don't exist.

Warning Signs: Watch for unusual or surprising claims that contradict your existing knowledge, outdated information that doesn't reflect recent market changes, or statistics that seem too perfect or convenient. Numbers that are overly precise without appropriate context can also indicate fabrication.

Confirmation Methods: Always cross-reference key facts with trusted industry sources, regulatory publications, or your firm's validated data repositories. For market statistics, check reputable financial databases or recent reports from established research institutions. When regulatory information is provided, verify directly with official regulatory documentation.

Safeguarding Strategies: Implement a fact-checking protocol for all AI-generated content before client presentation. Maintain an updated database of verified information that AI tools can reference. Train staff to critically evaluate numerical claims and establish clear guidelines on which sources are considered authoritative. Regular audits of AI outputs against verified information can help identify patterns of factual errors.
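
For firms that want to automate part of this protocol, the Python sketch below shows one minimal approach: comparing AI-quoted figures against an internal repository of verified values. The `VERIFIED_FACTS` repository, its field names, and the tolerance are illustrative assumptions, not a real product.

```python
# Minimal sketch: cross-check AI-quoted statistics against a
# firm-maintained repository of verified figures before content
# reaches a client. VERIFIED_FACTS and the tolerance are illustrative.

VERIFIED_FACTS = {
    "sp500_2023_total_return_pct": 26.3,  # illustrative verified figure
    "us_10yr_yield_jun2024_pct": 4.4,     # illustrative verified figure
}

def check_claim(fact_key: str, claimed: float, tolerance_pct: float = 1.0) -> str:
    """Compare an AI-generated number against the verified repository."""
    if fact_key not in VERIFIED_FACTS:
        return "UNVERIFIED: no trusted source on file; route to human review"
    verified = VERIFIED_FACTS[fact_key]
    if abs(claimed - verified) <= abs(verified) * tolerance_pct / 100:
        return f"OK: matches verified value {verified}"
    return f"MISMATCH: AI claimed {claimed}, verified value is {verified}"

print(check_claim("sp500_2023_total_return_pct", 31.0))  # flags a mismatch
```

Anything the repository cannot confirm is routed to a human reviewer rather than silently accepted, keeping the HITL principle intact.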

2. False Correlations

Definition: False correlations appear when AI identifies relationships between economic indicators or market factors that don't actually exist or lack statistical significance. The system may connect unrelated data points and suggest causation where only coincidence exists.

Warning Signs: Be skeptical of unexpected or novel correlations that haven't been documented in financial literature, relationships that seem too perfect or neat, or correlations that lack logical economic mechanisms to explain them. AI systems might also present overly simplified explanations for complex market phenomena.

Confirmation Methods: Apply statistical verification tests to assess the validity of claimed correlations. Examine the time periods used for correlation analysis and check if the relationship holds across different timeframes and market conditions. Consult with domain experts who can evaluate whether the proposed relationship makes fundamental economic sense.
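
To make the statistical step concrete, here is a minimal Python sketch, assuming you can obtain the two data series the AI claims are related (the series below are random placeholders). It tests both significance over the full sample and sign stability across sub-periods.

```python
# Sketch: test whether an AI-claimed correlation is statistically
# significant and stable across sub-periods. The input series below
# are random placeholders standing in for real market data.
import numpy as np
from scipy.stats import pearsonr

def validate_correlation(x, y, n_splits: int = 3, alpha: float = 0.05) -> bool:
    """True only if the correlation is significant over the full sample
    and keeps the same sign in every sub-period."""
    r_full, p_full = pearsonr(x, y)
    if p_full >= alpha:
        return False  # not statistically significant overall
    for xs, ys in zip(np.array_split(x, n_splits), np.array_split(y, n_splits)):
        r_sub, _ = pearsonr(xs, ys)
        if np.sign(r_sub) != np.sign(r_full):
            return False  # relationship flips sign in a sub-period
    return True

rng = np.random.default_rng(0)
a = rng.normal(size=240)           # e.g., 20 years of monthly observations
b = rng.normal(size=240)           # an unrelated series
print(validate_correlation(a, b))  # expected: False -- no real relationship
```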

Safeguarding Strategies: Develop a correlation validation framework that requires multiple forms of evidence before accepting AI-identified relationships. Ensure your AI systems are trained to differentiate between correlation and causation in their explanations. Maintain a human-in-the-loop approach for any investment strategies derived from newly identified market relationships, and implement gradual testing of these insights before full deployment.

3. Fabricated Sources

Definition: Fabricated sources occur when AI references studies, reports, or regulatory guidelines that don't exist, or substantially misrepresents the content of legitimate sources. This creates a false impression of evidence-based recommendations when no such evidence exists.

Warning Signs: Be alert to vague citations without specific details (like publication dates or author names), references to institutions that sound familiar but aren't quite right, or citations that can't be located through standard research channels. The AI may also provide overly convenient quotes that perfectly support its arguments.

Confirmation Methods: Attempt to locate every referenced source using academic databases, official regulatory websites, or financial information services. For cited statistics or claims, trace them back to their original context to verify they're being represented accurately. Contact authors or institutions directly when necessary to confirm publication details.

Safeguarding Strategies: Create a citation verification system for all AI-generated content that includes client-facing materials. Build a curated library of verified sources that the AI can reference. Train advisors to recognize red flags in citations and implement a policy requiring verification of key sources before client presentation. Consider implementing technology that automatically checks citations against databases of known publications.
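
As one example of automated citation checking, the sketch below queries the public Crossref REST API, which indexes published works by DOI. The API endpoint is real; the workflow around the call is illustrative, and a found DOI still needs a human to confirm the source actually supports the claim attributed to it.

```python
# Sketch: verify that an AI-cited DOI corresponds to a real publication
# using the public Crossref REST API (api.crossref.org). The workflow
# around the call is illustrative.
import requests

def verify_doi(doi: str) -> str:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        return f"NOT FOUND: '{doi}' may be a fabricated citation"
    resp.raise_for_status()
    meta = resp.json()["message"]
    title = (meta.get("title") or ["(untitled)"])[0]
    return f"FOUND: {title}"

# A DOI that cannot be located should be escalated for human review.
print(verify_doi("10.0000/fake-citation-123"))  # expect: NOT FOUND
```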

4. Logical Inconsistencies

Definition: Logical inconsistencies manifest when AI produces recommendations or analyses that contradict its own previous statements or violate fundamental principles of finance or economics.

Warning Signs: Look for advice that changes substantially without clear reasoning, conflicting investment philosophies within the same document, or recommendations that would cancel each other out if implemented together. The AI might also propose strategies that violate basic financial principles like risk-reward relationships or diversification benefits.

Confirmation Methods: Review AI outputs for internal consistency by comparing different sections of analysis for contradictory statements. Apply established financial frameworks to evaluate whether recommendations align with sound economic principles. Create decision trees to map out the logical flow of the AI's reasoning and identify where it breaks down.
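
A deliberately simple illustration of such a consistency pass, in Python: flag any instrument that the same document recommends both buying and selling. The (action, ticker) input format is an assumption; real documents would first need the recommendations extracted.

```python
# Toy sketch: flag instruments that a single AI-generated document
# recommends both buying and selling. The (action, ticker) format is
# an assumption; real documents would need the recommendations
# extracted first.
from collections import defaultdict

def find_contradictions(recommendations):
    actions = defaultdict(set)
    for action, ticker in recommendations:
        actions[ticker].add(action)
    return [t for t, acts in actions.items() if {"buy", "sell"} <= acts]

doc = [("buy", "VTI"), ("sell", "AGG"), ("sell", "VTI")]
print(find_contradictions(doc))  # ['VTI'] -- contradictory advice to review
```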

Safeguarding Strategies: Implement systematic review processes that evaluate entire documents for consistency rather than reviewing sections in isolation. Develop rubrics based on financial principles that can be applied to AI outputs to ensure adherence to fundamental concepts. Train the AI using case-based reasoning that incorporates real-world financial scenarios with consistent solutions. Establish regular peer reviews where financial professionals evaluate AI-generated advice for logical coherence.

5. Overconfidence in Predictions

Definition: Overconfidence in predictions occurs when AI systems present highly uncertain market forecasts or financial projections with inappropriate levels of certainty. The system fails to adequately communicate the range of possible outcomes or the limitations of its predictive capabilities.

Warning Signs: Be cautious of precise numerical predictions without confidence intervals, consistent use of definitive language ("will happen" instead of "might happen"), absence of discussion about alternative scenarios, or lack of acknowledgment of known market uncertainties and external factors that could impact outcomes.

Confirmation Methods: Evaluate whether predictions include appropriate probability distributions or ranges. Check if the AI acknowledges key assumptions underlying its forecasts and how changes to those assumptions would affect outcomes. Compare the confidence level with historical prediction accuracy in similar market conditions.

Safeguarding Strategies: Configure AI systems to always present predictions with appropriate confidence intervals and probability distributions. Implement mandatory scenario analysis for all forward-looking statements. Train staff to communicate prediction uncertainty effectively to clients. Maintain historical records of prediction accuracy to calibrate confidence levels in future forecasts. Create visualization tools that clearly illustrate the range of possible outcomes rather than single-point predictions.
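
One way to implement the calibration idea, sketched in Python: widen a point forecast into an interval using the empirical quantiles of the model's own historical forecast errors. The error history and coverage level below are placeholders.

```python
# Sketch: convert a point forecast into a calibrated range using the
# empirical quantiles of the model's historical forecast errors.
# The error history and coverage level are placeholders.
import numpy as np

def forecast_interval(point_forecast, past_errors, coverage=0.90):
    """Forecast plus the middle `coverage` share of past errors."""
    lo, hi = np.quantile(past_errors, [(1 - coverage) / 2, (1 + coverage) / 2])
    return point_forecast + lo, point_forecast + hi

errors = np.random.default_rng(1).normal(0, 2.5, size=200)  # placeholder history
low, high = forecast_interval(7.0, errors)
print(f"Forecast: 7.0% (90% range: {low:.1f}% to {high:.1f}%)")
```

Presenting the range rather than the single point makes the model's uncertainty visible to both advisors and clients.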

6. Client-Specific Fabrications

Definition: Client-specific fabrications happen when AI "fills in the blanks" about a client's financial situation, goals, or preferences without having adequate data to support these assumptions. These fabricated details can lead to highly personalized advice that's misaligned with the client's actual circumstances.

Warning Signs: Notice when AI provides detailed personal insights despite limited client data, makes specific assumptions about client preferences without explicit input, references client history that hasn't been documented, or creates overly detailed client personas based on demographic information alone.

Confirmation Methods: Review all client-specific claims against documented client interactions and intake forms. Implement verification processes where key assumptions are explicitly confirmed with clients. Create data completeness scores that indicate when the AI is working with insufficient client information.
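
A data completeness score can be as simple as the share of required profile fields that are actually documented. A minimal Python sketch, with an illustrative field list and threshold:

```python
# Sketch: score client-profile completeness and block AI-generated
# personalization when required fields are missing. The field list
# and threshold are illustrative.
REQUIRED_FIELDS = ["risk_tolerance", "time_horizon", "income",
                   "liquidity_needs", "stated_goals"]

def completeness_score(profile: dict) -> float:
    present = sum(1 for f in REQUIRED_FIELDS if profile.get(f) not in (None, ""))
    return present / len(REQUIRED_FIELDS)

client = {"risk_tolerance": "moderate", "time_horizon": "10 years",
          "income": 150000}
score = completeness_score(client)
if score < 0.8:  # illustrative minimum before personalized advice
    print(f"Completeness {score:.0%}: gather more data before advising")
```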

Safeguarding Strategies: Develop clear policies about the minimum data requirements before AI can generate personalized recommendations. Implement systems that explicitly flag assumptions made by the AI and require advisor verification. Train client-facing staff to distinguish between data-driven insights and AI-generated assumptions. Create client confirmation protocols where key elements of their profile are verified before major financial decisions. Design AI interactions to explicitly acknowledge data gaps and request additional information rather than making assumptions.

Leveraging AI Responsibly

Wealth management firms should harness the power of AI no matter their size, because their competitors will leverage AI to accomplish more client work with fewer resources. However, AI hallucinations pose real risks to client outcomes and regulatory compliance. As AI tools become more integrated into financial advisory services, developing robust verification processes and maintaining human oversight becomes increasingly important.

The most effective wealth management approaches will combine the computational power of AI with human judgment and expertise. By understanding the different forms AI hallucinations can take, financial professionals can better leverage these powerful tools while mitigating their unique risks.

John O’Connell is founder and CEO of The Oasis Group, a consultancy that specializes in helping wealth management and technology firms solve their most complex challenges.
