Five major generative AI engines — ChatGPT, Claude, Perplexity, Gemini, and Microsoft Copilot — are delivering inaccurate and potentially harmful financial advice to ultra-high-net-worth (UHNW) families, according to The Wealth AI Audit, a joint study by 5W and Haute Wealth released on May 6, 2026.
Overview
The audit evaluated how these AI platforms respond to high-stakes financial planning questions involving premium financing, private placement life insurance (PPLI), irrevocable life insurance trusts (ILITs), estate liquidity, business succession, and charitable legacy planning. Despite fluent and confident responses, all five engines consistently failed to provide risk disclosures required of human financial advisors, cited outdated tax law, and produced contradictory answers when given identical prompts.
A critical finding is that all five systems continue to base estate planning recommendations on the outdated assumption that the federal estate, gift, and generation-skipping transfer (GST) tax exemption will revert to approximately $7 million per person. That so-called "sunset" provision was nullified when President Trump signed the One Big Beautiful Bill Act (OBBBA) on July 4, 2025, permanently raising the exemption to $15 million per individual ($30 million per married couple) effective January 1, 2026, with inflation indexing thereafter. Models trained on pre-2025 advisory content still reflect the repealed framework.
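The planning stakes of that gap can be sketched with a simplified, hypothetical calculation. This is not the audit's methodology: it assumes a flat 40% top federal rate on value above the exemption and ignores graduated brackets, deductions, portability elections, and state-level estate taxes; the $20 million estate is invented for illustration.

```python
def estate_tax(estate_value: float, exemption: float, rate: float = 0.40) -> float:
    """Simplified federal estate tax: flat top rate applied to value above
    the exemption. Ignores deductions, credits, and state-level taxes."""
    return max(0.0, estate_value - exemption) * rate

estate = 20_000_000  # hypothetical single-person estate

# Outdated "sunset" assumption (~$7M exemption) vs. current law ($15M under OBBBA)
sunset_liability = estate_tax(estate, 7_000_000)    # 5,200,000
current_liability = estate_tax(estate, 15_000_000)  # 2,000,000

print(f"Liability under repealed sunset assumption: ${sunset_liability:,.0f}")
print(f"Liability under current law:                ${current_liability:,.0f}")
```

For this hypothetical estate, a model reasoning from the stale framework would overstate the liability by $3.2 million, the kind of error that can drive the over-structuring the audit warns about.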
What the audit found
The study documents three primary categories of failure:
- Material misstatements of law: AI engines uniformly recommend estate planning strategies based on the repealed estate tax sunset, potentially leading clients to over-structure their estates or under-leverage the current exemption.
- Missing risk disclosures: None of the platforms included warnings about the risks of premium financing, such as interest rate volatility or policy lapse, which are mandatory in regulated human advisor communications.
- Inconsistent responses: Repeated identical prompts yielded conflicting advice from the same model, indicating unstable reasoning.
Additionally, the audit identifies five recurring hallucination patterns across all platforms, including false citations of non-existent IRS rulings and misrepresentation of boutique advisory firm services.
The report notes that 66% of Americans who have used generative AI report using it for financial advice, and that 85% of those users acted on the AI's recommendation, according to an Intuit Credit Karma poll of 1,019 adults conducted August 7–14, 2025.
Regulatory scrutiny is increasing. FINRA’s 2026 Annual Regulatory Oversight Report, published December 9, 2025, included a dedicated section on generative AI, flagging hallucination, bias, and accuracy failures as supervisory priorities for broker-dealers.
Tradeoffs
While AI tools offer speed and accessibility, they lack fiduciary duty, regulatory compliance safeguards, and real-time awareness of legal changes. Their training data lags behind legislation, and they cannot assess an individual client's risk tolerance or family dynamics.
The absence of source transparency and the tendency to generate authoritative-sounding but incorrect guidance create material liability risks for families and firms relying on AI-generated financial plans.
When to use it
The audit does not recommend using current generative AI systems as standalone advisors for UHNW financial planning. Instead, it suggests AI may serve as a preliminary research aid when paired with verification from licensed, fiduciary professionals.
The full report is available at 5wpr.com/research and hautewealth.ai.
Bottom line: AI is now a silent participant in high-net-worth financial decisions, but its outputs require rigorous human validation due to persistent inaccuracies and regulatory non-compliance.