As artificial intelligence tools become increasingly popular sources of advice, some U.S. attorneys are urging clients not to treat AI chatbots as confidential confidants, especially when legal liability or personal freedom is at stake. This caution intensified after a federal judge in New York ruled earlier this year that the former chairman of a bankrupt financial services firm could not withhold his AI chatbot conversations from prosecutors investigating securities fraud charges.
In response to this ruling, legal professionals have been advising that chats with AI platforms such as Anthropic’s Claude and OpenAI’s ChatGPT might be subject to discovery by prosecutors in criminal investigations or by opposing parties in civil litigation. Alexandria Gutiérrez Swette, an attorney at New York-based Kobre & Kim, emphasized the need for clients to exercise caution when interacting with these tools.
While communications between clients and their lawyers are generally protected under U.S. law, conversations with AI chatbots carry no such protection. Consequently, lawyers are recommending measures to help safeguard privacy when using AI tools. Over a dozen prominent U.S. law firms have issued advisories and sent emails to clients outlining strategies to reduce the risk of chatbot conversations being introduced as evidence in court. Some firms have also incorporated similar warnings into their client agreements.
For example, New York-based Sher Tremonte recently added a clause to its client contracts stating that sharing a lawyer’s advice or communications with an AI chatbot could waive the attorney-client privilege that typically protects such exchanges.
In the case that triggered these concerns, Bradley Heppner, former chairman of the bankrupt financial services company GWG Holdings and founder of Beneficient, was charged last November with securities and wire fraud. Heppner, who pleaded not guilty, had used Anthropic’s Claude to draft reports related to his defense, which he shared with his attorneys. His legal team argued that these AI-generated documents should remain confidential because they contained privileged defense information.
Prosecutors, however, contended they were entitled to the materials Heppner created with Claude, noting that his lawyers were not directly involved in generating the content and that attorney-client privilege does not extend to AI platforms. U.S. District Judge Jed Rakoff in Manhattan ruled in February that Heppner must produce 31 documents created with Claude, stating that no attorney-client relationship can exist between a user and a platform like Claude.
On the same day, however, U.S. Magistrate Judge Anthony Patti in Michigan reached a different result in a separate case. He ruled that a woman representing herself in a lawsuit against her former employer did not have to disclose her ChatGPT conversations about her employment claims, treating those chats as her personal work product rather than discoverable communications.
The privacy policies of both OpenAI and Anthropic indicate that user data may be shared with third parties, and both companies advise consulting qualified professionals before relying on their chatbots for legal advice. Judge Rakoff noted during a hearing that Claude explicitly tells users they should not expect privacy in their inputs.
In light of these developments, law firms are scrambling to establish guidelines for AI use. Some recommend choosing “closed” AI systems designed for corporate use, which may offer stronger protections, although those protections remain largely untested in court. Firms also suggest that AI legal research is more likely to be privileged if conducted at a lawyer’s direction and if chatbot prompts say so explicitly, for example, “I am doing this research at the direction of counsel for X litigation,” as New York’s Debevoise & Plimpton has advised.
Disclosures about AI use are also increasingly being written into law firms’ contracts with clients, reflecting the growing need to address AI-related risks in legal practice.
