When Talking Isn’t Private: Discovery, Privilege, and the Legal Risks of Using Public AI Tools in Ontario

                                 By Lou Brzezinski


Artificial intelligence tools have quickly become part of everyday professional life. Lawyers, businesses, and individuals now routinely turn to platforms like ChatGPT or Claude to brainstorm ideas, summarize documents, or test arguments.

A critical legal question follows from this trend—one that many users overlook:

If you share legal information with a public AI tool, does that information remain private or privileged?

Under current law, the answer is uncomfortable: often no.

This article explains why prompts and outputs from public AI tools are likely discoverable in Ontario litigation, why privilege usually does not apply, and what recent developments—particularly a U.S. court decision—may signal for Canadian practice.

The Illusion of Confidentiality

Contemporary AI tools are powerful but limited. They do not “think” like people or exercise independent legal judgment. They operate as complex statistical models trained on enormous datasets, producing responses based on patterns in that data.

Most widely used AI tools are cloud-based services. Their terms of use commonly permit:

  • Storage of user inputs

  • Use of those inputs to train or improve models

  • Sharing with affiliates or service providers

  • Disclosure to governmental authorities when required by law

In practical terms, there is often no sound basis for assuming privacy or confidentiality over what is typed into these platforms. Treating them as if they were a secure, private workspace is risky.

A Cautionary Example: United States v. Heppner

These risks came into sharp focus in United States v. Heppner (S.D.N.Y., February 2026).

In that case, a criminal defendant under FBI investigation used a public version of Claude, an AI tool developed by Anthropic, to explore questions related to his case. Acting on his own initiative—without instructions from his lawyers—he entered prompts that included information he had received from counsel and generated extensive AI conversations. He then saved those materials and shared them with his legal team.

When his devices were seized, the authorities also obtained the AI-generated documents. The defendant argued that these materials were protected by attorney–client privilege or the work-product doctrine.

Judge Jed Rakoff rejected those arguments outright. The court held that:

  • The materials were not lawyer work product, because they were not created by counsel or at counsel’s direction

  • The AI conversations were not attorney–client communications, because the AI tool was not a lawyer

  • The platform’s privacy policies undermined any claim of confidentiality

The court concluded there was no “remotely plausible basis” for claiming privilege over the AI communications.

Although this is a U.S. decision, its reasoning aligns closely with established Canadian principles regarding privilege, confidentiality, and disclosure to third parties.

How Ontario Law Treats AI Prompts and Outputs 

1. AI Chats Are “Documents”

Ontario’s Rules of Civil Procedure define “document” broadly to include data and information in electronic form. The courts already treat as documents:

  • Emails

  • Text messages

  • Social media communications

  • Cloud-stored records

Prompts entered into AI tools, the responses they generate, saved chat logs, screenshots, and downloaded outputs all fall squarely within this definition. Courts focus on content and relevance, not the technology used to create or store it.

If an AI interaction exists, is stored somewhere, and is relevant to an issue in litigation, it can be requested in discovery and—absent privilege—ordered produced.

2. Solicitor–Client Privilege Rarely Applies

In Ontario, solicitor–client privilege protects confidential communications between a client and their lawyer made for the purpose of seeking or giving legal advice.

AI-related communications generally fail this test:

  • The AI provider is not a lawyer

  • There is no solicitor–client relationship with the platform

  • Terms of service often permit storage, internal use, and in some cases sharing of data

  • Voluntary disclosure of privileged content to a third party typically waives privilege

Sharing legal advice with a public AI tool is functionally similar to forwarding it to a stranger or copying an unrelated third party on a privileged email. Even if the information was originally privileged, that privilege may be lost once it is voluntarily disclosed to an outside party without adequate confidentiality protections.

3. Litigation Privilege Is Also Uncertain

Ontario recognizes litigation privilege for materials created for the dominant purpose of actual or reasonably anticipated litigation, typically by or for counsel.

There may be situations where an argument for litigation privilege is possible, for example:

  • Counsel uses a secure, enterprise-grade AI system

  • The use forms part of counsel’s litigation strategy

  • Robust contractual and technical confidentiality protections are in place, and data is not used to train public models

By contrast, when an individual independently uses a public AI platform without any instruction from their lawyer, Ontario courts are likely to view the resulting material as personal research or notes, not privileged legal work product. The reasoning in Heppner would be persuasive: this is not a communication with counsel, nor is it work prepared by or for counsel under conditions of confidentiality.

Discovery Consequences in Ontario Civil Litigation

In civil proceedings, parties must list all relevant, non-privileged documents in their Affidavit of Documents. This obligation extends to AI-related materials where they:

  • Are stored locally or in a cloud account

  • Bear on issues such as knowledge, intent, drafting history, due diligence, or decision-making

Once litigation is reasonably foreseeable, deleting or altering AI chats and outputs can raise spoliation concerns, just as the destruction of emails, texts, or other electronic records would.

Criminal Investigations

In criminal matters, AI-related data can be obtained in several ways, subject to applicable constitutional and statutory safeguards:

  • Seizure of devices under warrant or other lawful authority

  • Production orders or similar mechanisms directed at AI providers

  • Access to cloud-stored data through user accounts found on seized devices

Whether police can compel an AI provider to disclose data will depend on factors such as jurisdiction, where the data is stored, and privacy and data-protection laws. From the user’s standpoint, however, information accessible from their devices or online accounts is potentially reachable through lawful investigative tools.

Are There Ontario Cases on Point?

At present, there are no reported Ontario decisions that directly address the discoverability or privilege status of AI prompts and outputs.

That gap does not mean that AI-related materials exist in a legal vacuum. Ontario courts already have a well-developed framework governing:

  • Electronic discovery

  • Solicitor–client privilege and waiver

  • Litigation privilege

  • Proportionality in discovery

  • Preservation and spoliation

AI-generated content fits comfortably within this existing structure. Courts do not need new doctrines to conclude that AI chats are discoverable and usually not privileged; they can reach that conclusion by applying established principles to a new technological context.

Practical Takeaways

For Individuals

  • Do not assume that conversations with AI tools are private or legally protected

  • Do not paste legal advice, confidential instructions from your lawyer, or sensitive personal facts into public platforms

  • Treat public AI tools as you would a third party, not as a confidential legal adviser

For Businesses and Legal Departments

  • Adopt clear policies governing the use of AI tools in the organization

  • Prohibit employees from uploading privileged, confidential, or regulated information into public AI platforms

  • Consider enterprise solutions that offer:

    o Contractual confidentiality obligations

    o Data segregation

    o No-training or limited-use commitments

  • Include AI platforms and their associated data in:

    o Litigation-hold notices

    o Records-management policies

    o E-discovery planning

Bottom Line

Unless and until Ontario courts hold otherwise, the safest working assumption is this:

Prompts and outputs from public AI tools are discoverable documents, not privileged communications.

These tools can be useful and efficient. Used without a clear understanding of their legal implications, however, they can quietly erode privilege and other protections that clients and counsel expect to rely on.
