Protecting your practice: a guide to the privacy of AI notetakers

April 14, 2025
9 min read

Disclaimer: All information in this blog post is accurate as of the publication date.

As mental health professionals, we navigate a delicate balance every day. We create safe spaces where clients can share their deepest vulnerabilities, while safeguarding their trust through confidentiality. It's a sacred covenant that forms the foundation of therapeutic healing.

Yet today, many of us find ourselves at a crossroads. The promise of AI-powered tools to reduce administrative burden and return our focus to client care is undeniably attractive. These tools offer a path away from documentation fatigue and back to the heart of why we became therapists in the first place: to be fully present with our clients.

But this promise comes with legitimate concerns. When we invite technology into our practice, are we inadvertently compromising the very confidentiality we've sworn to uphold? Are we trading convenience for privacy? Many therapists experience a profound unease at this possibility, and rightfully so.

The good news is that you don't have to choose between efficiency and ethics. By understanding the landscape of data privacy in therapy technology, you can make informed choices that honor both your need for support and your commitment to client confidentiality.

This guide — fact-checked by a data privacy lawyer — will walk you through the critical privacy considerations when evaluating any AI-powered clinical tool. We hope it helps you ask the right questions before entrusting your practice, and your clients' most personal moments, to a technology partner.

How to tell if a tool is private

No two privacy policies are the same. We brought together experts in AI, clinical ethics, and privacy law to figure out what to look for in an AI-powered medical scribe. That led us to identify six key criteria against which a privacy policy can be judged:

  1. Users must opt in before their data is used to train the AI tool.
  2. Users have the power to opt out of their data being used to improve the product.
  3. The terms do not allow the company to sell the data it collects.
  4. Privacy policies are actively maintained and frequently (at least annually) updated.
  5. The company maintains SOC 2-certified data privacy practices.
  6. Therapists — where legally possible — are involved in responding to a subpoena or government request for information.

Read on for a more detailed description of each criterion, as well as a current round-up of leading tools.

1. Explicit consent for model training

When you document a session using an AI notetaker, that deeply personal narrative doesn't simply disappear after your note is generated. Many companies reserve the right to use that data — your client's most vulnerable moments — to train their AI models. The therapeutic space holds some of our clients' most sensitive disclosures, and the question of who controls how these stories are used touches the very heart of our ethical practice.

The key distinction isn't whether AI learning happens (after all, continuous improvement benefits everyone) but whether clients and therapists maintain meaningful choice and control over their participation in that learning process.

What to look for

If you are concerned with how session recordings might be used, you may want to consider tools that allow you to generate notes without recording sessions at all.

Otherwise, review the privacy policy for explicit language about data usage for AI training. Look for terms like "model training," "service improvement," or "algorithm development." The policy should clearly state:

  • Whether client session data is used for AI training
  • Whether participation is opt-in (consent-based) or opt-out (automatic)
  • How the data is de-identified and protected (see the simplified sketch below)
  • What specific information is retained
  • How long the data is kept

The gold standard is explicit opt-in consent — where nothing is used for training unless you actively choose to participate — rather than opt-out models where your data is used by default unless you take action.
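
To make the de-identification point above more concrete, here is a deliberately simplified sketch in Python. It is a toy illustration of the general idea of replacing direct identifiers with placeholders before data is put to any secondary use. It is not a description of how any vendor discussed here actually processes data, and real de-identification pipelines rely on far more robust methods than the hand-written patterns below.

```python
# Toy illustration of de-identification: replacing direct identifiers
# in a transcript with placeholders before any secondary use.
# All data and patterns here are hypothetical examples.
import re

transcript = (
    "Maria Lopez said her daughter starts at Lincoln Elementary on 09/02/2025. "
    "She can be reached at 555-123-4567."
)

# Hand-written patterns for a few identifier types (real systems detect
# names, dates, and contact details automatically and far more reliably).
patterns = {
    r"\b\d{2}/\d{2}/\d{4}\b": "[DATE]",
    r"\b\d{3}-\d{3}-\d{4}\b": "[PHONE]",
    r"\bMaria Lopez\b": "[CLIENT NAME]",
    r"\bLincoln Elementary\b": "[SCHOOL]",
}

deidentified = transcript
for pattern, placeholder in patterns.items():
    deidentified = re.sub(pattern, placeholder, deidentified)

print(deidentified)
# [CLIENT NAME] said her daughter starts at [SCHOOL] on [DATE]. ...
```

If a policy says data is "de-identified," it is fair to ask which identifiers are removed, by what method, and to what standard.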

Comparing AI tools

Nabla, Freed, and ClinicalNotes all explicitly state that they may use client data to train their models, offering only vague descriptions of their anonymization processes.

Several other platforms like Blueprint, Heidi, and Mentalyc have terms in their privacy policy that would implicitly allow them to use data for training without clearly communicating this to users.

Twofold and AutoNotes do not disclose whether or how data is used to train their models.

Upheal requires explicit opt-in consent before any data is used for training purposes. This consent-first approach means your and your clients' data is never used to train AI models unless you've both made an active, informed choice to participate. Additionally, all data is thoroughly de-identified according to rigorous standards detailed in Upheal's privacy policy.

2. The power to opt out

Client autonomy doesn't end when technology enters the therapy room. Just as clients can request certain information be kept out of their formal records, they should have a say in how their data is used by technology providers. The power to opt out of data being used to improve a product isn't just a nice feature — it's an extension of therapeutic ethics in digital form.

What to look for

Seek language that explicitly outlines opt-out procedures. The policy should address:

  • How to initiate an opt-out request
  • What specific data usages can be opted out of
  • Whether opting out affects service functionality
  • Confirmation processes for opt-out requests

Be wary of policies that make opting out difficult or impossible, or that bury opt-out information in dense legal language.

Comparing AI tools

The following tools do not allow your clients to opt out of their data being used to improve the platform, but do follow local laws allowing users to opt out of marketing communications:

  • Blueprint
  • Heidi
  • Mentalyc
  • ClinicalNotes
  • Twofold

AutoNotes and Nabla do not disclose whether clients can opt out of their data being used. The phrase "opt-out" does not appear once in their privacy policies.

Freed, on the other hand, does allow users to opt out by digging into their settings. Clients are opted in by default. Per Freed's privacy policy, opting out may limit your access to some features and functionality.

Upheal does not require that its users share this kind of sensitive data. If users do want to share their data, Upheal collects explicit opt-in consent before de-identifying and using it.

3. Selling session data

Perhaps the most troubling practice in the therapy tech space is the selling of data. When companies monetize session information — even in aggregated form — they transform sacred therapeutic exchanges into commercial assets. This fundamentally violates the core ethic of beneficence that therapy depends upon.

What to look for

Examine the policy for language about data sharing with third parties, particularly for commercial purposes. Red flags include:

  • Vague terms about "business partners" or "affiliates"
  • Permission to share data for "business purposes" without clear definition
  • Language about data as a company asset that can be transferred during acquisition
  • Absence of explicit promises never to sell data

The strongest policies will contain a clear, unequivocal statement that client data will never be sold or shared for commercial purposes.

Comparing AI tools

Most tools do not disclose if and how they sell your clients’ data in their privacy policies, including Heidi and Nabla.

ClinicalNotes’ privacy policy would allow them to sell de-identified session data, and AutoNotes reserves the right to disclose de-identified data “for research purposes.”

While Freed and Twofold do not mention the sale of data in their privacy policies, their terms of service explicitly give them the right to sell aggregated data.

Upheal, Mentalyc, and Blueprint all explicitly state that they do not sell personal information.

4. Up-to-date privacy policies

The digital privacy landscape evolves constantly, with new regulations, threats, and best practices emerging regularly. A company's commitment to maintaining current privacy policies reflects their overall dedication to protection. Outdated policies may indicate neglect of privacy concerns or failure to adapt to emerging standards.

What to look for

When considering a tool, check how current their privacy policy is by looking for:

  • Last update date (should be within the last year)
  • References to current regulations (HIPAA, GDPR, CCPA, etc.)
  • Clear version history or change documentation
  • Proactive notification systems for policy updates

Companies with regularly updated policies demonstrate ongoing commitment to privacy protection.

Comparing AI tools

The following tools have recently updated their privacy policies:

  1. Upheal (January 2025)
  2. Heidi (October 2024)
  3. Freed (September 2024)
  4. ClinicalNotes (September 2024)
  5. Blueprint (July 2024)
  6. Nabla (April 2024)

At the time of writing (March 2025), Mentalyc had not updated its privacy policy since 2023. Since then, they've removed the date from their privacy policy altogether. Neither AutoNotes nor Twofold shares the date of their most recent update, which makes it harder to track changes transparently.

5. SOC 2 Certification

Many companies claim HIPAA or GDPR compliance, but these claims often rely on self-assessment rather than external verification. Third-party certifications like SOC 2, by contrast, require rigorous independent auditing of security practices, providing objective confirmation of a company's privacy protections.

What to look for

Search for:

  • Explicit mention of SOC 2 Type II certification (not just "compliance")
  • Information about independent auditors
  • Recency of certification (should be current)
  • Willingness to share audit reports upon request

Companies that are truly committed to their security practices will prominently display certification information rather than hiding behind vague compliance claims.

Comparing AI tools

Many tools currently report holding SOC 2 certifications.

However, Twofold and AutoNotes do not appear to have any such certifications.

While Mentalyc claims SOC 2 "compliance," their website currently states that they have not "invested heavily" in achieving SOC 2 certification, as Upheal has.

6. Subpoena vulnerability

As therapists, we understand the limited circumstances under which we might be legally compelled to break confidentiality. When client information resides on third-party servers, those parties might also be compelled to share your clients' data. Fully informed consent is the only ethical way to proceed; any AI tool you use should be clear upfront about whether and how they will disclose sensitive information in response to a subpoena.

What to look for

Examine policies for:

  • Transparent disclosure about subpoena response processes
  • Commitment to notifying users about legal demands when legally permitted
  • Data minimization practices that limit what can be subpoenaed
  • Zero-knowledge or end-to-end encryption that technically limits access (see the sketch below)

Companies with strong privacy values will be honest about legal limitations while implementing technical and policy safeguards to maximize protection.
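
To illustrate that last point about zero-knowledge encryption, here is a minimal conceptual sketch in Python using the open-source cryptography library. The scenario and data are hypothetical; the point is simply that when notes are encrypted on the clinician's device with a key the vendor never receives, the vendor's servers hold only ciphertext, so a subpoena served on the vendor cannot yield readable session content.

```python
# Conceptual sketch of client-side ("zero-knowledge") encryption.
# Assumption: the key is generated and kept on the clinician's device;
# the vendor only ever receives the encrypted blob.
from cryptography.fernet import Fernet

# 1. A key is created and stored locally, never shared with the vendor.
key = Fernet.generate_key()
cipher = Fernet(key)

# 2. The session note is encrypted on the device before upload.
note = b"Hypothetical session note: client discussed workplace anxiety."
ciphertext = cipher.encrypt(note)

# 3. The vendor stores only the ciphertext. Without the key, it cannot
#    produce a readable note, even in response to a subpoena.
print(ciphertext)

# 4. Only the key holder (the clinician) can recover the plaintext.
print(cipher.decrypt(ciphertext).decode())
```

In practice, most cloud-based tools must process the plaintext in order to generate notes, so fully zero-knowledge storage is rare; that is exactly why the disclosure and notification commitments described above matter so much.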

Comparing AI tools

In their privacy policy, Twofold does not directly explain how they might use data to fulfill their "legal obligations," or whether that might extend to sharing data with government agencies.

Heidi does not disclose how they would respond to government requests for information.

Most other tools state that they may disclose personal information in response to a subpoena, without notifying the clinician:

  • Blueprint
  • Freed
  • AutoNotes
  • ClinicalNotes

If Nabla is legally compelled to disclose information stored in their platform, they will notify the therapist within a few business days.

Upheal and Mentalyc both commit to notifying providers when served with a subpoena (except where they are legally prohibited from doing so).

By involving clinicians to the greatest legal extent possible, these platforms help therapists maintain therapeutic privilege. Upheal uniquely takes it a step further with product features that allow clinicians to separate notes and transcripts, ensuring that only minimally required information is disclosed to government agencies.

Choose a partner, not just a product.

In a field built on trust and confidentiality, the technology tools we choose reflect our deepest professional values. When evaluating AI-powered documentation tools, we're not just selecting software—we're choosing partners who will either honor or compromise our ethical commitments.

At Upheal, we built our platform with the understanding that we're guardians of therapy's most sacred element: the confidential therapeutic relationship. Our approach to privacy isn't an afterthought or marketing tactic—it's the foundation of everything we create.

We invite you to apply the standards in this guide to any technology you consider bringing into your practice, including ours. Ask the difficult questions. Read the privacy policies. Demand transparency. Your clients trust you with their most vulnerable moments, and you deserve technology partners who honor that trust as seriously as you do.

Because in the end, the most advanced AI in the world isn't worth compromising the fundamental promise we make to every client: what you share here is protected.
