How AI tools use session data

Trust is the foundation of the therapeutic alliance.
When clients share their deepest vulnerabilities with us, they're entrusting us with their most personal stories — stories that deserve and require our utmost protection.
It's right, then, for therapists to cast a critical eye on AI platforms built for their profession, particularly when it comes to privacy, security, and ethics.
This article introduces therapists to how AI actually works, in hopes of equipping clinicians to make an informed decision when considering how AI might support their private practice.
Demystifying AI
As AI has grown in popularity, it’s been talked about like a kind of magic. AI features are decorated with ✨ sparkles ✨, and technologists claim to build “generative” platforms that create content out of thin air.
AI is definitely advanced — but it’s not magic.
Artificial intelligence describes the use of large datasets to build sophisticated models, which can then be applied to create content that looks, sounds, and reasons like it was crafted by a human. But AI is not really generating content from nothing. It is deconstructing (a lot of) data, synthesizing it, and then reconstructing information in a ready-to-use format.
The mysticized reputation of AI is similar to that of mental health care. Therapy isn’t magic — though it might feel like it sometimes. It’s an evidence-based application of our collective knowledge of psychology in order to meaningfully help people.
AI is just data science. Used well, it could help a lot of therapists help a lot of people. But to responsibly use AI to its full potential, we have to understand it for what it really is.
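For the curious, that “deconstruct, synthesize, reconstruct” idea can be shown in a few lines of code. The toy sketch below (in Python, with a made-up three-sentence “corpus”) counts which word tends to follow which, then generates new text from those counts. Real LLMs use neural networks trained on billions of examples, but the principle is the same: patterns in, patterns out. No magic.

```python
import random
from collections import defaultdict

# A toy "language model" on a made-up corpus of clinical-sounding text.
corpus = (
    "the client reported feeling anxious . "
    "the client reported improved sleep . "
    "the therapist reviewed coping skills ."
).split()

# 1. Deconstruct: tally every observed word-to-next-word transition.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# 2. Reconstruct: generate "new" text by sampling a likely next word,
#    over and over. Nothing here is created from thin air; every word
#    comes from patterns found in the data.
word = "the"
output = [word]
for _ in range(6):
    word = random.choice(transitions[word])
    output.append(word)

print(" ".join(output))  # e.g. "the client reported improved sleep ."
```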
Where do AI tools get their data?
Therapy applies a vast, shared knowledge to deliver nuanced and individualized care. That shared knowledge is called psychology, and it’s a massive dataset of individual case studies, relevant academia, and lived experience.
AI, at its core, works similarly.
Just like the discipline of psychology, AI systems begin with incredibly vast datasets. The data they use is often freely available online, and sometimes privately purchased. AI tools don’t directly use this raw data, just as a therapist would never base a client’s treatment on a Google search.
In the same way that clinical researchers translate data into insights, Large Language Models (LLMs) synthesize data into broader patterns. You can think of LLMs as the equivalent of the collective body of all clinical literature in our field — comprehensive and inconceivably detailed.
So — just as the DSM-5 distills countless research studies into a practical framework — AI platforms use LLMs to prepare information in a way that is ready-to-use. When we use an AI platform for clinical documentation, we're accessing a synthesis of patterns, much like consulting diagnostic criteria as a practical synthesis of a particular phenomenon.
Tools like Upheal harness AI so therapists can offload this kind of information processing, and save their brain power for the more expert and human elements of care.
How do AI tools learn?
A misconception about LLMs is that they are continuously listening in, leeching information from users to stockpile an ever-greater store of data for their own interests. In reality, LLMs are too big to learn this way. They are like textbooks: updated periodically, when there is enough new data to warrant a new edition.
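To make the textbook point concrete: when a trained model answers a query, its parameters are read, never written. Here’s a minimal sketch, assuming the PyTorch library and using a tiny stand-in network rather than a real LLM:

```python
import torch
import torch.nn as nn

# A stand-in for a trained model; real LLMs are vastly larger,
# but the inference-time behavior is the same.
model = nn.Linear(4, 2)
model.eval()                      # switch to inference mode

with torch.no_grad():             # no gradients: weights cannot change
    user_input = torch.randn(1, 4)
    response = model(user_input)  # answering a query reads the weights...

# ...but never writes them. "Learning" only happens in a separate,
# periodic training run, like publishing a new edition of a textbook.
print(response)
```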
An LLM, however, is different from an AI-powered tool (like an ambient scribe). Any such tool you use might well “listen in” to learn from your data, even though that data may not inform the LLM it’s based on.
Commonly, AI tools track how folks use the product to inform improvements to the platform. This isn’t new, nor is it unique to AI: software companies have long tracked usage data to learn how their products are used.
This isn't that different from how therapists continuously learn from their clients. (In this field, we call it experience!) While therapists can’t copy-and-paste the same treatment plan across clients, they do learn from every session and continuously improve the quality of the care they provide.
For therapists, professional development can be accelerated with resources like CEs or peer consultation groups.
For software companies, product improvement can be accelerated when AI works with a greater volume of data. This kind of quickened learning definitely has benefits, and it just as definitely has implications worth examining.
In the absence of AI-specific regulation, it is possible for some unsavory tools to use your clients’ sessions to influence business decisions. Some could even share or sell that information. (Though it would first need to be anonymized or de-identified — there’s no getting around HIPAA!)
Even with HIPAA-compliant de-identification, an ethical AI tool is one that is transparent about whether and how it uses client data, and that requires explicit opt-in consent — similar to how clinicians obtain informed consent before using case material in clinical research.
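What does de-identification actually look like? As a toy illustration only (real HIPAA de-identification, whether via the Safe Harbor method’s 18 identifier categories or expert determination, is far more rigorous than two patterns), here is how a name and a date might be stripped from a note before it goes anywhere:

```python
import re

# Toy de-identification sketch. The note text is invented, and these
# two regex patterns are nowhere near sufficient for real compliance.
note = "Met with Jane Doe on 03/14/2025. Client reports reduced anxiety."

redacted = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", note)  # dates
redacted = re.sub(r"Jane Doe", "[CLIENT]", redacted)          # names

print(redacted)
# Met with [CLIENT] on [DATE]. Client reports reduced anxiety.
```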
To know how a company is using your session data, you’ll need to look at the most boring page on their website: the privacy policy.
How to evaluate AI tools for therapists
As therapists, we're trained to look beneath the surface — to understand not just what our clients are saying, but the deeper patterns and meanings behind their words. We need to bring this same careful attention to evaluating AI tools that will interact with our practice and our clients' information.
What to look for
Reading the full terms and conditions of any clinical tool you use is always a good idea. These are some of the key considerations to look for:
- How data is collected
- How data is used
- How data is retained
- How data is disclosed to third parties
The more explicit the terms are, the better. For example, you might want an AI note-taker that notices how often you change the format of its SOAP notes, so that it can create more customizable templates. But you probably don’t want it selling sensitive client information from your sessions, or “aggregated data,” to a third party.
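To see the difference, here is a hypothetical sketch of the kind of usage event an ethical note-taker might record. The field names are illustrative (not any vendor’s actual schema); the point is that it captures how the product is used, and no session content at all:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Sketch of product analytics: the event describes *how* the tool is
# used (a note template was changed), never *what* was said in session.
@dataclass
class UsageEvent:
    event_name: str   # e.g. "note_template_changed"
    user_id: str      # the clinician's account, not the client
    timestamp: str    # when it happened

event = UsageEvent(
    event_name="note_template_changed",
    user_id="clinician_123",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(asdict(event))  # no transcript text, no client identifiers
```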
Comparing AI note-takers
AI note-takers for therapists have exploded onto the scene, and not all of them are created equal.
A free AI note-taker, for example, can probably be offered without payment because it performs simple tasks using freely available LLMs. These platforms are typically not purpose-built, and may not have been designed with the specifics of clinical application in mind. (They might not even be HIPAA-compliant!)
Platforms that cost more (like Upheal), by contrast, were likely developed with significant clinical guidance, using dedicated resources for a more tailored and considered approach than just another ChatGPT wrapper.
The tools you really need to worry about are the ones that are both cheap and intended for clinicians. As the adage goes: “if you’re not their customer, you are their product.” Tools built for clinical applications require significant resources to be fit for purpose. If they are low-cost, or totally free, they might be keeping the lights on by training their tool on your session data, or by selling session data to other companies that are trying to build chatbot therapists.
If you’re just beginning to explore AI tools, start with products that have tiered pricing and strong privacy terms. You can experiment with an inexpensive plan, with the confidence that they’re treating data ethically and responsibly. Upheal’s free AI note-taker doesn’t come with all the features of its paid plans, but it does come with stringent privacy practices and robust clinical leadership guiding the product.
Your clients. Your data.
The integration of AI into therapeutic practice represents both an opportunity and a responsibility. By understanding how AI works, asking the right questions, and choosing tools that prioritize privacy and security, we can harness these technologies while maintaining the sacred trust of our therapeutic relationships.
At Upheal, we understand deeply that every session transcript, every progress note, and every client interaction represents not just data, but a piece of someone's healing journey. And that journey belongs in the hands of you, an experienced, human clinician.
The future of therapy will undoubtedly include AI tools — but that future must be built on a foundation of ethical practice and unwavering respect for client confidentiality. By choosing tools that align with these values, we can embrace innovation while staying true to our fundamental commitment: supporting healing journeys with integrity, empathy, and care.