AI: right or wrong? 4 ethical considerations of AI in therapy
Whether we’re ready for it or not, AI is making its impact across industries. But what about the therapy space? Can we combine millions of years of human evolution with this latest human advancement – and more to the point, should we?
AI in therapy: a tech-tonic shift or ethical rift?
As AI continues to take off in the therapy space, so do the ethical questions related to its use. AI presents promising solutions and tackles some of the significant challenges in the industry. These include:
- Improving diagnosis procedures
- Building personalized treatments
- Simplifying note-taking
- Increasing accessibility to mental health care
- Reducing therapist burnout
Platforms like Upheal, for example, streamline the documentation process by automatically drafting session notes, freeing up valuable time for therapists to focus on their patients. These functions have helped reduce the administrative burden on clinicians, improve patient engagement, and make mental health care more accessible (Sadeh-Sharvit & Hollon, 2020).
However, with this technological revolution comes the responsibility to navigate the ethical dilemmas it introduces. As AI continues to weave its way into therapy, it’s critical to pause and reflect on the ethical boundaries that come with these advancements.
In this blog, we explore key ethical considerations that mental health professionals should keep in mind as they integrate AI into their practice, ensuring that innovation goes hand-in-hand with responsibility.
1. Client confidentiality: are private conversations truly private?
When integrating AI into therapy, one of the foremost ethical concerns is client confidentiality — ensuring that sensitive data remains secure and private. There are real concerns about potential data breaches and the possibility of digital mental health companies sharing or even selling patient data.
To mitigate these risks, it's crucial to understand the measures taken by companies to protect this information.
There may also still be stigma surrounding the use of AI in therapy, and failing to be upfront with clients about its use can breed distrust, which may negatively impact therapeutic outcomes.
What we can do about this
Transparency is essential. Mental health professionals must first educate themselves, then inform clients about the scope of AI’s involvement in the therapeutic process to obtain informed consent and foster trust.
At the same time, companies creating AI technologies should develop them hand in hand with their audience and respond to emerging concerns. For example, Upheal strictly follows HIPAA security and privacy regulations, de-identifying session data and deleting audio recordings immediately after sessions. Additionally, the use of patient data for AI model training is entirely optional, allowing clients full autonomy over how — or if — their data is used.
By being transparent and offering clients the choice to opt in or out, professionals can respect their autonomy and uphold ethical standards while leveraging the benefits of AI.
As just one example of putting this into practice, Upheal also gives clinicians the flexibility to pause AI recording if a client feels uncomfortable discussing sensitive topics.
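If you’re curious what “de-identifying session data” can involve under the hood, here is a minimal, illustrative sketch in Python. To be clear, this is not Upheal’s actual pipeline: the patterns and the redact_transcript function are hypothetical, and real HIPAA-grade de-identification covers all 18 HIPAA identifiers and typically combines clinical NLP models with human review.

```python
import re

# Hypothetical, simplified patterns; real de-identification is far more thorough.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_transcript(text: str) -> str:
    """Replace obvious identifiers in a transcript with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_transcript("Call me at 415-555-0199 or jane.doe@example.com on 3/14/2024."))
# -> "Call me at [PHONE] or [EMAIL] on [DATE]."
```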
2. AI can miss the mark on diversity: algorithmic bias
Another important consideration is algorithmic bias: AI models are trained on data that contain biases, and they can therefore reproduce those biases in their output.
For instance, one study found that a model predicting depressive symptoms performed worse for people of color because the training datasets were not demographically representative to begin with. It’s worth remembering that this lack of cultural sensitivity is not an AI-specific problem but a reflection of broader biases in society, a nuance to keep in mind when evaluating these tools.
The impact of these biases can be profound, leading to misdiagnosis, improper treatment recommendations, or even a lack of access to appropriate care for minority populations.
What we can do about this
When using tools in the therapeutic space, it’s important to choose products that acknowledge this limitation and also work towards improving it.
Companies that train on large, diverse datasets, employ methods to increase representativeness, conduct regular bias audits, and fine-tune their models accordingly can improve the overall quality and fairness of the AI.
Initiatives like "fairness-aware" AI are also gaining popularity; these models are designed and adjusted specifically to detect and correct for discrimination against particular subgroups of the population.
With continued effort to improve representativeness in this field, clinicians can ensure that their use of AI aligns with the values of equity and inclusion, while still leveraging the technological advantages AI brings to therapy.
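To make the idea of a bias audit a little more concrete, here is a minimal sketch, assuming you have a model’s predictions, the true labels, and a demographic group for each client (everything here, including the recall_by_group function and the toy data, is illustrative rather than drawn from any specific product or study). It simply compares how often the model catches true positives in each group; real fairness-aware workflows go further, reweighting data or retraining when gaps appear.

```python
from collections import defaultdict

def recall_by_group(y_true, y_pred, groups):
    """Share of true positive cases the model catches, per demographic group."""
    hits, positives = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            positives[group] += 1
            if pred == 1:
                hits[group] += 1
    return {g: hits[g] / positives[g] for g in positives if positives[g]}

# Toy data: 1 = screened positive for depressive symptoms
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]

print(recall_by_group(y_true, y_pred, groups))
# e.g. {'A': 1.0, 'B': 0.33} -> a gap like this would flag the model for review
```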
3. AI replacing the human connection
While AI offers a plethora of opportunities to assist therapy, there are things it cannot (or should not) replace, chief among them the human connection that builds a strong therapeutic relationship, one of the strongest predictors of effective therapy. Human emotions carry nuances and complexities that a therapist can attune to on a deeper level than a machine ever could.
Additionally, in diagnosis, there’s a risk of misdiagnosis if clinicians rely solely on AI-generated assessments without integrating their own professional judgment. AI, while highly accurate in some aspects, is prone to errors — it can generate inaccurate information or conclusions because it doesn’t reason through problems like a human does. In these cases, the clinician’s expertise is irreplaceable.
What we can do about this
When weaving AI into therapy, it is important not to lose sight of what a clinician brings to the table, and not to over-rely on these tools.
First, clinicians should prioritize building a strong therapeutic alliance with clients so they can fully attune to their needs. AI can then be used to supplement, not replace, this connection. For example, AI note-taking can allow mental health providers to be more present with their clients, engaging in active listening without worrying as much about documentation.
Moreover, when dealing with complex diagnostic cases, clinicians should follow their expertise and judgment, using AI as an additional source of information rather than as a decision-making tool.
4. Potential for misuse
With the buzz around AI in many mental health startups today, it's easy to get swept up in the hype. In the absence of clear industry guidelines, some companies roll out therapy chatbots long before they're ready to provide meaningful care. One of the key concerns here is quality control: with poorly trained AI models, there's a real danger of users receiving subpar support, which can be harmful in a therapeutic context.
Users also need proper informed consent. For example, a company called Koko came under scrutiny for providing support to thousands of people without informing them that the responses were generated by GPT-3, a generative AI tool (Grohol, 2023).
There is also the risk of drifting away from genuine human connection and over-relying on a robot to share all our feelings and emotions. Even though it is convenient to have that kind of support 24/7, it could spill over into the real relationships in people's lives. Yochanan Bigman, a psychologist studying the ripple effects of human-machine interaction, noted that it can lead us to view “humans as tools too” (Abrams, 2023). What can such interactions do to long-term human relationships? These are questions psychologists are still trying to answer, and the field is still working out the right balance between technology and traditional therapy.
What we can do about this
As you look for chatbots to deploy in your practice, keep in mind that not every AI tool is built with the same level of ethical consideration, thorough training, or mental health expertise (Plante, 2023). It is helpful to keep track of the FDA guidelines regarding AI therapy chatbots and choose companies that prioritize user safety; remember, though, that even government guidance is a work in progress, reflecting how nascent this space is.
Another safeguard is to read the research that companies claim to have conducted, check that it is scientifically sound, and take note of its limitations. Because the space is growing, the literature is constantly changing, so it helps to look for companies that genuinely engage in ethical research and to be wary of those making inflated claims without scientific support (a good first step is to read up on the clinical leadership on their team – you may be surprised how many companies don't have any).
TLDR: using AI ethically
In conclusion, AI presents a blend of promising opportunities, but with those opportunities come responsibilities. We cannot ignore the ethical pitfalls this technology presents.
Transparency and confidentiality must always be prioritized — clients deserve to know how their data is being handled and they should have a say in how it's used as well.
When choosing an AI tool, pick one that upholds high ethical standards, complies with regulations like HIPAA, and is upfront with clinicians and clients about its potential biases and limitations. With careful oversight, you can make AI a powerful ally, driving efficiency while maintaining the trust and integrity that therapy requires.