Therapists are increasingly discussing AI for mental health with their prospective and existing clients, often at the urging of the client.
In today’s column, I examine the growing trend of therapists informing their prospective and existing clients about the use of AI in mental health. There are two major elements involved: (1) disclosing to a client the use of AI by the therapist, and (2) advising the client about the client’s own use of AI. Those are significant topics worthy of a therapist-client discussion and deserve rapt attention.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Psychology
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that entails the field of psychology, such as providing AI-driven mental health advice and performing AI-based therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
You might find of keen interest that AI and psychology have had a longstanding relationship with each other. There is a duality at play. AI can be applied to the field of psychology, as exemplified by the advent of AI-powered mental health apps. Meanwhile, psychology can be applied to AI, such as aiding us in exploring better ways to devise AI that more closely approaches the human mind and how we think. See my in-depth analysis of this duality encompassing AI-psychology and psychology-AI at the link here.
Two Paths Of AI And Therapy
Some believe that therapists in the modern era of nearly ubiquitous AI ought to be helping their prospective and existing clients understand how AI fits into the therapy milieu. This involves both any use of AI that the therapist might be undertaking and any use of AI that a client might be performing on their own to seek AI-driven mental health guidance.
Let’s unpack those two major avenues, doing so by first focusing on AI usage by the therapist as part of their practice.
You might not know that there is an emerging trend toward therapists disclosing any substantive use of AI within their practice, especially pertaining to the actual delivery of mental health services.
Notice that I said substantive use of AI that is within the scope of the practice. If the AI usage is minimal and inconsequential, probably no overt disclosure is needed, though make sure to confer with legal counsel to ascertain that aspect. If the AI usage by a therapist is totally outside the scope of their practice, such as using AI to aid their home cooking or to learn how to play the piano, that doesn’t seem to fall within the therapist-client boundaries.
In short, the AI usage being referred to is within the scope of the practice and meets some threshold of being considered substantive.
AI Use Within The Therapy Sphere
We can now dive into the matter at hand.
Therapists are gradually adopting AI capabilities across both the administrative side of their operations and the therapeutic side. A therapist might decide to use AI in the administrative tasks of their practice, including adopting specialized AI for billing, scheduling, and the like. Clients are unlikely to care whether AI is adopted for administrative chores. The main concern would be whether the AI could leak a client’s private information or fail to conform to customary HIPAA stipulations.
On the therapeutic side of the practice, clients are likely to be keenly interested in knowing whether the therapist is leaning into AI.
Why so?
A notable basis for coming to a human therapist is to get human-to-human consultation and to tap into the human-based expertise of the therapist. If the therapist uses AI as a therapy crutch, perhaps they are shortchanging the therapy by letting AI take the reins. The therapist then merely delivers whatever the AI has to say, serving as a robot-like extension of the AI.
Clients would rightfully be distraught at that disconcerting possibility.
That’s not to say that therapists should stay away from AI on the therapy side. Not at all. It is important to set expectations and dispel any misconceptions that prospective or existing clients might have about AI usage by therapists.
The gist is that if the AI is used by a therapist in a sensible, service-boosting manner, that’s a crucial talking point to bring up with clients. A client might then perceive the AI usage in a much more favorable light. You see, the crux is that the therapist is seeking to provide the best feasible therapy and is dipping into AI to help ensure that goal is achieved.
As I’ve repeatedly stated, a therapist ably armed with AI can indubitably outdo therapists who shun or hide from contemporary AI. My prediction is that we will shift step-by-step away from the classic therapist-client dyad and move into a world of the therapist-AI-client triad; see my in-depth discussion at the link here.
Strict Obligation Or Merely Voluntary
A rising assertion is that therapists should voluntarily disclose how AI is being used in their practice, rather than waiting for a legal obligation that requires them to do so.
This notion of doing so voluntarily has its ups and downs.
On the downside, a disclosure about AI usage in the therapist’s practice might unduly disturb potential clients and existing clients. Confusion can arise. Why is my therapist, or my about-to-be therapist, telling me about AI? Should I be concerned? In a sense, the topic could be the proverbial ringing of alarm bells when no such alarm was envisioned or anticipated.
Maybe the therapist should remain mum. Only if the therapist is overtly asked about the AI topic should they then engage in a dialogue about AI.
In contrast, an upside viewpoint is that by bringing up AI, the therapist is showcasing how state-of-the-art they are. Assuming they are genuinely and sensibly using AI, this is a big plus if suitably explained. Prospective clients might find that avid AI usage is an attractive upside of selecting the therapist. Existing clients could perceive that the therapist is not one of those stuck-in-the-mud types who are still doing couch-based therapy using pen-and-paper the old-fashioned way.
It is hard to know how someone will react to a disclosure about AI usage by a therapist. The odds are that such a disclosure will resonate more with the rising generation of digital natives than with those who are barely able to use their smartphones or still carry a conventional cellphone. Interest in technology is a likely indicator of receptiveness toward learning about how a therapist is actively engaged in AI usage.
Physicians In A Similar Boat
One way to figure out how best to communicate about AI usage is to explore how the same considerations play out for physicians who use AI in their medical practices.
In a recent research paper entitled “Ethical Obligations to Inform Patients About Use of AI Tools” by Michelle M. Mello, Danton Char, and Sonnet H. Xu, JAMA AI In Medicine, July 21, 2025, these key points were made (excerpts):
- “Setting a patient disclosure policy should be part of every decision by a health care organization to deploy an AI tool.”
- “We propose that the policy — notify patients that the tool is being used, seek their consent to use it, or neither — should be driven by two determinations. First, how serious is the risk that using the tool could cause physical harm to patients? Second, to what extent do patients have a meaningful opportunity to exercise agency in response to a disclosure?”
- “Using plain and concrete language, patient notifications should cover (1) the fact that an AI tool is being used; (2) what functions it performs; (3) a basic description of how it works, including the clinician’s role; (4) why the organization believes it improves care; (5) a basic description of how the organization monitors the tool’s performance, including differential performance across patient subgroups; and (6) where applicable, what choices the patient has about having the tool used in their care.”
- “An ethical framework is generally consonant with the legal doctrine of informed consent, which requires disclosure of material information.”
- “State laws may specify what patients must be told about certain uses of AI tools.”
There are valuable lessons to be learned from that somewhat analogous setting.
Shaping A Written Disclosure
By reviewing physician-focused disclosures of AI usage, it is feasible to sketch some of the elements that might be suitable for a therapist-oriented version.
For example, imagine this kind of language, which of course needs to be carefully devised and vetted by your legal counsel:
- “As part of my commitment to providing effective, ethical, and up-to-date mental health care, I want to be transparent about how I use artificial intelligence (AI), including large language models (LLMs) and generative AI, in my practice.”
Then, the content might include how the therapist uses AI, such as:
- “I may use AI tools from time to time in limited and carefully considered ways to support aspects of my practice. Examples of how I might use AI include, but are not limited to: (a) organizing my therapy notes or creating summaries of my notes, (b) generating psychoeducational materials or therapeutic handouts, (c) developing worksheets, journal prompts, or mindfulness scripts, and (d) performing administrative tasks such as scheduling or billing support. Any use of AI tools does not replace my clinical judgment, and I do not use AI to make diagnoses. I decide on treatment independently.”
That is just a straw man to illustrate how the physician-focused approaches can be potentially recast into the therapist realm.
AI Usage By The Client
Shifting gears, the second major aspect of an informational discussion about AI with clients has to do with whether a client or prospective client is using AI on their own for mental health purposes.
The deal is this. People are oftentimes making use of popular generative AI such as ChatGPT to directly obtain AI-driven mental health guidance on their own (there are 400 million weekly active users of ChatGPT, of which some proportion likely use the AI for therapy-like purposes). Most people are probably unfamiliar with the downsides and gotchas that can befall such usage.
People around the globe are routinely using generative AI to advise them about their mental health conditions. It’s one of those proverbial good news and bad news situations. We are in a murky worldwide experiment with unknown results. If the AI is doing good work and giving out proper advice, great, the world will be better off. On the other hand, if AI is giving out lousy advice, the mental health status of the world could be worsened. For more on the population-level impacts, see my analysis at the link here.
An informed therapist can set the story straight about what AI for mental health can both positively and negatively do when used in an unsupervised setting.
Therapists In The Blind
Some therapists know almost nothing about the use of AI for mental health: they neither know how therapists might use AI in their practices, nor grasp how everyday people are using AI to garner mental health advice. Such therapists are completely oblivious to the rapidly expanding use of AI in mental health. It’s the proverbial head-in-the-sand stance.
I bring up that segment of therapists to point out that they presumably see no need to make any kind of disclosure about AI per se. They aren’t using AI. They don’t care that their clients or prospective clients are perhaps using AI. AI is utterly out of sight and out of mind in that regard.
The trouble they face is that even if they aren’t embracing AI, the chances are that their clients are. In that sense, clients are essentially going behind the therapist’s back and using AI. The AI might tell them that the advice of the therapist is hogwash. The AI might tell them to do Z, while the therapist is telling them to do A. And so on.
A tug of war ensues, encompassing a battle between the unseen AI third-party advisor to a client and what the therapist is trying to therapeutically undertake with the client.
If a family member or friend were coaching the client behind the scenes, a therapist would nearly always choose to adroitly confront the nature of that intervention. Yet, if it is AI, they stand aloof and often seem unwilling to address the elephant in the therapy room.
AI In Mental Health Is Here To Stay
The bottom line is that therapists are going to be dragged into the AI arena whether they like it or not. They might opt to use AI in their practice, or they might not. If they are doing so, it might be advantageous to provide suitable disclosures, whether mandated or voluntary. Meanwhile, the chances that their prospective or existing clients are opting to use AI for mental health guidance are already substantial and expanding steadily every day.
Setting aside the disclosure considerations, what does it say about a therapist if they are asked about AI in mental health and have no clue how to answer such a question? It’s not an oddball question. It is not a techie, nerd-like question. It is a natural question that goes along with the widespread use of generative AI and LLMs.
Not being sufficiently informed to address such questions is, shall we say, a bad look.
Recall the immortal words of John Locke: “He that judges without informing himself to the utmost that he is capable, cannot acquit himself of judging amiss.” Wise words that are still applicable in the budding era of AI.