Chatty, leaky, and hardly human

By Darius Tahir / Illustration by Oona Zenda for KFF


Vince Lahey of Carefree, Arizona, embraces chatbots. From Big Tech products to “shady” ones, they offer “someone that I could share more secrets with than my therapist.”

He especially likes the apps for feedback and support, even though sometimes they berate him or lead him to fight with his ex-wife. “I feel more inclined to share more,” Lahey said. “I don’t care about their perception of me.”

There are a lot of people like Lahey.

Demand for mental health care has grown. One study analyzing survey data found that self-reported poor mental health days have risen 25% since the 1990s. According to the Centers for Disease Control and Prevention, suicide rates in 2022 matched a 2018 high that hadn’t been seen in nearly 80 years.

There are many patients who find a nonhuman therapist, powered by artificial intelligence, highly appealing — more appealing than a human with a reclining couch and stern manner. Social media is replete with videos begging for a therapist who’s “not on the clock,” who’s less judgmental, or who’s just less expensive.


Most people who need care don’t get it, said Tom Insel, former head of the National Institute of Mental Health, citing his former agency’s research. Of those who do, 40% receive “minimally acceptable care.”

“There’s a massive need for high-quality therapy,” he said. “We’re in a world in which the status quo is really crappy, to use a scientific term.”

Insel said engineers from OpenAI told him last fall that about 5% to 10% of the company’s then roughly 800 million users rely on ChatGPT for mental health support.

Polling suggests these AI chatbots may be even more popular among young adults. A KFF poll found about 3 in 10 respondents ages 18 to 29 turned to AI chatbots for mental or emotional health advice in the past year. Uninsured adults were about twice as likely as insured adults to report using AI tools. And nearly 60% of adult respondents who used a chatbot for mental health didn’t follow up with a flesh-and-blood professional.

The App Will Put You on the Couch

A burgeoning industry of apps offers AI therapists with human-like, often unrealistically attractive avatars serving as a sounding board for those experiencing anxiety, depression, and other conditions.

KFF Health News identified some 45 AI therapy apps in Apple’s App Store in March. While many charge steep prices for their services — one listed an annual plan for $690 — they’re still generally cheaper than talk therapy, which can cost hundreds of dollars an hour without insurance coverage.

On the App Store, “therapy” is often used as a marketing term, with fine print noting the apps cannot diagnose or treat disease. One app, branded as OhSofia! AI Therapy Chat, had downloads in the six figures, said OhSofia! founder Anton Ilin in December.

“People are looking for therapy,” Ilin said. On one hand, the product’s name promises “therapy chat”; on the other, it warns in its privacy policy that it “does not provide medical advice, diagnosis, treatment, or crisis intervention and is not a substitute for professional healthcare services.” Executives don’t think that’s confusing, since there are disclaimers in the app.

The apps promise huge results without backup. One promises its users “immediate help during panic attacks.” Another claims it was “proven effective by researchers” and that it offers 2.3 times faster relief for anxiety and stress. (It doesn’t say what it’s faster than.)

There are few legislative or regulatory guardrails around how developers refer to their products — or even whether the products are safe or effective, said Vaile Wright, senior director of the office of health care innovation at the American Psychological Association. Even federal patient privacy protections don’t apply, she said.

“Therapy is not a legally protected term,” Wright said. “So, basically, anybody can say that they give therapy.”

Many of the apps “overrepresent themselves,” said John Torous, a psychiatrist and clinical informaticist at Beth Israel Deaconess Medical Center. “Deceiving people that they have received treatment when they really have not has many negative consequences,” including delaying actual care, he said.

States such as Nevada, Illinois, and California are attempting to sort out the regulatory disarray, enacting laws forbidding apps from describing their chatbots as AI therapists.

“It’s a profession. People go to school. They get licensed to do it,” said Jovan Jackson, a Nevada legislator, who co-authored an enacted bill banning apps from referring to themselves as mental health professionals.

Despite the hype, outside researchers and company representatives themselves have told the FDA and Congress that there’s little evidence supporting the efficacy of these products. What studies there are give contradictory answers — and some research suggests companion-focused chatbots are “consistently poor” at managing crises.

“When it comes to chatbots, we don’t have any good evidence it works,” said Charlotte Blease, a professor at Sweden’s Uppsala University who specializes in trial design for digital health products.

The lack of “good quality” clinical trials stems from the FDA’s failure to provide recommendations about how to test the products, she said. “FDA is offering no rigorous advice on what the standards should be.”

Department of Health and Human Services spokesperson Emily Hilliard said, in response, that “patient safety is the FDA’s highest priority” and that AI-based products are subject to agency regulations requiring the demonstration of “reasonable assurance of safety and effectiveness before they can be marketed in the U.S.”

The Silver-Tongued Apps

Preston Roche, a psychiatry resident who’s active on social media, gets lots of questions about whether AI is a good therapist. After trying ChatGPT himself, he said he was “impressed” initially that it was able to use cognitive behavioral therapy techniques to help him put negative thoughts “on trial.”

But Roche said that after seeing posts on social media discussing people developing psychosis or being encouraged to make harmful decisions, he became disillusioned. The bots, he concluded, are sycophantic.

“When I look globally at the responsibilities of a therapist, it just completely fell on its face,” he said.

This sycophancy — the tendency of apps based on large language models to empathize, flatter, or delude their human conversation partner — is inherent to the app design, experts in digital health say.

“The models were developed to answer a question or prompt that you ask and to give you what you’re looking for,” said Insel, the former NIMH director, “and they’re really good at basically affirming what you feel and providing psychological support, like a good friend.”

That’s not what a good therapist does, though. “The point of psychotherapy is mostly to make you address the things that you have been avoiding,” he said.

While polling suggests many users are satisfied with what they’re getting out of ChatGPT and other apps, there have been high-profile reports about the service providing advice or encouragement to self-harm.


And at least a dozen lawsuits alleging wrongful death or serious harm have been filed against OpenAI after ChatGPT users died by suicide or were hospitalized. In most of those cases, the plaintiffs allege they began using the apps for one purpose — like schoolwork — before confiding in them. These cases are being consolidated into a class-action lawsuit.

Google and the startup Character.ai — which has been funded by Google and has created “avatars” that adopt specific personas, like athletes, celebrities, study buddies, or therapists — are settling other wrongful-death lawsuits, according to media reports.

OpenAI’s CEO, Sam Altman, has said up to 1,500 people a week may talk about suicide on ChatGPT.

“We have seen a problem where people that are in fragile psychiatric situations using a model like 4o can get into a worse one,” Altman said in a public question-and-answer session reported by The Wall Street Journal, referring to a particular model of ChatGPT introduced in 2024. “I don’t think this is the last time we’ll face challenges like this with a model.”

Sam Altman testifies before the Senate Committee on Commerce, Science, and Transportation on Capitol Hill in May 2025. (AP Photo/Jose Luis Magana, File)

An OpenAI spokesperson did not respond to requests for comment.

The company has said it works with mental health experts on safeguards, such as referring users to 988, the national suicide hotline. However, the lawsuits against OpenAI argue existing safeguards aren’t good enough, and some research suggests the problems are worsening over time. OpenAI has published its own data suggesting the opposite.

OpenAI is defending itself in court, offering, early in one case, a variety of defenses ranging from denying that its product caused self-harm to alleging that the user misused the product by inducing it to discuss suicide. It has also said it’s working to improve its safety features.

Smaller apps also rely on OpenAI or other AI models to power their products, executives told KFF Health News. In interviews, startup founders and other experts said they worry that if a company simply imports those models into its own service, it might duplicate whatever safety flaws exist in the original product.

Data Risks

KFF Health News’ review of the App Store found listed age protections are minimal: Fifteen of the nearly four dozen apps say they can be downloaded by users as young as 4; an additional 11 say they can be downloaded by those 12 and up.

Privacy standards are opaque. On the App Store, several apps are described as neither tracking personally identifiable data nor sharing it with advertisers — but on their company websites, privacy policies contained contrary descriptions, discussing the use of such data and their disclosure of information to advertisers, like AdMob.

In response to a request for comment, Apple spokesperson Adam Dema sent links to the company’s App Store policies, which bar apps from using health data for advertising and require them to display information about how they use data in general. Dema did not respond to a request for further comment about how Apple enforces these policies.


Researchers and policy advocates said that sharing psychiatric data with social media firms means patients could be profiled. They could be targeted by dodgy treatment firms or charged different prices for goods based on their health.

KFF Health News contacted several app makers about these discrepancies; two that responded said their privacy policies had been put together in error and pledged to modify them to reflect their stances against advertising. (A third, the team at OhSofia!, said simply that they don’t do advertising, though their app’s privacy policy notes users “may opt out of marketing communications.”)

One executive told KFF Health News there’s business pressure to maintain access to the data.

“My general feeling is a subscription model is much, much better than any sort of advertising,” said Tim Rubin, the founder of Wellness AI, adding that he’d modify the description in his app’s privacy policy.

One investor advised him not to swear off advertising, he said. “They’re like, essentially, that’s the most valuable thing about having an app like this, that data.”

“I think we’re still at the beginning of what’s going to be a revolution in how people seek psychological support and, even in some cases, therapy,” Insel said. “And my concern is that there’s just no framework for any of this.”




