“In the case of AI in the mental health industry (psychiatry and psychology), this is but another lucrative but ultimately damaging tool that spells only millions more people being arbitrarily labeled, drugged, electroshocked and worse.” – Jan Eastgate, President CCHR International
Mental health screening and surveillance through apps, social media and phones are being used to monitor behavior and then, through computer programming, predict mental disorder and the need for psychiatric or psychological intervention.
By Jan Eastgate
President CCHR International
The Mental Health Industry Watchdog
May 18, 2020
Psychiatry’s diagnostic methods have long been criticized as being about as unscientific as reading tea leaves or a crystal ball. There is not a single physical or medical test to confirm any mental disorder listed in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders. Psychiatrists literally vote diagnoses into existence based largely upon arbitrarily determined behaviors. The diagnostic system takes huge license with people’s lives and their vulnerabilities.
Statistics are bandied around about the financial cost of “untreated” mental illness, but no one questions the veracity of such statistics, or asks why, despite billions of dollars spent on research and treatment of mental health issues, things are only getting worse. With this comes a multi-billion-dollar psychiatric drug market, worth over $35 billion a year in the U.S. alone—one potentially sentencing consumers to a lifetime of side effects and harm, not long-term help.
Now this farce—at the expense of people’s lives—is forming the foundation of “artificial intelligence in mental health” screening and diagnosing, taking Aldous Huxley’s Brave New World, about a futuristic conditioned society, to a whole new level. Artificial Intelligence (AI) is now marketed as a means to “prevent” or quickly identify the “growing” numbers of people, including children and youths, said to be mentally ill. And when they say “prevent,” that can include labeling someone “at risk of becoming mentally ill” and drugging them to prevent the onset of the disorder.
A team of researchers from the University of Colorado Boulder is working to apply machine learning AI in psychiatry, with a speech-based mobile app that can categorize a patient’s mental health status. Peter Foltz, a research professor at the Institute of Cognitive Science and co-author of the paper, believes they “can create tools that will allow them to better monitor their patients.” [Emphasis added]
“Language is a critical pathway to detecting patient mental states,” says Foltz. “Using mobile devices and AI, we are able to track patients daily and monitor these subtle changes.”
To further this Brave New World scenario, if the app detects a worrisome change, it could notify the patient’s doctor to check in. As Foltz stated, patients “often need to be monitored with frequent clinical interviews by trained professionals,” but there are not enough clinicians, and the app can assist.
“It’s a recipe for disaster,” said Ann Cavoukian, who spent three terms as Ontario’s privacy commissioner and is now the distinguished expert-in-residence leading the Privacy by Design Centre of Excellence at Ryerson University in Toronto. “I say that as a psychologist,” she explained in an interview. “The feeling of constantly being watched or monitored is the last thing you want.”
Foltz and his colleagues designed the mobile app that takes patients through a series of repeatable verbal exercises, like telling a story and answering questions about their emotional state. An AI system then assesses those soundbites for signs of mental distress, both by analyzing how they compare to the individual’s previous responses, and by measuring the clips against responses from a larger patient population.
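The comparison the app reportedly performs—scoring a new speech sample against the speaker’s own previous responses and against a larger patient population—can be illustrated with a simple statistical sketch. The feature names, values, and two-sigma threshold below are purely illustrative assumptions, not the Colorado team’s actual model:

```python
# Illustrative sketch: flag a speech sample whose measured features drift
# from both a speaker's own baseline and a pooled reference population.
# Feature names and the 2-sigma threshold are assumptions for illustration.
from statistics import mean, stdev

def z_score(value, samples):
    """Standard score of `value` against a list of reference samples."""
    mu, sigma = mean(samples), stdev(samples)
    return 0.0 if sigma == 0 else (value - mu) / sigma

def assess_sample(features, personal_history, population):
    """Return the features whose deviation exceeds 2 sigma on either reference."""
    flags = {}
    for name, value in features.items():
        z_self = z_score(value, personal_history[name])   # vs. own baseline
        z_pop = z_score(value, population[name])          # vs. population
        if abs(z_self) > 2 or abs(z_pop) > 2:
            flags[name] = {"vs_self": round(z_self, 2),
                           "vs_population": round(z_pop, 2)}
    return flags

# Hypothetical numbers: a session with unusually slow, pause-heavy speech.
sample = {"speech_rate": 1.2, "pause_ratio": 0.55}
history = {"speech_rate": [3.0, 3.1, 2.9, 3.2],
           "pause_ratio": [0.20, 0.22, 0.18, 0.21]}
pop = {"speech_rate": [2.5, 3.0, 3.5, 2.8, 3.2],
       "pause_ratio": [0.15, 0.25, 0.30, 0.20, 0.22]}
print(assess_sample(sample, history, pop))
```

In this toy example both features are flagged, since they deviate sharply from the speaker’s own prior sessions; a production system would of course use far more features and a learned model rather than fixed thresholds.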
The team asked human clinicians to listen to and assess speech samples of 225 participants – half with severe psychiatric issues; half healthy volunteers – in rural Louisiana and Northern Norway. They then compared those results to those of the machine learning system. Foltz said: “We found that the computer’s AI models can be at least as accurate as clinicians.”
- Foltz offers the following examples: sentences that don’t follow a logical pattern can be a critical symptom of schizophrenia; shifts in tone or pace can hint at mania or depression; and memory loss can be a sign of both cognitive and mental health problems.
- Henry Nasrallah, a psychiatrist at the University of Cincinnati Medical Center who has also written about AI’s place in the field claims that talking in a monotone can be a sign of depression; fast speech can point to mania; and disjointed word choice can be connected to schizophrenia. When these traits are pronounced enough, a human clinician might pick up on them—but AI algorithms, Nasrallah says, could be trained to flag signals and patterns too subtle for humans to detect.
- In a study with Columbia University psychiatrists, a neuroscientist working with IBM said they were able to predict, with 100 percent accuracy, who among a population of at-risk adolescents would develop their first episode of psychosis within two years. IBM is building an automated speech analysis application that runs off a mobile device. By taking approximately one minute of speech input, the system uses speech-to-text, advanced analytics, machine learning, natural language processing technologies and computational biology to provide a real-time overview of the patient’s mental health.
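One of the signals cited above—“disjointed” speech that doesn’t follow a logical pattern—is often operationalized in the research literature as low semantic similarity between consecutive sentences. As a deliberately crude, assumption-laden sketch (real systems use rich semantic embeddings, not raw word overlap):

```python
# Illustrative sketch only: approximate sentence-to-sentence "coherence"
# as word-level Jaccard similarity between consecutive sentences.
def jaccard(a, b):
    """Word-level Jaccard similarity between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def mean_coherence(sentences):
    """Average similarity between each sentence and the one that follows it."""
    scores = [jaccard(x, y) for x, y in zip(sentences, sentences[1:])]
    return sum(scores) / len(scores) if scores else 0.0

coherent = ["the dog ran to the park", "the dog played in the park"]
disjointed = ["the dog ran to the park", "seven clocks taste purple today"]
print(mean_coherence(coherent) > mean_coherence(disjointed))  # True
```

The point of the sketch is only to show that “coherence” becomes a number a machine can threshold—which is precisely what makes automated flagging of speech both feasible and, as the article argues, open to abuse.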
A 2017 report from the National Council for Behavioral Health claimed that within five years, the U.S.’s already “overburdened” mental health system may be short by as many as 15,600 psychiatrists. Some mental health apps and programs already incorporate AI—like Woebot, an app-based mood tracker and chatbot that combines AI with principles from cognitive behavioral therapy—but psychiatrists interviewed by TIME in 2019 said it would probably be five to 10 years before algorithms are routinely used in clinics.
- Woebot is a Facebook-integrated bot [device/software that executes commands] whose AI is “versed in cognitive behavioral therapy.” Clinical research psychologist Dr. Alison Darcy developed the AI-powered chatbot with a team of psychologists and AI experts. With Woebot, the user and chatbot exchange messages, which reportedly allows the AI to “learn about the human” and tailor conversations accordingly. Because this technology is integrated with Facebook Messenger—a platform with 1.3 billion monthly users that is not bound by medical privacy rules—Darcy’s bot opens the door to mental health treatment for hundreds of millions of people who might not otherwise gain access. Darcy spent several years in the Psychiatry Department at the Stanford School of Medicine, and researchers at the Stanford University School of Medicine studied Woebot’s efficacy.
- Companion and mind.me are apps that can be installed on a phone or smartwatch. Left to work in the background, their AI collects data from the user 24 hours a day, without direct input. Companion was developed in conjunction with the U.S. Department of Veterans Affairs. It “listens” to the user’s speech, noting the number of words spoken and the energy and affect in the voice. The app also “watches” for behavioral indicators, including the time, rate, and duration of a person’s engagement with their device.
At IBM, scientists are already using transcripts and audio inputs from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech to help clinicians accurately predict and monitor psychosis, schizophrenia, mania and depression. IBM reports that it takes only about 300 words to help clinicians predict the probability of psychosis in a user. This is a frightening prospect and something previously exposed by CCHR—psychiatrists guessing at whether a person may become mentally disordered, which can lead to powerful antipsychotics being prescribed to prevent the onset of psychosis.
Dr. John Torous, chair of the American Psychiatric Association’s Committee on Mental Health Information Technology, concurred that mental health diagnostics have not been quantified well enough to program an algorithm.
A review was conducted of 28 studies of AI and mental health that used electronic health records, mood rating scales, brain imaging data, novel monitoring systems (e.g., smartphone, video), and social media platforms to predict, classify, or subgroup mental health illnesses including depression, schizophrenia or other psychiatric illnesses, and suicide ideation and attempts. The review suggested AI could help mental health practitioners re-define mental illnesses more objectively than the DSM-5 currently does, identify these illnesses at an earlier or prodromal [before the appearance of initial symptoms] stage when interventions may be more effective, and personalize treatments based on an individual’s unique characteristics. However, its authors cautioned against over-interpreting preliminary results, noting that more work is required to bridge the gap between AI in mental health research and clinical care.
Even an article in Psychiatry Online points out that “Discussions about artificial intelligence in health care have raised concerns about the dehumanization of healing relationships.” But the main argument was that “Computer-generated recommendations may carry a false authority that would override expert human judgment” and that AI “raises false hopes that machines will explain the mysteries of mental health and mental illness.” However, the real point is that the DSM-5 and psychiatry’s ability to diagnose any mental disorder is not based on science; it is based on arbitrary whims, and AI will only exacerbate this.
A “machine learning algorithm” has already been created. As one article states, “In the future, patients might go to the hospital with a broken arm and leave the facility with a cast and a note with a compulsory psychiatry session due to flagged suicide risk.”
According to preliminary studies, “changes in typing speed, voice tone, word choice and how often kids stay home could signal trouble. There might be as many as 1,000 smartphone-based ‘biomarkers’ for depression….”
There is now a rapid growth in the “Artificial Intelligence in Mental Health Care” market. Top key players are IBM Watson AI XPRIZE, Acadia Healthcare Co., Inc., Universal Health Services, Inc. (UHS), Magellan Health Inc., National Mentor Holdings Inc., Behavioral Health Services Inc., Behavioral Health Network Inc., North Range Behavioral Health, and Strategic Behavioral Health, LLC. Most companies in the Global Artificial Intelligence in Mental Health Care Market are currently adopting new technological trends in the market, according to a market report.
The Artificial Intelligence in Global Health report, published on April 1, 2019, was funded by USAID’s Center for Innovation and Impact and the Rockefeller Foundation, in close coordination with the Bill & Melinda Gates Foundation.
In a March 2020 article by Peter Simons, he reported that in 2018, California’s state government began rolling out a new “mental health” initiative. The tech companies of Silicon Valley were creating smartphone apps that could prompt users to seek mental health care, and the state wanted to provide support. Of the thousands of mental health apps in existence today, the state selected two. The first app is called 7 Cups, by a company called 7 Cups of Tea. They’re focused on connecting mental health service users, in text-based chat sessions, with what they call “listeners”—volunteers who are trained in “active listening.” But, according to The New York Times, the company has been plagued with issues, including listeners having inappropriate conversations with their clients and investigations of its alleged financial misconduct.
The other company partnering with California is Mindstrong Health. Their app, rebranded Mindstrong on March 17, 2020 (it was previously known as Health), is available on the Google Play Store and the Apple App Store. According to Simons, “It is the Mindstrong app that most raises the specter of Brave New World, Aldous Huxley’s classic dystopian novel of eugenics and psychiatric surveillance….As we use our smartphones and computers, our typing rhythms, swiping habits, typing errors and so forth are all data points that can be compiled into a mental health portrait of the user, one that the creators of Mindstrong claim can successfully diagnose ‘depression, anxiety, and other psychiatric disorders.’”
The app installs a special keyboard on your phone. That way, the app can record information about the way you type at all times (whether you have the app open or not). The Mindstrong website states that its app connects users with psychiatrists and “credible therapists.” Users can video chat with psychiatrists about their prescription drugs, although it’s unclear how often that happens, says Simons.
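The kind of passive signal a replacement keyboard can harvest reduces, at its simplest, to statistics over the time gaps between keystrokes. A minimal sketch of that idea follows; the timestamps and the notion of comparing a recent session’s median interval to a baseline are illustrative assumptions, not Mindstrong’s actual method:

```python
# Illustrative sketch: derive a crude "typing speed" signal from keystroke
# timestamps (in milliseconds) and measure how far a recent session has
# drifted from a baseline. All numbers are hypothetical.
from statistics import median

def inter_key_intervals(timestamps):
    """Milliseconds between consecutive keystrokes."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def typing_drift(baseline_ts, recent_ts):
    """Relative change in median inter-keystroke interval vs. baseline."""
    base = median(inter_key_intervals(baseline_ts))
    recent = median(inter_key_intervals(recent_ts))
    return (recent - base) / base

baseline = [0, 150, 310, 450, 600, 760]   # steady typing, ~150 ms apart
slowed = [0, 240, 500, 770, 1020, 1300]   # a noticeably slower session
print(f"median interval drifted {typing_drift(baseline, slowed):+.0%}")
```

Because a system-level keyboard sees every keystroke in every app, this sort of signal can be computed continuously whether or not the user ever opens the app—which is exactly the always-on surveillance the article objects to.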
As reported in Stat News in 2018, “Does the app live up to its promise? There’s no way to tell. Almost no one outside the company has any idea whether it works.”
Mindstrong Health began with Paul Dagum, a Stanford doctor and researcher who also holds advanced degrees in theoretical physics and theoretical computer science. He is also the owner of numerous patents for artificial intelligence technology—algorithms designed to assess large amounts of data to provide predictions. Another co-founder, Richard D. Klausner, M.D., served as the Executive Director for Global Health of the Bill and Melinda Gates Foundation.
Former U.S. National Institute of Mental Health (NIMH) director, psychiatrist Thomas Insel, is another co-founder of Mindstrong Health.
In 2013, as NIMH director, Insel pointed out the lack of validity of the DSM-5. He stated: “While DSM has been described as a ‘Bible’ for the field, it is, at best, a dictionary, creating a set of labels and defining each…The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure.” Two years later he predicted that technology would change mental health. “In the future, when we think of the private sector and health research, we may be thinking of Apple and IBM more than Lilly and Pfizer. Here are two fascinating previews of this new world I noted during my travels last week. One was the publication of results from a collaboration between Columbia University and IBM,” he wrote.
Further, “The biomarkers for depression and psychosis and post-traumatic stress disorder are likely to be objective measures of cognition and behavior, which can be collected by smartphones.”
In 2015, Insel began working with Verily, a life sciences tech company owned by Google’s parent company Alphabet, Inc. After a short stint there, he left to co-found Mindstrong Health with Dagum. In 2020, California Governor Gavin Newsom appointed Insel to be a “special adviser” on the state’s mental health system, despite the fact that Insel remains president of Mindstrong, which California has contracted with.
For general healthcare, the AI market is already lucrative—valued at $2.10 billion, and anticipated to reach $36.15 billion in 2025 and $54.10 billion by 2026. In February 2020, Mobi Health News reported that global investment in mental health technology reached $769 million in 2019, according to a study by early-stage investor Octopus Ventures. There has been an almost five-fold increase in mental health tech investment in the last six years, rising from $159 million in 2014.
In the case of AI in the mental health industry (psychiatry and psychology), this is but another lucrative but ultimately damaging tool that spells only millions more people being arbitrarily labeled, drugged, electroshocked and worse.
 “AI in psychiatry: detecting mental illness with artificial intelligence,” Health Europa, 19 Nov. 2019, https://www.healtheuropa.eu/ai-in-psychiatry-detecting-mental-illness-with-artificial-intelligence/95028/
 “The Brave New World of Mental Health,” The Washington Spectator, 8 Mar. 2019, https://washingtonspectator.org/the-brave-new-world-of-mental-health/
 “Artificial Intelligence Could Help Solve America’s Impending Mental Health Crisis,” TIME, 20 Nov. 2019, https://time.com/5727535/artificial-intelligence-psychiatry/
 Op. cit., Health Europa
 Op. cit., TIME
 Op. cit., TIME
 Op. cit., The Washington Spectator
 “Entrepreneur of the Week: Dr. Alison Darcy, Woebot Labs, Inc.,” The Longevity Network, 18 Jul. 2017, https://www.longevitynetwork.org/spotlight/entrepreneur-of-the-week/alison-darcy-woebot-labs-inc
 Op. cit., The Washington Spectator
 Op. cit., TIME
 S. Graham, et al., “Artificial Intelligence for Mental Health and Mental Illnesses: an Overview,” Current Psychiatry Reports, 2019 Nov 7;21(11):116. doi: 10.1007/s11920-019-1094-0, https://www.ncbi.nlm.nih.gov/pubmed/31701320
 “Mental Health Apps: AI Surveillance Enters Our World,” Mad in America, 21 Mar. 2020, https://www.madinamerica.com/2020/03/mental-health-apps-ai-surveillance/
 Op. cit., Mad in America
 Thomas Insel, “Post by Former NIMH Director Thomas Insel: Look who is getting into mental health research,” NIMH, 31 Aug. 2015, https://www.nimh.nih.gov/about/directors/thomas-insel/blog/2015/look-who-is-getting-into-mental-health-research.shtml
 Op. cit., Mad in America
 https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-healthcare-market-54679303.html; https://www.openpr.com/news/1909058/global-artificial-intelligence-in-healthcare-market-2020
 “Global investment in mental health technology surges above half a billion pounds,” Mobile Health News, 3 Feb. 2020, https://www.mobihealthnews.com/news/europe/global-investment-mental-health-technology-surges-above-half-billion-pounds