Automating Mental Health: The Global Promise and Peril
from Digital and Cyberspace Policy Program and Net Politics

The use of algorithmic and data-driven technology in mental health care has expanded rapidly, posing new challenges for public governance. 
United Nations Secretary-General António Guterres speaks during a news conference at U.N. headquarters. REUTERS/Eduardo Munoz

Piers Gooding is a Mozilla Foundation fellow and researcher at the Melbourne Social Equity Institute at the University of Melbourne Law School.

In November 2018, a Facebook employee in Texas alerted police in the Indian state of Maharashtra about a 21-year-old man who had posted a suicide note on his profile. The intervention came after Facebook expanded its pattern recognition software to detect users expressing suicidal intent. Mumbai police rushed to the young man’s home, reaching him in time to provide aid and counseling.
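
Facebook has not published the details of this system, but the general shape of such triage, scoring post text for risk and escalating high-scoring posts to human reviewers, can be sketched briefly. The sketch below is a minimal illustration under assumed conditions: the toy data, the bag-of-words model, and the review threshold are hypothetical stand-ins, not Facebook's method.

```python
# Minimal sketch of text-based suicide-risk triage. All data, model
# choices, and thresholds are hypothetical; the deployed system is
# proprietary and far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (1 = expresses suicidal intent, 0 = does not).
posts = [
    "I can't go on anymore, this is goodbye",
    "nobody would miss me if i were gone",
    "had a great day hiking with friends",
    "excited to start the new job on monday",
]
labels = [1, 1, 0, 0]

# A common baseline: bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(posts, labels)

REVIEW_THRESHOLD = 0.5  # hypothetical cutoff for escalation

def needs_human_review(post: str) -> bool:
    """Flag the post for a human reviewer if its risk score is high."""
    risk = model.predict_proba([post])[0][1]  # probability of class 1
    return risk >= REVIEW_THRESHOLD
```

Even in this toy version, the model only flags; as in the Maharashtra case, it takes people, first reviewers and then police, to act on the alert.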

A year earlier, Canadians with a documented history of mental health hospitalizations were refused entry at the U.S. border. An inquiry by the Office of the Privacy Commissioner of Canada found that the Toronto Police had collected non-criminal mental health data, which had been shared with U.S. Customs and Border Protection. U.S. customs officials then used the information to turn away several Canadian citizens.

These incidents highlight the changing nature of global responses to mental health. The use of algorithmic and data-driven technology in mental health care is expanding rapidly, with new products including psychiatric pharmaceuticals with inbuilt microchips that "track" medication compliance and "precision psychiatric medicine" that uses big data to tailor treatment to individuals' genes and lifestyles. Much of this expansion is occurring in a vacuum of public debate and governance, a vacuum that unscrupulous businesses and, in some cases, government agencies have exploited.

Distress and the Pandemic

The COVID-19 pandemic has accelerated the digitization and virtualization of health and social services as governments and communities grapple with lockdown and social distancing measures. The Food and Drug Administration, for example, suspended many of its usual rules for digital therapeutic devices in mental health care to widen access to care during the pandemic. Similar moves have been made by governments elsewhere in the world and civil society organizations seeking to offer support to people in distress.

The pandemic itself has clearly undermined mental well-being on a grand scale. UN Secretary-General António Guterres made this point in his address for World Mental Health Day, warning that rates of mental ill-health worldwide were already vast before the pandemic hit.

In economic terms, the World Health Organization estimates that depression and anxiety cost the global economy $1 trillion per year in lost productivity. Although these figures are contested (including by those who question the expanded use of certain psychiatric classifications), the enormous scale of distress, however it is conceived, makes it unsurprising that advocates are searching for a technological fix. And mental health care, as one of the few areas of health care that doesn't typically require a physical examination by treating clinicians, is well placed for a "digital revolution."

Are Distress and Mental Ill-Health Amenable to Technological Solutions?

This digital transformation goes well beyond online counseling. In justice systems, courts have used computational modeling to estimate the likelihood that convicted persons with mental health conditions will reoffend, and forensic psychiatric patients have been tracked using electronic GPS monitoring devices in several jurisdictions.

Mobile phone apps for mental well-being have proliferated, with some reports suggesting there are over 10,000 apps designed to improve users' mental health in some way. Advances in mobile technology have also given rise to a research field and industry concerned with "digital phenotyping," or behavioral sensing, which uses phones and other devices, such as Fitbits, to undertake continuous, passive assessment of behavior, mood, and cognition by applying machine learning to physiological and biometric data. Mindstrong Health, a company co-founded by former U.S. National Institute of Mental Health director Thomas Insel with start-up investment from Jeff Bezos's venture capital firm, offers an app designed to track users' mood and cognition to identify early signs of depression and other diagnostic categories.

For the estimated five billion people who will be using smartphones by 2025, ubiquitous sensing and digital technologies have the potential to profoundly transform societal responses to psychological distress. But the promise and perils have been little discussed, and ethical and legal considerations are scarce in the existing research on algorithmic and data-driven technology in mental health care.
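
The core mechanic of digital phenotyping is easy to illustrate. In the sketch below, the feature names, toy data, and off-the-shelf classifier are assumptions for illustration, not details of Mindstrong or any real product; the point is simply how passively sensed signals might be mapped to a risk estimate.

```python
# Illustrative digital-phenotyping sketch: inferring a mood-risk estimate
# from passively sensed phone and wearable features. The features, data,
# and labels are hypothetical; real systems are proprietary and far richer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per user-day of passive sensing:
# [hours_slept, screen_unlocks, typing_speed_cpm, km_traveled, calls_made]
X = np.array([
    [7.5, 40, 210, 12.0, 4],   # typical day
    [8.0, 35, 205,  9.5, 5],
    [4.0, 95, 150,  0.5, 0],   # poor sleep, low mobility, social withdrawal
    [5.0, 88, 140,  1.0, 1],
])
y = [0, 0, 1, 1]  # toy labels: 1 = a later-confirmed depressive episode

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new day of sensor data.
today = [[4.5, 90, 145, 0.8, 0]]
print("Estimated episode risk:", clf.predict_proba(today)[0][1])
```

The sketch makes one thing plain: the raw material is a continuous stream of intimate behavioral data, which is precisely what raises the governance questions discussed below.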

This gap, together with the sheer volume and velocity of information that can be shared, undermines technical and institutional transparency and prevents responsible public governance of digital mental health initiatives. Such is the challenge of the increasing datafication of life more broadly, with its complex balance sheet of harm and benefit. Health data might improve public health outcomes, but the same data could be repurposed not just by government agencies that may leverage it against a person's interests, but by a multitude of private parties, such as pharmaceutical companies, insurers, and data brokers. Cybercriminals may also place a premium on mental health data, as appears to have been the case in the recent hack of the confidential treatment records of tens of thousands of psychotherapy patients in Finland.

Few public governance processes currently exist that can set norms for digital mental health interventions. AI-based suicide alerts on social media are a good example. Whereas suicide prediction in medical systems is governed by laws such as the U.S. Health Insurance Portability and Accountability Act, as well as regulations that protect the safety of human research subjects, social media platforms using the same technology operate outside the health-care system, where the practice is almost completely unregulated and corporations often guard their prediction methods as proprietary trade secrets. This is just one unresolved question of public governance in the brave new world of digitized mental health care; many more are coming.

Creative Commons: Some rights reserved.
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.