Suicide risk assessment hasn’t improved in 40 years. AI may finally change that.

We’ve covered the use of AI in a variety of industries, from law to sports. But advances in medicine are perhaps the most important our society can make. Unfortunately, they’re also among the most challenging to achieve. From cancer research to Alzheimer’s studies, scientists are working tirelessly to better understand devastating conditions and create better treatments. But progress moves slowly, and nowhere is this more apparent than in suicide prevention. In 2016, researchers reached the grim conclusion “that there has been no improvement in the accuracy of suicide risk assessment over the last 40 years.”

The challenges in suicide prevention are substantial. When deciding whether to hospitalize potentially suicidal patients, clinicians must determine the likelihood that someone will take their own life in the immediate future. In some cases, hospitalization is vital. But in others, the patient might benefit more from therapeutic techniques and coping mechanisms that will help them manage acute emotional crises in the future. These are life-and-death decisions, and the pressure is enormous.

Yet psychiatrists and other practitioners can refer only to guidelines that often prove less than useful in assessing someone’s suicide risk. A working group from the Department of Veterans Affairs and the Department of Defense said of existing suicide screening protocols, “suicide risk assessment remains an imperfect science, and much of what constitutes best practice is a product of expert opinion, with a limited evidence base.” 

Suicide is the tenth leading cause of death among Americans, with more than 44,000 people dying by their own hands each year. Depression and anxiety, which are closely correlated with suicide attempts, are on the rise in the U.S., including among teenagers. Last year, the suicide rate in the U.S. reached its highest point in 30 years. Doctors, caregivers, and loved ones are desperate to help people who are suffering. But many of the indicators commonly used to gauge someone’s risk level, such as past hospitalizations or incidents of self-harm, can be misleading.

Fortunately, researchers may have found a powerful new tool for improving risk assessment methods. Recent experiments in using artificial intelligence to predict whether patients are at risk of suicide have shown promising results and surfaced indicators that human observers are likely to miss.

Augmented suicide prevention 

Software-based suicide prevention monitoring systems have already been used to track young students’ web searches and flag alarming usage patterns, such as those related to suicide. But artificial intelligence could offer a sharper, more proactive approach to risk detection and prevention. One team of researchers is working on a machine learning algorithm that so far has predicted with 80-90% accuracy whether a patient will try to commit suicide in the next two years. When predicting whether someone might try to kill themselves in the next week, the accuracy rose to 92%.
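
To make the setup concrete, here is a rough sketch of how such a predictor might be framed: as horizon-based binary classification over coded health records. Everything below, including the synthetic data, the feature layout, and the model choice, is an illustrative assumption, not the researchers’ actual pipeline.

```python
# Illustrative sketch only: framing risk prediction as horizon-based
# binary classification over coded health records. The data, features,
# and model here are invented, not the researchers' actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each row is one patient snapshot; each column a coded record feature
# (diagnoses, prescriptions, prior admissions). All synthetic here.
X = rng.random((5167, 40))
# Label: did an attempt occur within the chosen window? In practice a
# separate model would be trained per horizon (e.g., two years vs. one
# week), with labels derived from each window.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 5167)) > 1.0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```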

The algorithm learned by analyzing 5,167 cases in which patients had either harmed themselves or expressed suicidal tendencies. One might wonder how a computer program could accomplish in months what doctors with years of experience struggle with daily. The answer is by finding underlying indicators that humans might not think to look for. While talk of suicide or depression is an obvious indicator that someone is suffering, frequent use of melatonin may not stand out. Melatonin doesn’t cause suicidal behaviors, but it is used as a sleep aid. According to the researchers, reliance on the supplement could indicate a sleep disorder, which, like anxiety and depression, correlates strongly with suicide risk.
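
That paragraph is essentially a feature-discovery story, and it can be illustrated with standard tooling. The toy below, with synthetic records and invented feature names like melatonin_rx, shows how permutation importance can rank signals a clinician might not think to check. It is an assumption-laden sketch, not the study’s method.

```python
# Toy continuation: once a model is trained, permutation importance can
# rank which record features drive its predictions. Feature names such
# as "melatonin_rx" are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["melatonin_rx", "prior_self_harm", "sleep_disorder_dx",
                 "ssri_rx", "er_visits_12mo", "opioid_rx"]

n = 5167
X = rng.integers(0, 2, size=(n, len(feature_names))).astype(float)
# Synthetic label that quietly depends on the first two flags, standing
# in for a non-obvious signal hiding in the records.
y = ((X[:, 0] + X[:, 1] + rng.random(n)) > 1.8).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Print features from most to least influential.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:18s} {score:.3f}")
```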

Researchers are discovering that suicide risk may be better assessed through a complex network of indicators than through a few tell-tale signs, such as a history of drug abuse or depression. Machine learning systems can identify common factors among thousands of patients to find the threads that doctors and scientists don’t see. They can also make sense of the web of risk factors in ways the human mind simply can’t process. For instance, taking acetaminophen may indicate a higher chance of attempting suicide, but only in combination with other factors, as the toy example below illustrates. Computer programs that can identify those combinations could dramatically enhance doctors’ abilities to predict suicide risk.
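
A small synthetic experiment makes the “only in combination” point vivid. Below, a flag carries no signal on its own (it is uncorrelated with the outcome in isolation) yet fully determines risk jointly with a second flag, so a linear model scores at chance while a tree ensemble, which can represent combinations, succeeds. The feature names and data are invented.

```python
# Toy illustration (synthetic data, invented feature names): a flag that
# carries no signal on its own can still matter in combination. Here the
# label depends only on the interaction of two flags, so a linear model
# sees nothing while a tree ensemble captures the combination.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 4000
acetaminophen = rng.integers(0, 2, n)
other_factor = rng.integers(0, 2, n)
noise = rng.integers(0, 2, (n, 3))          # irrelevant features
X = np.column_stack([acetaminophen, other_factor, noise])

# Risk depends on the two flags jointly; neither flag alone correlates
# with the label.
y = acetaminophen ^ other_factor

print("linear:", cross_val_score(LogisticRegression(), X, y, cv=5).mean())
print("trees: ", cross_val_score(GradientBoostingClassifier(), X, y,
                                 cv=5).mean())
```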

Machine learning is being explored for other predictive uses as well. Scientists are experimenting with machine learning analysis of fMRI brain scans to gauge a patient’s suicide risk. In a recent study, a machine learning program identified which subjects were experiencing suicidal ideation with 90% accuracy. Granted, the study involved only 34 people, so more research is needed. But the results align with other work being done, and the potential for machine learning to play a critical role in suicide prediction looks strong.
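
With only 34 subjects, how a model is evaluated matters as much as the model itself. A common approach at this scale is leave-one-out cross-validation, where each subject is held out once. The sketch below uses random placeholder features in place of real fMRI activation patterns and assumes an even split of subjects; it shows the evaluation setup, not the study’s actual pipeline.

```python
# Sketch of small-sample evaluation (n=34, matching the cohort size).
# The features here are random placeholders, not real fMRI activation
# patterns, and the even 17/17 split is an assumption.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(34, 50))        # placeholder per-subject features
y = np.array([0] * 17 + [1] * 17)    # ideators vs. controls (assumed)

# Each subject is held out once and predicted by a model trained on the
# other 33; the mean of those 34 verdicts is the reported accuracy.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```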

Machine learning could also become an essential tool for diagnosing post-traumatic stress disorder (PTSD). Between 11 and 20% of veterans who served in the Iraq and Afghanistan wars suffer from PTSD, and the most recent data available showed veterans accounting for 18% of suicide deaths in the U.S. Psychiatrists and counselors may struggle to diagnose PTSD if soldiers don’t share the full extent of their trauma or symptoms, making it difficult to know whether they’re at risk for committing suicide. However, one ongoing study is looking at how voice analysis technology and machine learning can be used to diagnose PTSD and depression. The program is being fed thousands of voice samples and learning to analyze cues such as pitch, tone, speed, and rhythm for signs of brain injury, depression, and PTSD. Doctors would then be able to help people who can’t or won’t articulate the pain they’re experiencing.
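
The acoustic side of that work is easier to picture with an example. The sketch below shows how pitch, energy, and timbre statistics might be pulled from a voice sample with the open-source librosa library and summarized for a downstream classifier. The study’s actual feature set and model are not specified here, so treat this purely as an illustration.

```python
# Illustrative only: extracting the kinds of vocal cues mentioned above
# (pitch, energy, timbre statistics) with librosa, then summarizing them
# as a fixed-length vector for a downstream classifier.
import numpy as np
import librosa

def voice_features(path):
    y, sr = librosa.load(path, sr=16000)            # mono waveform
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)   # per-frame pitch (Hz)
    rms = librosa.feature.rms(y=y)[0]               # per-frame energy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre summary
    return np.concatenate([
        [np.nanmean(f0), np.nanstd(f0)],   # pitch level and variability
        [rms.mean(), rms.std()],           # loudness level and variability
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])

# features = voice_features("sample.wav")  # -> vector for a classifier
```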

Other forms of AI will become increasingly useful in the race to prevent suicide as well. Natural language processing algorithms could analyze social media posts and messages to identify concerning phrases or conversations. They could then alert humans, who could intervene by reaching out to the potentially troubled person or connecting them with someone who can offer support. Popular social media platforms already offer resources and support, to varying degrees, both for users who are considering harming themselves and for concerned friends and family who spot alarming posts.

However, increasingly sophisticated natural language processing and machine learning techniques could identify at-risk users with greater accuracy and frequency. If we rely solely on people to report concerning content, there’s a good chance cries for help will be missed. The massive amount of content uploaded to popular social platforms each minute makes it impossible for users to see everything their friends have posted. But computer programs can scan posts around the clock for language that signals trouble, adding an important safety net for people who need help.
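
In practice, such a system would likely look less like an oracle and more like a triage pipeline: score content, and route anything above a threshold to a trained human reviewer. The sketch below uses toy posts and labels standing in for annotator-labeled data; it shows the shape of the idea, not any platform’s actual system.

```python
# Minimal sketch (invented training data) of a triage pipeline: score
# each post, and queue anything above a threshold for human review
# rather than acting on it automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice: thousands of posts labeled by trained annotators.
train_posts = ["i can't see a way forward anymore",
               "great game last night!",
               "nobody would miss me if i was gone",
               "excited for the weekend hike"]
train_labels = [1, 0, 1, 0]   # 1 = concerning, 0 = benign (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression())
clf.fit(train_posts, train_labels)

def triage(post, threshold=0.7):
    """Return True if the post should be queued for human review."""
    return clf.predict_proba([post])[0, 1] >= threshold
```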

Some researchers are even looking to leverage data mining and behavioral data to better identify and assist people in need. Commercial brands regularly use behavioral information to hone their marketing messages according to people’s buying patterns and preferences. But doctors, social workers, and support organizations could soon use those tools for a more altruistic purpose.    

Wearables may also play a role in suicide prevention. If doctors could persuade at-risk patients to use tracking apps that gather data about their speech patterns and behavioral changes, they might be able to use that information to identify when someone is at elevated risk of becoming suicidal. The breadth of data gathered through apps and wearables could be analyzed to better understand mental health issues and to intervene before patients’ circumstances become extreme.
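
One plausible shape for that analysis is change detection against a person’s own baseline rather than a population average. In the sketch below, a synthetic sleep metric stands in for real wearable data, and days that drift well outside a rolling personal norm are flagged so that a clinician, not the software, decides what to do next.

```python
# Sketch (synthetic data, invented metric): flag days when a wearable
# metric drifts well outside the wearer's own rolling baseline, the kind
# of change a monitoring app might surface to a clinician.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
days = pd.date_range("2024-01-01", periods=90, freq="D")
sleep_hours = pd.Series(rng.normal(7.5, 0.6, 90), index=days)
sleep_hours.iloc[70:] -= 2.5   # simulated sustained sleep disruption

# Personal baseline from the prior four weeks (shifted to avoid using
# the current day in its own baseline).
baseline = sleep_hours.rolling(28, min_periods=14).mean().shift(1)
spread = sleep_hours.rolling(28, min_periods=14).std().shift(1)
z = (sleep_hours - baseline) / spread

flagged = z[z < -2].index      # days far below the personal norm
print(flagged)
```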

From heartbreak to healing 

It’s important to note that while AI may support suicide prevention, people will continue to play a critical role in helping at-risk loved ones recover and maintain a healthy mental state. Social connectedness and support are essential to suicide prevention. Regular, positive interactions with family, friends, peers, religious communities, and cultural groups can mitigate the effects of risk factors like trauma and drug dependence and alleviate anxiety and depression. 

Nothing is more heartbreaking to a family than learning a loved one has taken their own life and wondering what they could have done to help. Artificial intelligence may soon give people a greater chance of intervening before it’s too late and give those suffering from severe mental illness an opportunity to experience rich, healthy lives.