Facebook using AI to try to prevent suicide
Facebook is using artificial intelligence to address one of its darkest challenges: stopping suicide broadcasts.
The company said Monday that a tool that lets machines sift through posts and videos and flag when someone may be at risk of suicide is now available to most of its 2 billion users; availability had previously been limited to certain users in the United States. The aim of the artificial intelligence program is to find and review alarming posts sooner, since time is a key factor in preventing suicide.
Facebook said it will use pattern recognition to scan all posts and comments for certain phrases that indicate someone may need help; its human reviewers may then call first responders. It will also apply artificial intelligence to prioritize user reports of a potential suicide: the company said comments such as “Are you ok?” or “Can I help?” can signal that a report needs to be addressed quickly.
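Facebook has not disclosed how this pattern recognition works under the hood. Purely as a loose illustration, a phrase-based first pass over a post and its comments might resemble the sketch below; the phrase list, function name, and example text are assumptions for this article, not Facebook's implementation, and a production system would rely on trained classifiers rather than a hand-written list.

    # Hypothetical sketch, not Facebook's actual system: a real classifier
    # would be a trained model, not a fixed phrase list.
    CONCERN_PHRASES = ("are you ok", "can i help")

    def flag_for_review(post_text, comments):
        """Return True if a post or its comments contain a concerning phrase."""
        texts = [post_text.lower()] + [c.lower() for c in comments]
        return any(phrase in text for text in texts for phrase in CONCERN_PHRASES)

    # A post whose comments include "Are you OK?" would be queued for human review.
    print(flag_for_review("feeling really low tonight", ["Are you OK?"]))  # True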
In the case of live video, viewers can report the video and contact a helpline to seek aid for their friend. Facebook will also give broadcasters the option to contact a helpline or another friend.
Users are also given information on how to contact law enforcement, if necessary.
“We’ve found these accelerated reports – that we have signaled require immediate attention – are escalated to local authorities twice as quickly as other reports,” Guy Rosen, Facebook vice president of product management, wrote in a company blog post.
Facebook has been testing this program in the United States and will roll it out to most of the countries in which it operates, with the exception of those in the European Union. The company did not elaborate on why EU countries – whose privacy and other internet laws differ sharply from those in the United States – are not yet participating, but it said it is speaking with authorities about the best ways to implement such a feature.
The social network focused new energy on identifying and stopping potential suicides after a cluster of live-streamed suicides in April, including one in which a father killed his baby daughter before taking his own life. The company said in May that it would add 3,000 workers to its 4,500-person “community operations” team, which reviews posts and other material reported as violent or otherwise troubling.
Facebook chief executive Mark Zuckerberg said at that time that the company would use artificial intelligence to help identify problem posts across its network, but he acknowledged that this was a very difficult problem to address. “No matter how many people we have on the team, we’ll never be able to look at everything,” he said in May.
The artificial intelligence feature underscores Facebook’s reliance on algorithms to monitor and police its network. In this case, the algorithm determines not only what posts should be reviewed but also in what order humans should review them.
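The article does not say how that ordering is computed. One simple reading, sketched below purely as an assumption, is that a model assigns each report an urgency score and human reviewers work through reports in descending score order; the scores and field names here are invented for illustration.

    # Illustrative review queue: scores and field names are hypothetical.
    reports = [
        {"post_id": "a", "urgency": 0.35},  # routine report
        {"post_id": "b", "urgency": 0.92},  # e.g. comments like "Are you ok?"
    ]

    # Reviewers see the highest-urgency report first.
    for report in sorted(reports, key=lambda r: r["urgency"], reverse=True):
        print(report["post_id"], report["urgency"])  # "b" before "a"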
Facebook has been using artificial intelligence across its site for a variety of tasks. It scans posts for child pornography and other objectionable content that should be removed, and it teaches robots to read human facial expressions. (It denied reports from an Australian researcher in May that it was scanning photos and targeting users with advertisements based on their emotions.) The company did not say whether it would apply anything similar to the AI suicide prevention tool to other situations that raise concerns on the network.