Facebook uses AI to spot suicidal users

The tool can scan posts for signs the poster is having a tough time.

Another string has been added to Facebook’s bow. The social media giant is using artificial intelligence to scan users’ posts for signs they’re having suicidal thoughts.

When it finds someone who could be in danger, the company flags the post to human moderators.

These moderators respond by sending the user mental health resources.

In more urgent cases, they contact first responders, who can try to locate the individual.

The social network has been testing the tool for months in the US.

It is now rolling out the program to other countries.

The tool won’t be active in any European Union nations, where data protection laws prevent companies from profiling users in this way.

How the tool is already helping

CEO Mark Zuckerberg said he hoped the tool would remind people that AI is “helping save people’s lives today.”

He added that in the last month alone, the software had helped Facebook flag cases to first responders more than 100 times.

“If we can use AI to help people be there for their family and friends, that’s an important and positive step forward,” wrote Zuckerberg.

The AI looks for comments like ‘are you ok?’ and ‘can I help?’

Despite this emphasis on the power of AI, Facebook isn’t providing details on how the tool judges who’s in danger.

The program has been trained on posts and messages flagged by human users in the past.

The program looks for tell-tale signs, like comments asking users “are you ok?” or “can I help?”
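
Facebook hasn’t published how its model works, but a minimal sketch of the general approach the article describes — a text classifier trained on posts that human reviewers previously flagged, with concerned comments such as “are you ok?” acting as signals — might look like the following. The names, data, and threshold here are illustrative assumptions, not Facebook’s actual system.

```python
# Illustrative sketch only — NOT Facebook's actual system.
# Assumes a labelled history of post + comment text, where label 1 means
# human reviewers previously flagged the content as concerning.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for historical, human-flagged examples.
texts = [
    "post text ... comment: are you ok?",
    "post text ... comment: can I help?",
    "photos from the weekend hike",
    "new recipe turned out great",
]
labels = [1, 1, 0, 0]

# TF-IDF features over words and short phrases feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Content scoring above a review threshold would be routed to human moderators.
new_post = "a friend commented: are you ok? please message me"
risk_score = model.predict_proba([new_post])[0][1]
if risk_score > 0.5:  # placeholder threshold
    print(f"flag for human review (score={risk_score:.2f})")
```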

The technology also examines live streams.

It identifies parts of a video that have more than the usual number of comments, reactions, or user reports.
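
The article gives no detail on the live-video logic either; a rough sketch of the idea — flag the minutes of a stream where combined comment, reaction, and report counts spike well above that stream’s own baseline — could look like this. The threshold and the numbers are made-up assumptions for illustration.

```python
# Illustrative sketch only: flag minutes of a live stream whose activity
# (comments + reactions + reports) spikes well above the stream's own average.
from statistics import mean, stdev

def flag_unusual_segments(counts_per_minute, threshold=1.5):
    """Return minute indices whose count exceeds mean + threshold * stdev."""
    if len(counts_per_minute) < 2:
        return []
    avg = mean(counts_per_minute)
    spread = stdev(counts_per_minute) or 1.0  # avoid a zero cutoff on flat streams
    return [
        minute
        for minute, count in enumerate(counts_per_minute)
        if count > avg + threshold * spread
    ]

# Combined per-minute activity for a hypothetical stream.
activity = [4, 5, 3, 6, 4, 48, 52, 5, 4]
print(flag_unusual_segments(activity))  # -> [5, 6]
```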

Human moderators still do the crucial work of assessing each case the AI flags and deciding how to respond.

Using AI to identify mental health issues

Although this human element should not be overlooked, research suggests AI can be a useful tool in identifying mental health problems.

One study used machine learning to predict whether or not individuals would attempt suicide within the next two years.

It had an 80-90% accuracy rate.

However, the research only examined data from people who had been admitted to a hospital after self-harming.

Wide-scale studies on individuals more representative of the general population are yet to be published.

Privacy concerns

Some may also be worried about the privacy implications of Facebook scanning posts in this way.

The company has previously worked with surveillance agencies like the NSA.

Do we want the company examining user data to make such sensitive judgements?

The company’s Chief Security Officer, Alex Stamos, addressed these concerns on Twitter.

He said that the “creepy/scary/malicious use of AI will be a risk forever.”

That, he argued, is why it is important to weigh “data use versus utility.”

However, TechCrunch writer Josh Constine noted that he’d asked Facebook how the company would prevent the misuse of this AI system and was given no response.

Source: WIRED