Social media platforms are under scrutiny — but AI isn’t a silver bullet
Facebook has been in the news as internal documents show it is struggling to effectively moderate its platform with Artificial Intelligence. What does this mean for the future of AI and social media?
There’s a general misunderstanding about what AI can and can’t do
Internal company reports published by media outlets have highlighted that Facebook’s long-held promise that Artificial Intelligence would address the company’s content moderation problems has not been fulfilled.
According to the Wall Street Journal, the documents showed that when it comes to hate speech “Facebook employees have estimated the company removes only a sliver of the posts that violate its rules—a low-single-digit percent, they say.”
As a Conversational AI firm, we’ve spent a lot of time thinking about the capabilities, ethics, and limits of both Artificial Intelligence and the social media spaces where we do a lot of our work.
Firstly, I think it’s important to emphasise that we’re a long way off what I call ‘general AI’ for use in everyday life. It feels as though society is striving for general AI, or accepts it is inevitable, without considering the outcomes — both good and bad. When it comes to AI, it’s important to remember that humans are in control here.
One quote from the Wall Street Journal piece really leapt out at me.
“This is one of the hardest problems in machine learning,” said J. Nathan Matias, an assistant professor at Cornell University. “It’s also an area that so many companies and policy makers have just decided was going to be the solution—without understanding the problem.”
From where I’m sitting, it’s misleading to frame this as a social platform failure — it is a failure to anticipate scale. What’s plaguing Facebook is the sheer amount of the content its AI is trying to moderate, rather than AI-based content moderation itself.
This is largely because, at this point in time, the industry has shown that AI is most effective when trained and used in a narrow scope.
I don’t envy the task ahead of Facebook. Messaging flows in a number of ways, and they have to consider them all: individual to organisation, individual to individual, and organisation to individual.
At Pattr, we’re focusing on the first of these: individual to organisation. This means we can effectively train our AI, and tools such as sentiment analysis, in a targeted way, giving organisations the tools to look after their Conversation Health.
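To make the “narrow scope” point concrete, here is a deliberately tiny sketch of lexicon-based sentiment triage for individual-to-organisation messages. This is purely illustrative: the word lists, thresholds, and routing labels are hypothetical, and it does not reflect Pattr’s actual models, which are far more sophisticated.

```python
# Hypothetical sketch: lexicon-based sentiment triage for customer messages.
# Word lists and thresholds are invented for illustration only.

POSITIVE = {"great", "love", "thanks", "helpful", "fast"}
NEGATIVE = {"broken", "hate", "slow", "terrible", "useless"}

def sentiment_score(message: str) -> float:
    """Return a score in [-1, 1] based on matched sentiment words."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    hits = [1 if w in POSITIVE else -1 for w in words
            if w in POSITIVE or w in NEGATIVE]
    if not hits:
        return 0.0  # no sentiment-bearing words found
    return sum(hits) / len(hits)

def triage(message: str) -> str:
    """Route a message to a queue based on its sentiment score."""
    score = sentiment_score(message)
    if score < -0.5:
        return "escalate"      # clearly unhappy: route to a human agent
    if score > 0.5:
        return "acknowledge"   # happy: an automated reply is fine
    return "review"            # mixed or neutral: queue for review
```

Because the task is narrow (one message direction, one decision), even a toy model like this behaves predictably, which is exactly why narrowly scoped AI is easier to get right than moderating an entire open platform.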
The problems social media platforms are experiencing are bigger than the companies themselves
I’m of the view that the content moderation problems facing social media platforms are complex, and they cannot be solved by these companies alone.
Decentralisation and a multi-layered approach can have an impact, allowing the platforms, users and organisations to all be part of the solution.
Why? Because social networks have opened up new communities of people, and new ways of interacting with supporters and promoters that have brought significant value to everyone.
It is incumbent on us all to be better online citizens. At the same time, the organisations who benefit from these communities need to take a role in protecting them.
Ultimately, issues such as hate speech and abuse online are problems in wider society. A minority of people have made some corners of the internet toxic, and they’re detracting from the good things that social media gives us.
Now, it’s a problem that we’re all responsible for solving.
Thinking about how you can improve Conversation Health for your organisation? Reach out to us.