How AI can help prevent abuse of sports players on social media

Online abuse of players is a major problem for sports clubs and organisations around the world. Pattr’s AI-powered solutions keep players safe and online conversations healthy.
Players are demanding action on social media abuse
The growing dominance of social media over the past decade and a half has completely changed the landscape of professional sports.
Some of those shifts have been positive. Through platforms like Facebook, Twitter and Instagram, players have never been more in touch with their fanbase or more aware of their position as public figures. Social media is a great leveller — it breaks down barriers, facilitates communication and gives ordinary people incredible access to their favourite players both on and off the field.
But this comes with problems. The heated, tribal nature of pro sports fan culture can easily spiral into trolling and abuse, and it is having a devastating effect on players and their mental health.
Receiving one offensive comment is bad enough. But when it’s an avalanche of abusive messages among the millions received by top players, it becomes a significantly bigger issue.
In November, two UK academics used sentiment analysis techniques to study messages sent to professional athletes on Twitter. They found that almost all of the athletes they studied — including superstars like Roger Federer and Cristiano Ronaldo — were subject to some level of abuse, including “criticism of their professional performances as well as personal insults”.
After the Tokyo Olympics last year, World Athletics published a study on targeted, abusive messages sent to athletes via social media. It found “disturbing” levels of abuse, including “sexist, racist, transphobic and homophobic posts”. Of those, 65% were deemed “gravely abusive”, requiring intervention from the platforms that hosted them. The study found 87% of abusive posts were directed towards female athletes.
With players now expected to maintain active social media presences to engage with their fans and support the operations of their organisations, this quickly becomes a workplace safety issue.

Sports clubs take action
It’s no wonder, then, that dealing with this problem has become a top priority for sports businesses. In January, the Daily Mail reported that Liverpool F.C. had hired a full-time therapist to help its players deal with the effects of trolling and online abuse, and in March it was reported the Premier League was actively investigating 400 reports of such abuse.
This growing awareness has been felt in Australia too. AFL team St Kilda specifically put out a public statement last July calling out racism directed at its players online. “Too many times this year, our players and their loved ones have been victims of this type of abuse — enough is enough,” the statement read. “If you engage in online abuse, you are not with us.”
The scale of trolling and abuse faced by sports players online has also led to attempts at action from social media platforms. When Instagram introduced more granular controls for users to deal with offensive comments and DMs in August, it briefed the media that abuse of sports stars was one of the driving motivations behind the change.

AI provides a solution
Despite the growing appetite to do something about the problem — in large part driven by the players themselves — disciplinary moves from sports clubs and in-house solutions offered by social media platforms have been unable to keep abuse in check. We know what the problem is, but the issue seems too large for stakeholders to get a handle on it.
Part of the challenge comes with scale. Sports organizations can’t realistically be expected to investigate every single report of bad behaviour online. Giving users the ability to filter or block certain keywords in comments is helpful for the average person, but it doesn’t mean much when trying to build a healthy community among hundreds of thousands (or even millions) of followers.
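To see why keyword filtering falls short, consider a minimal sketch in Python. The blocklist and example comments here are invented for illustration and aren’t drawn from any real moderation system:

```python
# A naive keyword blocklist filter, of the kind platforms offer individual users.
BLOCKLIST = {"loser", "trash"}  # hypothetical blocked terms

def keyword_filter(comment: str) -> bool:
    """Return True if the comment should be hidden."""
    words = (w.strip(".,!?").lower() for w in comment.split())
    return any(w in BLOCKLIST for w in words)

print(keyword_filter("What a loser!"))   # True — exact match is caught
print(keyword_filter("What a l0ser!"))   # False — one swapped character evades it
```

Trolls routinely defeat exact-match filters like this with misspellings, spacing tricks and emoji, which is why static lists of banned words can’t keep a large community healthy on their own.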
Pattr’s AI-powered solution takes a different approach to online abuse. We know that the way people talk online is constantly developing and changing, and simple filtering systems can’t be expected to keep up.
Nor can human social media teams be expected to pick up the slack. Not only is sifting through abusive comments in order to hide and delete them a lousy job, it doesn’t scale particularly well. With the amount of lively conversation that goes on around an AFL Grand Final or an English Premier League match, not even a fully-staffed social media team can be expected to catch everything.
We do things a little differently. Our Conversational AI combines sentiment and intent analysis with sophisticated text and image classification to identify harmful content in whatever form it appears — whether it’s a comment, an image or an emoji — and hide it before players and the community see it.
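As a rough illustration of the idea — not Pattr’s actual models; the word lists, threshold and function names below are invented — a moderation pass can score each incoming comment and hide anything that falls below a sentiment threshold before it is ever shown:

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    visible: bool
    reason: str

# Toy lexicons for illustration; production systems use trained classifiers.
NEGATIVE = {"hate", "awful", "useless"}
POSITIVE = {"great", "love", "brilliant"}

def sentiment_score(text: str) -> float:
    """Crude sentiment: (positive hits - negative hits) / word count."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / max(len(words), 1)

def moderate(comment: str, threshold: float = -0.2) -> ModerationResult:
    """Hide comments whose sentiment score falls below the threshold."""
    score = sentiment_score(comment)
    if score < threshold:
        return ModerationResult(visible=False, reason=f"sentiment {score:.2f}")
    return ModerationResult(visible=True, reason="ok")

print(moderate("You are useless, I hate this team").visible)  # False — hidden
print(moderate("What a great win, love it").visible)          # True — shown
```

The key design point is that moderation happens automatically at ingest time, so abusive content never reaches players’ or fans’ feeds; real systems replace the toy word lists with trained models that keep pace with how language changes.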
Pattr sits quietly behind Facebook, Instagram and Twitter 24/7, ensuring nothing is missed and everything is actioned. This lets players and social teams focus on the positive stuff, like posting great content and engaging with fans, without worrying about abuse and spam clogging up their feeds.
We firmly believe that AI can be the bedrock of a solution to minimise the effects of online abuse and keep players safe. By tackling abusive comments and posts as soon as they are made, and keeping them away from the eyes of players and the community, we can minimise the harm of online hate — and make social media a healthier space for genuine conversation and engagement.
Interested in a trial? Speak to our sports team today.