Twitter will now prompt users to review and revise “potentially harmful or offensive” replies.
The social media platform, which has often faced criticism over abusive user behaviour, tested the feature last year. Twitter said the tests showed that the prompts reduced offensive replies.
On Wednesday, the company said it would roll out the prompts to English-language accounts using Twitter on iOS and Android.
In a blog post, Twitter said it had found that the prompts led 34% of people to revise their initial reply or to decide against sending it at all. After being prompted for the first time, users composed 11% fewer offensive replies on average, Twitter said.
Prompted users were also less likely to receive offensive or harmful replies in return.
When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
You had feedback about prompts to revise a reply so we made updates:
▪️ If you see a prompt, it’ll include more info on why you received it
▪️ We’ve improved how we consider the context of the conversation before showing a prompt
This is now testing on Android, iOS, and web. pic.twitter.com/rxdttI1zK2
— Twitter Support (@TwitterSupport) August 10, 2020
Say something in the moment you might regret? 😬 We’ve relaunched this experiment on iOS that asks you to review a reply that’s potentially harmful or offensive.
Think you’ve received a prompt by mistake? Share your feedback with us so we can improve. pic.twitter.com/t68az8vlYN
— Twitter Support (@TwitterSupport) February 22, 2021
After testing and improving prompts that ask you to review a potentially harmful or offensive reply, we learned that this feature can help encourage more meaningful convos.
— Twitter Support (@TwitterSupport) May 5, 2021