Twitter now allows users to 'rethink' harmful replies
As part of a limited experiment, Twitter will be asking users if they're sure they want to post that mean tweet before they hit publish
On Tuesday, Twitter announced a limited experiment: users who are about to publish a mean reply will see a prompt giving them a chance to rethink it before they hit publish. The company officially describes the feature as "a prompt that gives you the option to revise your reply before it's published if it uses language that could be harmful."
When things get heated, you may say things you don't mean. To let you rethink a reply, we're running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it's published if it uses language that could be harmful. — Twitter Support (@TwitterSupport) May 5, 2020
The experiment closely resembles steps Instagram has taken to reduce bullying on its platform. Since last year, when a user writes a comment or post that could be offensive to others, Instagram asks whether they're sure they want to publish it and gives them an opportunity to edit it first.
Instagram flags content as potentially harmful when it resembles comments that have previously been reported.
For now, Twitter's experiment is limited to iOS devices; however, depending on how successful it proves at reducing the number of harmful tweets published on the platform, it could expand to other operating systems.