On Monday, Twitter said it would allow some users to fact-check misleading tweets, the company’s latest attempt to combat misinformation.
Users participating in the program, known as Birdwatch, can add notes to rebut false or misleading posts and rate the helpfulness of notes written by other users. Users in the United States who have verified their email address and phone number with Twitter, and who have not violated Twitter’s rules in recent months, can sign up for Birdwatch.
Twitter will start Birdwatch as a small test with 1,000 users, and the notes they produce won’t show up on Twitter itself but will appear on a separate website. If the test is successful, Twitter plans to expand the program to more than 100,000 people in the coming months and make their contributions visible to all users.
Twitter continues to grapple with misinformation on its platform. In the months leading up to the US presidential election, Twitter added fact-checking labels written by its employees to tweets from prominent accounts, temporarily disabled its recommendation algorithm, and added context to trending topics. Even so, false claims about the coronavirus and the election went viral on Twitter despite the company’s efforts to remove them. At the same time, Twitter faced backlash from some users who argued that the company was taking down too much content.
Giving some control directly to users could help restore trust and allow the company to move faster in addressing false claims, Twitter said.
“We apply labels and add context to tweets, but we don’t want to limit our efforts to cases where something breaks our rules or receives public attention,” Keith Coleman, Twitter’s vice president of product, wrote in a blog post announcing the program. “We also want to broaden the range of voices contributing to solving this problem, and we believe a community-based approach can help.”