Governments and observers across the world have repeatedly raised concerns about the monopoly power of Big Tech companies and the role the companies play in disseminating misinformation. In response, Big Tech companies have tried to preempt regulations by regulating themselves.
Facebook’s announcement that its Oversight Board will decide whether former President Donald Trump can regain access to the account the company suspended, along with other high-profile moves by technology companies to address misinformation, has reignited the debate about what responsible self-regulation by technology companies should look like.
Research shows three key ways social media self-regulation can work: deprioritize engagement, label misinformation and crowdsource accuracy verification.
The technology companies could adopt a content-labeling system to identify whether a news item is verified or not. During the election, Twitter announced a civic integrity policy under which tweets labeled as disputed or misleading would not be recommended by its algorithms. Research shows that labeling works. Studies suggest that applying labels to posts from state-controlled media outlets, such as the Russian channel RT, could mitigate the effects of misinformation.
In an experiment, researchers hired anonymous temporary workers to label trustworthy posts. The posts were subsequently displayed on Facebook with the crowd workers’ labels attached. In that experiment, crowd workers from across the political spectrum were able to distinguish between mainstream sources and hyperpartisan or fake news sources, suggesting that crowds often do a good job of telling the difference between real and fake news.
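To make the crowdsourcing idea concrete, the toy sketch below shows one way ratings from a politically balanced pool of workers could be combined into a single trust score for a news source. The group labels, the 0-to-1 rating scale and the averaging scheme are illustrative assumptions, not details of the experiment described above.

```python
# Toy illustration of aggregating crowd ratings into a single trust score.
# Balancing across self-reported ideology is an assumption about how such a
# scheme might be set up, not a description of the experiment itself.
from statistics import mean

def source_trust_score(ratings_by_group):
    """Average per-group means so no one political group dominates the score.

    ratings_by_group: dict mapping a group label (e.g. 'left', 'right')
    to a list of 0-1 trustworthiness ratings from workers in that group.
    """
    return mean(mean(ratings) for ratings in ratings_by_group.values())

ratings = {
    "left":  [0.9, 0.8, 0.85],
    "right": [0.7, 0.75, 0.8],
}
print(source_trust_score(ratings))  # 0.8: a source both groups rate as fairly trustworthy
```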
Experiments also show that individuals with some exposure to news sources can generally distinguish between real and fake news. Other experiments found that providing a reminder about the accuracy of a post increased the likelihood that participants shared accurate posts more than inaccurate posts.
In my own work, I have studied how combinations of human annotators, or content moderators, and artificial intelligence algorithms – what is referred to as human-in-the-loop intelligence – can be used to classify health care-related videos on YouTube. While it is not feasible to have medical professionals watch every single YouTube video on diabetes, it is possible to have a human-in-the-loop method of classification. For example, my colleagues and I recruited subject-matter experts to give feedback to AI algorithms, which resulted in better assessments of the content of posts and videos.
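A minimal sketch of how such a human-in-the-loop pipeline might be wired together is shown below. It assumes video transcripts are already available as text; the TF-IDF features, logistic regression model and confidence threshold are stand-ins for illustration, not the specific methods used in my studies.

```python
# Minimal human-in-the-loop sketch: the model auto-labels only what it is
# confident about and routes uncertain items to subject-matter experts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

vectorizer = TfidfVectorizer(max_features=5000)
model = LogisticRegression(max_iter=1000)

def train(transcripts, labels):
    """Fit the classifier on transcripts already labeled by experts."""
    X = vectorizer.fit_transform(transcripts)
    model.fit(X, labels)

def triage(new_transcripts, confidence_threshold=0.8):
    """Auto-label confident predictions; queue the rest for human review."""
    X = vectorizer.transform(new_transcripts)
    probabilities = model.predict_proba(X)
    auto_labeled, review_queue = [], []
    for text, probs in zip(new_transcripts, probabilities):
        if probs.max() >= confidence_threshold:
            auto_labeled.append((text, model.classes_[probs.argmax()]))
        else:
            review_queue.append(text)  # sent to subject-matter experts
    return auto_labeled, review_queue

# Expert labels from the review queue are then added back to the training
# data, so the model improves on exactly the cases it found hardest.
```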
Tech companies have already employed such approaches. Facebook uses a combination of fact-checkers and similarity-detection algorithms to screen COVID-19-related misinformation. The algorithms detect duplications and close copies of misleading posts.
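The sketch below illustrates the general idea of similarity detection: compare each new post against posts that fact-checkers have already flagged and surface close matches for review. The TF-IDF representation, example posts and 0.8 threshold are assumptions for illustration, not a description of Facebook’s production systems.

```python
# Hedged sketch of similarity detection against posts already flagged by
# fact-checkers; near-duplicates of flagged posts are surfaced automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

flagged_posts = [
    "miracle cure kills the virus in 24 hours",
    "vaccines secretly contain tracking microchips",
]

vectorizer = TfidfVectorizer().fit(flagged_posts)
flagged_vectors = vectorizer.transform(flagged_posts)

def is_near_duplicate(new_post, threshold=0.8):
    """Return True if a new post closely resembles an already-flagged one."""
    similarity = cosine_similarity(vectorizer.transform([new_post]), flagged_vectors)
    return similarity.max() >= threshold

print(is_near_duplicate("miracle cure kills virus in 24 hours!"))  # True: a close copy
```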
Ultimately, social media companies could use a combination of deprioritizing engagement, partnering with news organizations, and AI and crowdsourced misinformation detection. These approaches are unlikely to work in isolation and will need to be designed to work together.
Coordinated actions facilitated by social media can disrupt society, from financial markets to politics. The technology platforms play an extraordinarily large role in shaping public opinion, which means they bear a responsibility to the public to govern themselves effectively.
Calls for government regulation of Big Tech are growing all over the world, including in the U.S., where a recent Gallup poll showed worsening attitudes toward technology companies and greater support for governmental regulation. Germany’s new laws on content moderation place greater responsibility on tech companies for the content shared on their platforms. A slew of regulations in Europe aimed at reducing the liability protections enjoyed by these platforms, and proposed regulations in the U.S. aimed at restructuring internet laws, will bring greater scrutiny to tech companies’ content moderation policies.
Some form of government regulation is likely in the U.S. Big Tech still has an opportunity to engage in responsible self-regulation – before the companies are compelled to act by lawmakers.
Anjana Susarla, Omura-Saxena Professor of Responsible AI, Michigan State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.