Credibility Crisis: How to Save Social Media From Itself

The crisis of content credibility on social media platforms can be solved through self-regulation and trust-building mechanisms.

 

Should social media owners be liable for the content published on their sites, or should they benefit from the privilege of neutrality given to them at the dawn of the internet by the Communications Decency Act? Simple answers of “yes” or “no” create much discord and polarization. At the same time, extremist content of all kinds does raise legitimate concerns. In principle, these challenges should be resolved in the spirit of the First Amendment, even when the judge is a private entity. To achieve this, we need to focus on the rights, obligations, and opportunities that social media platforms give their users. Users are the ultimate beneficiaries of social media content, whether in terms of freedom of expression or intellectual property, and they should be in charge of their own words. Focusing on legal constraints will always leave behind the people and two fundamental human values: freedom and opportunity.

Self-Regulation and Trust on Social Media

 

According to Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Under this framing, computer networks, and social media specifically, are like road networks: open to anyone, yet not liable for the traffic accidents that happen on them.

Both former President Donald Trump and President Joe Biden blamed the “open but non-liable” principle for the proliferation of problematic content and would like it repealed. Yet the same principle might also provide the foundation for solving the content credibility crisis. If content shared on social media falls short of expectations, we need to bring in the people themselves to judge its quality and value. This already happens through the comments feature adopted by all major social media platforms. But social media suffers from one major design failure: its focus on “like” and “share” buttons and the attention-seeking behavior they instigate. “Liking” instills an emotional response in users, leading to exaggeration, a caustic tone, personal attacks, and even the invention of arguments and facts. This may be remedied, at least partially, by a technical measure that avoids regulation and its impact on free speech: we can prune negative information-sharing behaviors by replacing the “like” function with a “trust” button, giving social media a self-regulation mechanism.

Furthermore, a self-regulated social media platform that explicitly embraces trust mechanisms would return us to the initial vision of the internet: trusting people to trust each other and to judge for themselves the validity of the content they see. But trust, like financial credit, should be earned, not given. Moreover, social media networks need a social trust mechanism that works independently of top-down control.

We need a new incentive mechanism that builds trust while keeping users engaged with social media. The main motivator for using social media is self-expression, buttressed by the emotional satisfaction of “liking” and being “liked” or approvingly reposted. That emotional reward, of posts and comments being liked or commented on, is what lures people to social media and keeps them engaged.

To encourage users to consider the impact of their online behaviors, they should be given the information needed to decide whether or not to trust the content they are exposed to. In other words, social media should self-regulate, and avoid top-down regulation, by inviting people to trust rather than “like” content. This would slow down reaction time, giving users the opportunity to make up their minds about what and whom to trust.

The EUNOMIA consortium, funded by the European Union in 2020 (with our participation), created a “trust” toolkit that can be plugged into most social media experiences, especially open-source ones such as Mastodon. The toolkit prompts users to consider the quality of content before “liking” or sharing it. Machine learning evaluation mechanisms relying on language representation models provide information about the origin of a post, its emotional content, its degree of subjectivity, and other information relevant to assessing trustworthiness. The trust mechanism acts as a scaffold, protecting users from making rash decisions. Each action on a site, whether posting new content, sharing existing content, or commenting, is preceded by information cues that scaffold the action. These cues let users know if the content they are about to post is overly subjective or has the potential to incite or spread unverified information. If users are about to re-share existing content, they are informed about the original creator’s track record, for instance how long they have been on the site and their follower count, and especially about the journey of the content: whether or not it has changed since its original creation.
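To make the scaffolding idea concrete, here is a minimal Python sketch of how pre-action trust cues might be assembled. It is not the EUNOMIA implementation: it swaps the consortium’s language representation models for the off-the-shelf TextBlob sentiment and subjectivity scorer, and the TrustCues fields, thresholds, and warning texts are hypothetical illustrations.

```python
# Illustrative sketch only, not the EUNOMIA codebase. Uses TextBlob's
# lexicon-based scorer in place of language representation models.
from dataclasses import dataclass
from textblob import TextBlob


@dataclass
class TrustCues:
    subjectivity: float            # 0.0 = objective, 1.0 = highly subjective
    polarity: float                # -1.0 = negative tone, +1.0 = positive tone
    author_age_days: int           # how long the original poster has been on the site
    follower_count: int
    modified_since_original: bool  # has the content changed along its journey?

    def warnings(self) -> list[str]:
        """Translate raw scores and provenance into nudges shown before acting."""
        notes = []
        if self.subjectivity > 0.7:
            notes.append("This post reads as highly subjective.")
        if self.polarity < -0.5:
            notes.append("This post carries a strongly negative tone.")
        if self.modified_since_original:
            notes.append("The content has changed since it was first created.")
        if self.author_age_days < 30:
            notes.append("The original account is less than a month old.")
        return notes


def assess(text: str, author_age_days: int, follower_count: int,
           modified_since_original: bool) -> TrustCues:
    """Score a post's language and bundle it with provenance metadata."""
    sentiment = TextBlob(text).sentiment  # polarity and subjectivity scores
    return TrustCues(
        subjectivity=sentiment.subjectivity,
        polarity=sentiment.polarity,
        author_age_days=author_age_days,
        follower_count=follower_count,
        modified_since_original=modified_since_original,
    )


if __name__ == "__main__":
    cues = assess("This is the WORST take I have ever seen, obviously fake!",
                  author_age_days=12, follower_count=40,
                  modified_since_original=True)
    for note in cues.warnings():
        print("NOTE:", note)
```

The point of the sketch is the design choice, not the particular scorer: the cues are computed and shown to the user before the post or re-share goes out, slowing the reaction down rather than blocking it.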

EUNOMIA, however, provides more than just a toolkit. It has partnered with Mastodon, the most successful European decentralized social media service, which offers any organization the ability to create its own social media experience. When enhanced with the EUNOMIA toolkit, Mastodon looks, feels, and works like Twitter, with three core differences. First, it replaces the simple “Like” button with a “Trust” decision-support mechanism. Second, any organization can install its own EUNOMIA+Mastodon server. This gives organizations absolute control over who joins or stays on their server and over how the platform is supported financially: some can charge a fee, some can use advertising, and some can raise money philanthropically and offer free services as a non-profit. Finally, and probably most importantly, each server can link up with any other, allowing users to “trust” each other’s content, share it, or generate new trustworthy content across servers, as the sketch below illustrates. This feature overcomes the network effect that traditional social media platforms benefit from: the more users a platform has, the more valuable it becomes, and the less incentive users have to switch to another platform.
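As a purely illustrative sketch of the cross-server “trust” idea, and not Mastodon’s or EUNOMIA’s actual federation protocol, the following Python fragment shows how trust endorsements from accounts on independent servers might be aggregated for a single piece of content. The TrustVote record, the server names, and the tallying rule are all hypothetical.

```python
# Illustrative data model only: how "trust" votes from users on different
# federated servers could be tallied for one piece of shared content.
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class TrustVote:
    content_id: str   # identifier of the post, stable across servers
    user: str         # e.g. "alice"
    home_server: str  # the server the account belongs to, e.g. "newsroom.example"
    trusts: bool      # True = "Trust", False = "Don't Trust"


def tally(votes: list[TrustVote]) -> dict[str, dict[str, int]]:
    """Aggregate votes per content item, counting each federated account once."""
    seen: set[tuple[str, str, str]] = set()
    totals: dict[str, dict[str, int]] = defaultdict(lambda: {"trust": 0, "distrust": 0})
    for v in votes:
        key = (v.content_id, v.user, v.home_server)
        if key in seen:  # ignore duplicate votes from the same account
            continue
        seen.add(key)
        totals[v.content_id]["trust" if v.trusts else "distrust"] += 1
    return dict(totals)


votes = [
    TrustVote("post-42", "alice", "newsroom.example", True),
    TrustVote("post-42", "bob", "university.example", True),
    TrustVote("post-42", "carol", "nonprofit.example", False),
]
print(tally(votes))  # {'post-42': {'trust': 2, 'distrust': 1}}
```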

We have created a demonstration platform at TrustFirst.net that illustrates these ideas directly and effectively. The interface is dominated by a “Trust / Don’t Trust” button, supported by information cues and nudges that help users make decisions. Finally, it makes everyone their brother or sister’s keeper. In effect, we have created a method for trusting content that eliminates the heavy hand of corporations and governments. Who said you could not have your cake and eat it too?

Sorin Adam Matei is Associate Dean of Research, FORCES Initiative Director, and Senior Fellow at the Krach Institute for Tech Diplomacy, Purdue University.

Charalampos Patrikakis is a Professor of Electrical Engineering at the University of West Attica, Greece.

George Loukas is a Professor of Cyber Security and Head of the Internet of Things and Security Centre at the University of Greenwich.

The views expressed in this article reflect the personal opinions of the authors.
