Last Week in Tech Policy #65: Fake News, Real Concerns

(by John Schoppert, Colorado Law 3L)

On Friday, February 16th, Special Counsel Robert Mueller announced the indictment of 13 Russian nationals on charges of conspiracy to defraud the United States. The announcement is the latest development in Mueller’s investigation into potential collusion between the Kremlin and the Trump campaign during the 2016 presidential election. More concretely, it provides further evidence, consistent with the near-unanimous consensus among American intelligence agencies, that Russian operatives played a critical role in disrupting the 2016 election.

The indictments track the work of a so-called “troll factory,” the Internet Research Agency, located in St. Petersburg, which designed and deployed divisive content across social media platforms to encourage coordination within extremist groups online. More specifically, Russian operatives stole the identities of American citizens, posed as political activists, created posts affiliated with extreme ideologies, and paid individuals to organize local protests and rallies. While observers debate whether the Russians pushed for any one candidate over another, rather than simply creating chaos more generally, internal documents suggest that the disruptive efforts were aimed at supporting the campaigns of Donald Trump and Bernie Sanders and undermining that of Hillary Clinton.

While the recent indictments rightfully occupy a space in partisan political debates, it is important to step back and reflect on what they mean for those of us simply trying to operate in the digital age. An overwhelming share of the content produced by this troll factory appeared on Facebook, America’s most popular social networking platform. According to a 2016 poll, 68% of all American adults use Facebook, and 76% of American Facebook users visit the site daily. Furthermore, Facebook estimates that some 126 million Americans viewed Russian-created content during the election season. The sheer number of users who unwittingly consumed fabricated content is alarming, and it raises an important question: how do we solve the problem of fake news, and who should bear the burden of solving it?

Perhaps because of the sheer breadth of its audience, many believe that Facebook should bear some of the burden of fixing the fake news problem, at least insofar as its platform contributed to the problem’s growth. Facebook has been reluctant to acknowledge the role it played in distributing Russian misinformation, and it is important to note that Facebook has no legal responsibility to prevent the dissemination of fake news: under Section 230 of the Communications Decency Act, online service providers are not treated as publishers or speakers of user-generated content. Yet even without a legal obligation to address the issue, Facebook may end up doing much more to solve the problem than it had originally predicted.

Amid increasing public and private pressure, Facebook, Twitter, Google, and other tech companies appear to be taking steps to combat the fake news problem head-on. Through the implementation of “trust indicators,” participating websites will attest to each online article’s authenticity by describing how the story was built, its relevant reporting history, and the media outlet’s standards of conduct. Although large-scale implementation is still on the horizon for companies like Facebook, Twitter, and Google, online publications like The Economist, The Washington Post, and Trinity Mirror have already deployed such trust indicators and are hopeful that they will help restore journalistic integrity.
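To make the idea concrete, here is a minimal sketch of what a trust indicator might look like if published as schema.org-style structured metadata embedded in an article page. The field values and URLs below are illustrative assumptions rather than the Trust Project’s actual markup, though schema.org does define properties such as ethicsPolicy, correctionsPolicy, and backstory.

```python
import json

# Hypothetical trust-indicator metadata for one article, loosely modeled on
# schema.org's NewsArticle vocabulary; real Trust Project markup may differ.
trust_indicators = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "publisher": {
        "@type": "NewsMediaOrganization",
        "name": "Example Outlet",
        # Links to the outlet's standards of conduct and corrections policy.
        "ethicsPolicy": "https://example.com/standards-of-conduct",
        "correctionsPolicy": "https://example.com/corrections",
    },
    "author": {
        "@type": "Person",
        "name": "Jane Reporter",
        # Reporting history: a page where readers can inspect prior work.
        "sameAs": "https://example.com/staff/jane-reporter",
    },
    # How the story was built: sources, documents, on-the-ground reporting.
    "backstory": "Based on interviews with three election officials and public court filings.",
}

# A platform or browser extension could parse these fields and surface them to readers.
print(json.dumps(trust_indicators, indent=2))
```

In practice, such metadata would sit inside the article’s HTML as JSON-LD, where a platform could read it when deciding how to label or rank the story.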

Another possible fix lies in a piece of legislation called the “Honest Ads Act,” which would require digital platforms with more than 50 million monthly viewers to maintain a database of political ads purchased by any person or group that spends more than $500. The resulting public file would include a copy of the ad, a description of the target audience, the number of views it generated, the date and time it ran, its price, and contact information for the purchaser. The bill aims to treat digital platforms similarly to television and radio stations, both of which have long been required to disclose the purchasers of the ads aired in their broadcasts.
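For illustration, here is a minimal sketch of what one record in such a public file might contain, based on the disclosures listed above. The type name, field names, and threshold helper are hypothetical, not statutory text.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PoliticalAdRecord:
    """One hypothetical entry in the public ad file described above."""
    ad_copy: str              # a copy of the advertisement itself
    target_audience: str      # description of the audience targeted
    views: int                # number of views the ad generated
    ran_at: datetime          # date and time the ad ran
    price_usd: float          # rate charged for the ad
    purchaser_contact: str    # contact information for the purchaser

def disclosure_required(monthly_viewers: int, purchaser_spend_usd: float) -> bool:
    """Rough reading of the bill's two thresholds: platform size and purchaser spend."""
    return monthly_viewers > 50_000_000 and purchaser_spend_usd > 500

# Example: a platform with 120M monthly viewers logging a $1,200 ad buy.
if disclosure_required(120_000_000, 1_200):
    record = PoliticalAdRecord(
        ad_copy="Vote for Proposition 12 on November 6.",
        target_audience="Adults 18-49 in Colorado interested in state politics",
        views=48_213,
        ran_at=datetime(2018, 2, 12, 9, 30),
        price_usd=1_200.0,
        purchaser_contact="ads@example-pac.org",
    )
```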

Another possible solution is to crowdsource the verification of articles and publications to third-party reviewers. Presumably, this would work just how it sounds: before publication, an article would need to withstand the scrutiny of an actual human whose job it is to review articles for accuracy and integrity. Such a design, however, flirts with the line between true freedom of expression and the power of digital platforms to control the content on their websites.

These represent just a few possible solutions to the fake news problem, any number of which might succeed or fail. And while much in the Russia probe remains to be uncovered, there are a few things we know for certain. We know that the Russian-sponsored operatives were able to pose as American citizens for months without any real detection or suspicion. We know that those individuals were able to seamlessly coordinate and carry out plans of violence and chaos on American soil, remotely. And finally, we know that this is not the first time Russians have deployed propaganda attacks on foreign sovereigns to interfere with democratic processes, and given the indisputable success of their most recent campaign, it likely won’t be the last time they aim such attacks at the United States.