Last Week in Tech Policy #51: Internet Platforms and Violent Content

(by Kristine Roach, Colorado Law 2L)

After the Unite the Right rally and associated violence in Charlottesville, Virginia on the weekend of August 11th, internet platforms and domain name providers responded by taking down content from The Daily Stormer, a neo-Nazi website that had encouraged some of the weekend’s events. The takedowns reignited a complicated debate over the responsibility online platforms bear for hate speech, harassment, and violence, the concentration of power online, and free speech.

GoDaddy, Google, Reddit, Twitter, and Facebook all removed The Daily Stormer from their platforms. PayPal stated that it was taking measures to ensure its service was not used to fund “activities that promote hate, violence or racial intolerance.”

The Daily Stormer sought a new internet home. The site now has a social media presence on VK, a Russian social media site. The site’s operators also briefly, and unsuccessfully, attempted to register domain names in Russia, Albania, and possibly China after jumping from GoDaddy to Google, in what Gizmodo reporters called “Daily Stormer Whack-a-mole.”

Many platforms cited violations of their Terms of Service as the rationale for removing The Daily Stormer from their services. For example, GoDaddy’s Terms of Service read:

General Rules of Conduct

You acknowledge and agree that…

You will not use this Site or the Services in a manner (as determined by GoDaddy in its sole and absolute discretion) that:

Is illegal, or promotes or encourages illegal activity;

Promotes, encourages or engages in terrorism, violence against people, animals, or property;


Some courts have approved of this justification in other contexts. For example, the Northern District of California upheld YouTube’s removal of a video for violating its Terms of Service as a valid exercise of YouTube’s power under the contract created by its Terms of Service, establishing that under California law such a removal does not breach the implied covenant of good faith and fair dealing in a Terms of Service contract. A later case dealing with nearly identical facts reiterated the rule.

While taking down and limiting access to websites that incite violence may be ethically or morally justifiable, it is not necessarily legally required. Under Section 230 of the Communications Decency Act:

Protection for “Good Samaritan” blocking and screening of offensive material.

(1)  Treatment of publisher or speaker. No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2)  Civil liability. No provider or user of an interactive computer service shall be held liable on account of–

(A)  any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B)  any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1) [subparagraph (A)].

In 2016, the Northern District of California dismissed a case filed against Twitter by the families of two Americans killed in Jordan by an ISIS-inspired lone-wolf terrorist; the families alleged that Twitter supported terrorism by allowing ISIS members to sign up for accounts. The case has been appealed and is now in mediation, but the dismissal establishes a precedent for platforms to cite when they make choices about what kind of content to censor.

How will platforms continue to respond to incitement of violence online? Will they censor uniformly? For example, critics of President Trump are calling on Twitter to suspend his account over his recent Tweet threatening North Korea with military action if it continues firing missiles in the region. And how will judicial and legislative systems respond?