Last Week in Tech Law and Policy, Vol. 12: Privacy and the Right to be Forgotten

(by Andy Sayler, Student Technologist)

Last week, the tension between privacy rights and free expression rights in the digital realm was back in the news when the Italian Privacy Authority issued a ruling clarifying when search engines must remove links to content under the European Union’s “Right to be Forgotten” rules. The ruling confirmed that such requests must be balanced against the freedom of the press when the content in question is recent news that the public has a strong interest in accessing.

The “Right to be Forgotten”

The EU “Right to be Forgotten” is a concept dating back to 2012 under which individuals have the right to have their personal digital information erased from search engines and other metadata aggregators online. The “right” was given legal force in 2014, when the Court of Justice of the European Union (CJEU) found it to be valid and enforceable. The ruling applies only to the meta-content carried by search engines, not to the original content to which such engines link.

Over the last year, search engines such as Google and Bing have created processes through which European Union users can request that links to content about them be removed from search indexes. According to its regularly updated transparency report, Google has thus far received requests to remove over 850,000 URLs, of which about 40% have been granted. Organizations such as ChillingEffects.org and HiddenFromGoogle have also begun tracking such removal requests.

Questions Remain

The “right to be forgotten” raises a number of difficult questions regarding an individual’s privacy and ability to control their digital reputation versus the public’s right to know and the freedom of expression of the press, search engines, and individuals online. When does the public interest in accessing information outweigh an individual’s interest in privacy? To what extent can the “right to be forgotten” be used to suppress accurate and truthful information? How much privacy does such a ruling really afford, given that it allows the original content to remain in place?

Last week’s ruling starts to clarify the scope of this “right” and reaffirms that the right does not apply in cases where the public interest outweighs an individual’s privacy. Yet many questions still remain, perhaps the biggest of which is to what extent search engines must censor results outside of their EU-targeted sites.

Currently, Google only removes such listings from its EU-facing sites: e.g., google.co.uk, google.it, google.de, etc. The US (and often most popular) version of the search engine, google.com, remains unaffected. EU regulators have asserted that this must change and that Google must censor all of its sites, even those intended for non-EU audiences. Yet no such “right to be forgotten” exists under US law, and such a “right” is unlikely to stand up to the strong free speech protections afforded by the First Amendment. To what extent can the EU force US-based search engines to censor data that is protected under US law? The answers to such questions remain contested.

A Tool for Censorship?

Since the 2014 CJEU ruling, the “right to be forgotten” has been heavily criticized by a number of individuals and organizations on the grounds that it gives individuals too much power to censor legitimate speech that happens to be critical of their person or actions. The prominent UK newspaper The Guardian has noted that numerous legitimate news articles on its site have been suppressed by Google in response to “right to be forgotten” requests. The Washington Post has received a request to remove unfavorable concert reviews (a request that is arguably invalid, since the Washington Post is not a search engine and thus is not directly subject to the 2014 ruling). Jimmy Wales, co-founder of Wikipedia and former chairman of the Wikimedia Foundation, has criticized the ruling as unnecessary and dangerous.

These criticisms note that free speech limits related to slander and defamation are already well established in most countries, and that additional rulings targeted specifically at forcing search engines to remove otherwise legal content are unnecessary and overly limiting of free expression. They also note that such a right will be difficult, if not impossible, to enforce. The Internet is a global place, and the ruling of a single regulatory entity (e.g., the EU) within that space can never be fully applied to actors outside that entity’s jurisdiction. While large multi-national corporations such as Google and Microsoft can likely be forced to comply with such a ruling (at least domestically), smaller organizations with no business presence in the EU can easily skirt such requirements. And as larger search engines such as Google and Microsoft remove such listings, the market grows for smaller search engines to directly provide those results as a service.

All of this raises yet more questions. How far are countries such as those in the EU willing to go to enforce such rights? Will they be forced to start blocking large swaths of the Internet that refuse to comply with EU rulings, similar to what China does with the Great Firewall? Do such rulings bring legitimacy to the more totalitarian-oriented Internet censorship policies in place in countries around the world? Do attempts to enforce such a “right” accelerate the balkanization of the Internet into a series of multiple state-controlled Internets?

Such questions will need to be answered over the coming years. The stateless nature of the Internet, coupled with tensions between countries like the US that tend to favor free expression over privacy and countries like those in the EU that tend to favor privacy over free expression, makes answering them complicated. Yet these answers will have far-reaching consequences for the Internet, as well as for privacy and free speech rights, for many years to come.

Last Week in Tech Law & Policy, Vol. 8: Consumer Privacy Bill of Rights

(By Paul Garboczi, Student Attorney)

On Friday, the White House released a draft of the Consumer Privacy Bill of Rights Act of 2015. This Wall Street Journal article summarizes the bill fairly well. The bill essentially sets forth a set of industry best practices that the Federal Trade Commission would enforce on the private sector. Private sector firms would be encouraged to create privacy codes of conduct, and if they broke their own codes the FTC could take action (although the FTC would not be given rulemaking authority). The bill attempts to give consumers the right to access their information by requesting it from companies. However, companies could refuse such a request if it was “frivolous or vexatious.” The bill is unclear on who would decide if such requests were frivolous. It basically calls on companies to respect and protect consumer privacy without creating a robust enforcement mechanism for consumer privacy.

Since the draft was released on Friday, criticism of the bill has been swift. Consumer privacy advocates are denouncing it for not going far enough in protecting privacy. Opponents of top-down regulatory schemes are criticizing it for attempting a one-size-fits-all solution to a problem that requires a flexible approach, and for burdening American innovation. The FTC itself released a statement criticizing the bill for lacking “strong and enforceable protections” for consumer privacy. There is also a concern that the bill would preempt state laws, some of which provide stronger privacy protections for consumers.

Last Week in Tech Law & Policy, Vol. 5: Funding Privacy

(by Joseph de Raismes, Colorado Law 3L)

This week, I would like to look at internet privacy, how privacy tools are funded, and what the future of privacy should look like.

Last week, ProPublica ran Julia Angwin’s excellent profile of GnuPG’s lead developer Werner Koch. Koch wrote the free email encryption tool GnuPG in 1997 and has been keeping the project alive basically single-handedly ever since. In response to ProPublica’s profile, Koch received an outpouring of support in the form of private donations and grants.

Werner Koch’s situation drew the attention of cryptographer Matt Green, who questioned the entire framework of how we fund the long-term development of privacy tools.  In his post, Matt draws attention to the fact that the US government has been an extremely important funding source for key privacy tools, but questions the sustainability of the current framework for funding research and development in this area.

In light of the Snowden revelations, real name systems, perma-cookies, browser fingerprinting, and other sophisticated tracking measures, internet privacy seems more and more like a thing of the past. Is internet privacy a value that should be fostered (and funded) in a cohesive manner?

Last Week in Tech Law & Policy, Vol. 4: A Look at Health Technology

(by Allison N. Daley, Colorado Law 2L)

This week I want to focus on a specific area of tech law and policy: health care. With the advent of telemedicine as a way of providing health care at a distance, there is exciting potential for innovation; however, with this innovation come new challenges in law and policy.

As just one example, a new app, Harbinger, transmits communications from Emergency Medical Service (EMS) workers in an ambulance to hospitals in real time. The hope is that such technology can improve care by sending protected health information (PHI) such as driver’s licenses and insurance cards to hospitals for faster registration. The app even allows EMS workers to send pictures and videos of injuries or accident scenes for more rapid diagnosis and treatment.

With this great technology, however, privacy concerns abound. Because cell phones store data on the device itself, PHI is much more likely to fall into the wrong hands if a cell phone is lost or stolen. While the Health Insurance Portability and Accountability Act (HIPAA) does not have any official rules banning the use of cell phones, the HIPAA Privacy Rule requires health care providers to implement appropriate safeguards to reasonably protect health information.

In order to solve this problem, the Harbinger app promises:

[P]atient information is encrypted with today’s most advanced methods. The data is transported to our server with the industry standard for banks and credit cards, and is stored in an encrypted format.

While this sounds like it may satisfy HIPAA standards, patients and hospitals will likely still have concerns about this new technology. The founders, both Coloradoans, are currently negotiating with hospitals and we may see the system operating by the end of the year.
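For readers curious what the quoted claims might look like in practice, here is a minimal Python sketch of the two safeguards Harbinger describes: encrypting PHI at rest on the device and sending it over a TLS-protected connection so the server can store it encrypted as well. This is purely illustrative and not drawn from Harbinger’s actual code; the server URL and the key handling are hypothetical.

import json
import requests                          # HTTPS client; verifies TLS certificates by default
from cryptography.fernet import Fernet   # authenticated symmetric encryption

SERVER_URL = "https://harbinger.example.com/phi"   # hypothetical endpoint, for illustration only
key = Fernet.generate_key()   # a real app would keep this key in the device's hardware-backed keystore
fernet = Fernet(key)

phi = {"patient": "Jane Doe", "insurance_id": "123-45"}

# Encrypt before writing to local storage, so a lost or stolen phone exposes only ciphertext.
ciphertext = fernet.encrypt(json.dumps(phi).encode("utf-8"))
with open("phi_record.enc", "wb") as f:
    f.write(ciphertext)

# Send over HTTPS ("the industry standard for banks and credit cards"), forwarding the
# still-encrypted payload so the server can also store it in an encrypted format.
requests.post(SERVER_URL, data=ciphertext, timeout=10)

As the HIPAA Privacy Rule’s “appropriate safeguards” language suggests, the choice of cipher is only part of the picture; how the keys are stored and who can decrypt the data on the server side matter at least as much.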

For more information, check out Harbinger’s website.

See you next week!


Will the FCC Let You Retain Your Privacy and the Cybersecurity of Your Information When You Text 911?

(by Spencer Rubin and Trip Nistico, Colorado Law 2Ls, and Vickie Stubbs, ATLAS Institute)

Two weeks ago, the TLPC submitted reply comments on the Third Further Notice of Proposed Rulemaking (FNPRM) in the Federal Communications Commission’s Text-to-911 (TT911) docket. Among the many areas in which the FCC sought comment on rules for text messages to 911, we focused on the privacy and cybersecurity implications of sharing enhanced location information via text message to emergency responders.
