While last week’s blog post focused on how companies have used data collected from users to do research, this week’s will focus on how large organizations succeed or fail to protect data from hackers as well as government intrusion.
Following on last week’s post, another dark cloud continues to loom over the Internet: malware. Malware is somewhat two-faced. On one side, hackers use malware to gain access to personal information. On the other side, the government uses malware to track down criminals and terrorists. But what happens when the line separating the two starts to blur? This post will explore the “good” and “bad” sides of deceptive delivery of malware.
We’re back for Season 2 of our ongoing weekly recap of current tech policy news. As always, the TLPC Director (that’s me—Blake Reid) takes on the first blog post of the semester before the TLPC’s student attorneys take over for the duration. As summer comes to a close in Boulder, this post explores some of the dark clouds that have circled over the Internet in recent weeks.
Last week, the TLPC testified at several hearings (PDF) in favor of our proposed exemptions to Section 1201 of the Digital Millennium Copyright Act. We’ve linked below to various pictures and coverage of the hearings. Congratulations to the many TLPC students who took part!
Politico Pro coverage of security research hearing (1, 2)
As we discussed in our previous blog post on the subject, this project seeks an exemption to Section 1201 of the Digital Millennium Copyright Act’s anti-circumvention provisions for good-faith security research. Our Reply Comment responds to a variety of issues raised in the second round of the proceeding by public commenters, including manufacturers and trade groups in the automobile, medical device, software, and related industries. In our Reply Comment, we focus on how the objectors’ comments are textbook examples of how Section 1201 chills good-faith security research, and push back against the suggestion that an exemption should include a mandatory disclosure standard.
Government surveillance has been a frequent news item ever since the summer of 2013, when Edward Snowden leaked his first set of documents to journalists, explaining the software tools the NSA uses to monitor communications in the United States and abroad. But governments have employed shadowy means to gather intelligence about their own citizens and those of other countries, and have even attempted to disrupt the operations of governments perceived to be hostile to their interests, for many years.
In 2008 a sophisticated piece of malware called “Regin” began spying on governments and individuals in Russia, Saudi Arabia, Ireland, and a handful of other countries. Security researchers didn’t notice Regin until 2014, but the software hadn’t done any damage to infected systems: it had simply run in the background, watching its targets. Researchers initially surmised that Regin had been written by the US, Israel, or the UK to gather intelligence on foreign governments, and further investigation suggested that the British GCHQ spy agency had written the malware.
In 2010 the Stuxnet computer worm was discovered, which targeted industrial controllers in Iran and caused centrifuges used for the enrichment of nuclear material to tear themselves apart. It’s still not known for certain who wrote Stuxnet, but in 2011 Wired reported that it was “believed to have been created by the United States,” and in 2012 The New York Times reported that it was the product of a joint US-Israeli intelligence operation.
Earlier this year security researchers uncovered a suite of surveillance platforms nicknamed EquationLaser, EquationDrug, and GrayFish. Circumstantial evidence suggests that the tools may be connected with the NSA (for example, the tools in the platforms match the names of tools in an NSA spy tool catalog leaked in 2013). Five Iranian companies who were previously infected by Stuxnet were also infected by the “Equation Group” tools.
Few would dispute that when a government intentionally infects another government’s systems with malware in an effort to spy on it, the practice is, at the very least, in an ethical grey area. But is such cyberspying (some would call it cyberwarfare, especially when the destruction of property is involved) necessary to protect against attacks? Does the potential for mitigating harm outweigh the ethical implications of spying? And does a government’s mandate to protect the safety of its citizens justify the practice of hacking or spying on other governments?
This week, I would like to look at internet privacy, how privacy tools are funded, and what the future of privacy should look like.
Last week, ProPublica ran Julia Angwin’s excellent profile of GnuPG’s lead developer Werner Koch. Koch wrote the free email encryption tool GnuPG in 1997, and has been keeping the project alive basically single-handedly ever since. In response to ProPublica’s profile, Koch received an outpouring of support in the form of private donations and grants.
Professor Green is an Assistant Research Professor in the Information Security Institute at Johns Hopkins University and needs to be able to circumvent various access controls on software and devices in the process of conducting good-faith security research. Such circumvention is chilled by Section 1201 of the Digital Millennium Copyright Act (DMCA). In our long comment, we argue for an exemption to Section 1201’s anti-circumvention provisions and show that preventing circumvention of access controls is chilling good-faith security research and creating other adverse effects. Our short comments reiterate this point with respect to specific types of security research and urge the Copyright Office to grant a broad exemption to the Section 1201 anti-circumvention rules for all forms of good-faith security research.
Next up in the proceeding is the second round of public comments, filed by those who oppose each exemption. The objection comment deadline is March 27, 2015. Following that, there will be a third round of public comments in which supporters can respond to the objectors’ comments. This round closes on May 1, 2015, after which the Copyright Office will begin the internal process of making its decisions.
This week I want to focus on a specific area of tech law and policy: health care. With the advent of telemedicine as a way of providing health care at a distance, there is exciting potential for innovation. With that innovation, however, come new challenges in law and policy.
As just one example, there is a new app, Harbinger, that transmits communications from Emergency Medical Service (EMS) workers in an ambulance to hospitals in real time. The hope is that such technology can improve care by sending protected health information (PHI) such as driver’s licenses and insurance cards to hospitals for faster registration. The app even allows EMS workers to send pictures and videos of injuries or accident scenes for more rapid diagnosis and treatment.
With this great technology, however, privacy concerns abound. Because cell phones store data on the device itself, PHI is much more likely to fall into the wrong hands if a cell phone is lost or stolen. While the Health Insurance Portability and Accountability Act (HIPAA) does not have any official rules banning the use of cell phones, the HIPAA Privacy Rule requires health care providers to implement appropriate safeguards to reasonably protect health information.
To address these concerns, the Harbinger app promises:
[P]atient information is encrypted with today’s most advanced methods. The data is transported to our server with the industry standard for banks and credit cards, and is stored in an encrypted format.
While this sounds like it may satisfy HIPAA standards, patients and hospitals will likely still have concerns about this new technology. The founders, both Coloradans, are currently negotiating with hospitals, and we may see the system operating by the end of the year.
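Harbinger hasn’t published its technical details, but the two claims in the quote map onto familiar practices: the “industry standard for banks and credit cards” for data in transit is TLS, and storage “in an encrypted format” typically involves deriving an encryption key from a protected secret. As a rough illustration only (every name here is our assumption, not Harbinger’s actual design), here is what those two pieces look like using Python’s standard library:

```python
import hashlib
import os
import ssl

# In transit: the "industry standard for banks and credit cards" is TLS.
# A client configured this way refuses unverified or outdated connections.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # certificates are checked
context.minimum_version = ssl.TLSVersion.TLSv1_2  # disallow older protocols

# At rest: before storing PHI, derive a per-record key from a master secret
# and a random salt. (The derived key would feed a cipher such as AES;
# the cipher step itself is omitted in this sketch.)
master_secret = os.urandom(32)  # in practice, loaded from a key vault
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", master_secret, salt, 200_000)
print(len(key))  # 32 bytes, suitable for a 256-bit cipher key
```

The point of the sketch is that “encrypted in transit and at rest” is two separate engineering obligations, which is also how the HIPAA Security Rule’s technical safeguards treat transmission security and storage: satisfying one does not satisfy the other.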
For more information, check out Harbinger’s website.