Last Week in Tech Law and Policy, Vol. 10: Digital Technologies and Innovation in the Distribution of Content

(by Sam Moodie, Student Attorney)

This past Thursday, Colorado Law’s Silicon Flatirons Center hosted a conference focusing on the current state of innovation in the creation and distribution of content.  The conference brought together well-known artists in music, film, television, and photography, as well as major players in content distribution, to discuss, in part, how digital technologies are either enhancing or challenging traditional structures of creation and distribution.


Music has long been the stage to exemplify how digital technologies can frustrate and disrupt an entire content industry.  Some have argued that the rise of music piracy, peer-to-peer sharing, and pay-per-track sales has drastically reduced profits for music executives, songwriters, and performing artists alike.  The new wave in music distribution is streaming—a technology made possible through licensing and advertising revenues.  However, artists claim that this model drastically under-compensates them for their work, to the point where an artist’s song earning a million streams may not even bring in $100 in profit.  In response, some popular artists, like Taylor Swift, have removed their work from Spotify, one of the most successful music streaming sites.

Many now question whether streaming has fundamentally shaken the music industry at its core, or if the traditional business structure simply needs to adapt slightly in order to remain relevant. Some take the perspective that users need to be retrained on the value of content, and how to interact with it.


Digital technology in the television industry has quickly stepped in to answer users’ demands to control their content.  The most notable means through which this has happened are subscription networks like Netflix and Amazon Prime.  This distribution model arguably assists in the democratization of television because producers can work directly with distribution companies instead of working within the traditional broadcast television structure.

Similarly, the interfaces used by these entities provide a wealth of content and allow users to interact and search for content on their own terms.  The subscription model allows for a wider array of content, often much edgier than can be found on mainstream television, and at a vastly lower price compared to cable subscriptions.

This leads to the question of whether cable and broadcast are still relevant, and if so, whether they can remain relevant in the future.  Some consider the current price of cable subscriptions to be unsustainable given the success and popularity of online streaming television.

Some see traditional and digital entities as being able to work together.  As noted at the conference, cable providers and producers see themselves as the leaders in providing up-to-date and new content.  Pairing with entities like Netflix, which release past seasons of television shows all at once and allow viewers to binge watch and catch up on past content, may be a perfect marriage for complete access to content.  However, with Netflix now creating its own series, how long will cable have a relevant role in this relationship?

Regardless, it is increasingly clear that these technologies are giving a considerable amount of leverage to users. We may have reached the point where the balance has shifted, and industry executives are losing more and more control over their content.

Last Week in Tech Law and Policy, Vol. 9: International Hacking

(by Jeff Ward-Bailey, student technologist)

Government surveillance has been a frequent news item ever since the summer of 2013, when Edward Snowden leaked his first set of documents to journalists, explaining the software tools the NSA uses to monitor communications in the United States and abroad. But for many years, governments have employed shadowy means to gather intelligence about their own citizens and those of other countries, and have even attempted to disrupt the operations of governments perceived to be hostile to their interests.

In 2008 a sophisticated piece of malware called “Regin” began spying on governments and individuals in Russia, Saudi Arabia, Ireland, and a handful of other countries. Security researchers didn’t notice Regin until 2014, but the software hadn’t done any damage to infected systems: it had simply run in the background, watching its targets. Researchers initially surmised that Regin had been written by the US, Israel, or the UK to gather intelligence on foreign governments, and further investigation suggested that the British GCHQ spy agency had written the malware.

In 2010 the Stuxnet computer worm was discovered, which targeted industrial controllers in Iran and caused centrifuges used for the enrichment of nuclear material to tear themselves apart. It’s still not known for certain who wrote Stuxnet, but in 2011 Wired reported that it was “believed to have been created by the United States,” and in 2012 The New York Times reported that it was the product of a joint US-Israeli intelligence operation.

Earlier this year security researchers uncovered a suite of surveillance platforms nicknamed EquationLaser, EquationDrug, and GrayFish. Circumstantial evidence suggests that the tools may be connected with the NSA (for example, the tools in the platforms match the names of tools in an NSA spy tool catalog leaked in 2013). Five Iranian companies that were previously infected by Stuxnet were also infected by the “Equation Group” tools.

Few would dispute that when a government intentionally infects another government’s systems with malware in an effort to spy on them, that practice is, at the very least, in an ethical grey area. But is such cyberspying (some would call it cyberwarfare, especially when the destruction of property is involved) necessary to protect against attacks? Does the potential for mitigating harm outweigh the ethical implications of spying? And does a government’s mandate to protect the safety of its citizens justify the practice of hacking or spying on other governments?

Last Week in Tech Law & Policy, Vol. 8: Consumer Privacy Bill of Rights

(By Paul Garboczi, Student Attorney)

On Friday, the White House released a draft of the Consumer Privacy Bill of Rights Act of 2015. This Wall Street Journal article summarizes the bill fairly well. The bill essentially sets forth a set of industry best practices that the Federal Trade Commission would enforce on the private sector. Private sector firms would be encouraged to create privacy codes of conduct, and if they broke their own codes the FTC could take action (although the FTC would not be given rulemaking authority). The bill attempts to give consumers the right to access their information by requesting it from companies. However, companies could refuse such a request if it was “frivolous or vexatious.” The bill is unclear on who would decide if such requests were frivolous. It basically calls on companies to respect and protect consumer privacy without creating a robust enforcement mechanism for consumer privacy.

Since the draft was released on Friday, criticism of the bill has been swift. Consumer privacy advocates are denouncing it for not going far enough in protecting privacy. Opponents of top-down regulatory schemes are criticizing it for attempting a one-size-fits-all solution to a problem that requires a flexible approach, and for burdening American innovation. The FTC itself released a statement criticizing the bill for lacking “strong and enforceable protections” for consumer privacy. There is also a concern that the bill would preempt state laws, some of which provide stronger privacy protections for consumers.