Cross-cutting Political and Regulatory Trends

This category covers cross-cutting political and regulatory trends including:

  • Migration towards new intellectual property, copyright and patent regimes which accommodate technological innovation and new social patterns of consumption whilst supporting creativity and economic sustainability in both the developed and developing world.
  • Greater transparency, access to public sector data and a growing momentum behind open government initiatives designed to empower citizens, reduce corruption and strengthen governance through new technologies.
  • An increased appetite and capacity for certain governments to monitor their citizens’ activities and control/limit the information they can access, assisted by progressively sophisticated approaches including the bulk monitoring of communications data across multiple platforms.
  • The challenges of regulating a global borderless Internet at a supranational level whilst accommodating overlapping and competing national legal jurisdictions and frameworks will continue.


Striking a new balance - the reform of intellectual property and copyright regimes

TREND: Migration towards new intellectual property, copyright and patent regimes which accommodate technological innovation and new social patterns of consumption whilst supporting creativity and economic sustainability in both the developed and developing world.

The effective implementation of copyright and IPR regimes has always been vulnerable to disruption from emerging technologies, from photocopiers to cassette and VHS tapes (Ars Technica – 100 Years of Big Content Fearing Technology). As motion picture lobbyist Jack Valenti testified to the US Congress in 1982 on the looming dangers of the video cassette recorder (1982 Congressional Hearing on the Home Recording of Copyrighted Works):

We are going to bleed and bleed and haemorrhage, unless this Congress at least protects one industry that is able to retrieve a surplus balance of trade and whose total future depends on its protection from the savagery and the ravages of this machine.

Perhaps unsurprisingly, the arrival of an increasingly global, hyper-connected Internet, combined with new technologies which support the rapid identification, replication and transmission of all forms of digital expression on an unprecedented scale, has led to what many describe as a copyright crisis (Journal of the Copyright Society of the USA, page 166).

In his 2009 report to the Parliamentary Assembly of the Council of Europe on the future of copyright in Europe, Christophe Geiger (Director of the Centre for Intellectual Property Studies at Strasbourg University) told the Committee for Science, Education & Technology that copyright is facing a crisis of legitimacy (see page 19). Earlier in 2009, the Interim Digital Britain Report, commissioned by then Prime Minister Gordon Brown, called for a copyright framework which is effective and enforceable, whilst identifying a critical disconnect between the current rules and emerging socially acceptable behaviour facilitated by technology (see page 39).

Others have also cited the expanding propensity of copyright and IPR regimes to be associated with restricting access to information and increasingly criminalising consumers (Neil Weinstock Netanel, Copyright’s Paradox, 2008, page 8). 

In addition to these issues, there is a growing concern that existing copyright and IPR regimes have the potential to increasingly act as a brake on economic competitiveness and growth, particularly in relation to exploiting innovative new business models, services and products fuelled by digital technologies and the Internet’s unique participatory culture. In November 2010 the UK Prime Minister, David Cameron, commissioned a Review of Intellectual Property and Growth, led by Professor Ian Hargreaves, because the government had diagnosed “the risk that the current intellectual property framework might not be sufficiently well designed to promote innovation and growth in the UK economy.” (Intellectual Property Review 2011, page 1).

In his final report, published in May 2011, Professor Hargreaves responded to what he called the Prime Minister’s “exam question” by identifying that “Copyright, once the exclusive concern of authors and their publishers, is today preventing medical researchers studying data and text in pursuit of new treatments. Copying has become basic to numerous industrial processes, as well as to a burgeoning service economy based upon the Internet. The UK cannot afford to let a legal framework designed around artists impede vigorous participation in these emerging business sectors.” (Ibid)

Of course, part of the problem is that for any regulatory mechanism to be effective it needs to be capable of adapting faster than the system of actors and behaviours that it seeks to regulate. Given the accelerated pace of social and commercial behavioural change driven by hyper-connected digital technologies (which themselves continue to evolve rapidly), it is hard to envisage any system capable of universal control, or a legal regime which maps precisely onto lived experience (Michael J. Madison, University of Pittsburgh School of Law research paper, 2010, page 352).

Yet if we conclude that the current design of copyright and IPR regimes is on a collision course with current social norms and evolving new business models in a digital age, then a question emerges: what would be the key characteristics of a reformed copyright/IPR system? In February 2011, WIPO Director General Francis Gurry called for an infrastructure which permitted simplified global licensing, and warned that the current complex copyright system risked losing public support if it could not be made more accessible and intelligible (see 2011 WIPO press release).

Two months later, one of the central recommendations emerging from the 2011 Hargreaves review of intellectual property (see page 30) was for the UK Government to establish an automated online Digital Copyright Exchange, which would ultimately operate so that rights licensing could become a one-click process (in the same way that the administration of the Internet’s Domain Name System supports machine-to-machine communication to connect users to a website within a few seconds). This would involve government acting as a convenor for other stakeholders (including the creative industries) to establish a network of interoperable databases to support a common platform for licensing transactions. Similarly, the Digital Agenda for Europe, one of the Flagship Initiatives of the European Union’s Europe 2020 Growth Strategy published in 2010, included a commitment to simplifying pan-European copyright clearance, management and cross-border licensing for online works (Digital Agenda Action 1). In both cases policymakers hope that, if successfully implemented, these approaches would increase the transparency, contestability and efficiency of digital content markets, reduce licensing transaction costs, facilitate dispute resolution and generate a greater range of quality, affordable digital products and services for consumers (see page 31).
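The DNS analogy above can be made concrete: just as a resolver maps a domain name to an IP address in a single automated transaction, a Digital Copyright Exchange would map a work identifier to machine-readable licensing terms with no human negotiation in the loop. The following is a hypothetical sketch only; the registry structure, work identifiers and licence terms are all invented for illustration and do not reflect any proposed implementation.

```python
# Hypothetical sketch of a machine-readable rights registry, loosely
# analogous to DNS: a query resolves a work identifier to licence terms
# automatically. All identifiers and terms below are invented.

RIGHTS_REGISTRY = {
    "isbn:978-0-00-000000-0": {"rights_holder": "Example Press",
                               "licence": "educational-use",
                               "fee_gbp": 0.50},
    "iswc:T-000.000.000-0":   {"rights_holder": "Example Music Ltd",
                               "licence": "streaming",
                               "fee_gbp": 0.01},
}

def lookup_licence(work_id):
    """Resolve a work identifier to its licensing terms, the way a DNS
    resolver maps a domain name to an address record."""
    terms = RIGHTS_REGISTRY.get(work_id)
    if terms is None:
        raise KeyError(f"no licensing record for {work_id}")
    return terms

terms = lookup_licence("isbn:978-0-00-000000-0")
print(terms["rights_holder"], terms["licence"])
```

The design point the Hargreaves review makes is precisely this reduction of a licensing negotiation to a fast, standardised lookup against interoperable databases.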

However, the shift towards a licensing culture will have broader implications. Whereas physical objects covered by copyright – books, CDs, journals – could be owned by consumers, leaving them free to lend them to friends or sell them on second hand, digital information obeys no such rules. Digital objects are instead licensed, and the licences dictate the terms under which they can be used. This changes the concept of ownership – a click of the ‘buy’ button on iTunes simply means consumers are buying a licence to use digital content, and committing themselves to behave in accordance with dense and often impenetrable terms and conditions. The trend is for large information providers to enforce their licences rigorously, with repercussions that can extend as far as revoking the right to access previously purchased digital materials (Guardian article, 2012).

In addition to calling into question the notion of ownership in the digital age, greater use of licensing, as opposed to limitations and exceptions to copyright law, has implications for long-term access to culture, along with the way that cultural goods are created. For example, institutions such as libraries are presently prevented from buying eBooks from major publishers, or are only able to access them through restrictive licences that pay little (or no) attention to the concept of public access to, and preservation of, culture. The shift in the cultural industries from the creation of physical products – CDs, DVDs, vinyl, books – to digital ones – MP3s, eBooks, streaming services – has created a marketplace where licensing terms are generating new revenue streams and new ways of allocating benefits to creators and rights holders. However, this new environment has yet to answer the question of how to balance ease of access to digital cultural goods with public access to information and preservation of the cultural record, or how to give appropriate recompense to the people who create culture in the age of the Internet.

Whether the future holds an expansion of licensing regimes or a renewed focus on updating copyright limitations and exceptions for the digital age, there is a growing level of political consensus that current national/international copyright and IPR regimes are no longer fit for purpose in a hyper-connected information age, and a fresh balance between the protection of creative incentives and economic imperatives for access and innovation must be struck. Vice President of the European Commission Neelie Kroes (and Commissioner responsible for the implementation of the Digital Agenda) wrote in a February 2011 blog post entitled “Is Copyright Working?” that the current copyright system is not succeeding in its objectives, that the battle to enforce it was costing millions and that innovative ideas for new systems of recognition and reward were often suppressed by rigid pre-digital legislation. A review of the Digital Agenda published in December 2012 identified the need to “update EU’s Copyright Framework” as one of 7 key 2013-2014 priorities designed to “stimulate the conditions to create growth and jobs in Europe”.

Copyright necessarily creates a proprietary monopoly (eLaw Journal 2011, page 59) in order to ensure that creators can be incentivised through the promise of profiting from their endeavours – and yet a balance must be struck, at the expense of this monopoly, to support freedom of expression and access to information whilst mobilising the substantial social and economic benefits of new transparent digital markets for products and services. Those who are motivated to create without consideration of profit will always do so irrespective of the copyright regime in operation, but open licensing approaches can help drive innovation by harnessing the participatory culture of the web, whilst affording those creators a legal basis (grounded in copyright law) for maintaining certain conditions for the use of their work, as opposed to simply releasing it into the public domain.

The Continuing Rise of the Open Source Movement

In conjunction with the rising importance of copyright and IPR reform on the international political agenda, the Open Source Movement has been identified as a means of fostering innovation in products and services through marshalling successive and iterative layers of creative contribution and collaboration, as an alternative to more proprietary creative models. This approach has underpinned the creation of a broad range of digital products, including open source web browsers (Mozilla Firefox), web publishing platforms (WordPress), mobile operating systems (the Linux-based Android is the world’s most popular smartphone operating system – comScore Report 2012), as well as programming languages (Perl, PHP) and server software (e.g. Apache, which hosts 55% of all active global websites), which have played a key role in the delivery and accessibility of modern web content. Wikipedia, the free online encyclopaedia based on the open source model, is currently the sixth most popular website in the world (Alexa site ranking). A 2012 comparative study (see page 5) assessing the accuracy and quality of its entries compared to other popular online encyclopaedias found that Wikipedia performed well against the Encyclopaedia Britannica.

However, it is worth noting that while open source approaches may have been initially driven by the aspiration to escape the limitations of proprietary systems, they effectively operate on the basis of existing copyright law, but in a context where the author or authors of the work voluntarily waive the right to restrict the reproduction, modification or commercial use of that work. Most open source licences therefore still preserve certain restrictions, such as a requirement to attribute the original authors, or a requirement that modified or derived versions of the software carry a different name or version number to the original in order to preserve the integrity of the author’s source code. Where those restrictions exist, the only current legal basis for enforcement remains traditional copyright. As the US Department of Defense’s Frequently Asked Questions on open source software webpage testifies, open source software licences are legally enforceable (underpinned by existing copyright law), as demonstrated by the US Court of Appeals for the Federal Circuit’s ruling on Jacobsen v. Katzer.

Wikipedia itself uses the Creative Commons open licensing system, which provides a simplified menu of standardised licensing options allowing copyright holders to waive certain rights and share their work publicly while retaining other rights and specifying certain conditions. In 2009 there were an estimated 350 million works licensed under the Creative Commons system (Creative Commons – history). Nevertheless, the Creative Commons guidance page takes care to emphasise that “Creative Commons licenses are not an alternative to copyright. They work alongside copyright and enable you to modify your copyright terms to best suit your needs.” (Creative Commons – about).

Increased Transparency

TREND: Greater transparency, access to public sector data and a growing momentum behind open government initiatives designed to empower citizens, reduce corruption and strengthen governance through new technologies

The trend towards open government and greater transparency is likely to continue, particularly in the developed world democracies (World Economic Forum 2012, page 144), where it is progressively evolving into a more interactive, web-based relationship (eJournal of eDemocracy and Open Government 2011, page 166). As of September 2012, 93 countries had enacted legislation designed to uphold the right of the public to access government information. This in itself is significant. As Professor Prakash Sarangi suggests in his 2012 article (see page 154) on corruption in India and the implementation of the 2005 Right to Information Act (RITA), “…the RITA encourages politicians and officials to put less stress on acting as agents of special interests, and more on acting as stewards of a public trust and even a common interest.”

Alongside rising online engagement with constituents in relation to information about policy processes and services, the UK and US administrations have also begun to publish public sector data, both in support of greater transparency and to sponsor the creation of innovative new services based on that data. At EU level, in 2010 the European Commission published its e-Government Action Plan (a component of the Digital Agenda Flagship Initiative, which is a key pillar of the EU 2020 Growth Strategy). The EU Action Plan includes a strong focus on empowering citizens and businesses to engage in the process of policy-making by increasing transparency and enhancing public access to government information (see page 5).

In April 2012, representatives from 55 governments met in Brasilia for the inaugural meeting of the Open Government Partnership, a new international initiative designed to promote transparency, empower citizens, reduce corruption and strengthen governance through new technologies. According to a 2012 correlational analysis by the World Economic Forum (see page 127) there is a positive relationship between countries with high levels of digitization (the wholesale adoption of networked digital technologies by governments, businesses and consumers) and levels of social transparency, public participation and the ability of governments to make information accessible to the public.

As government policy decisions (and their consequences) become increasingly visible to those they govern, driven substantially by the expansion of Internet connectivity, this process becomes politically difficult to reverse in the face of rising public expectation. According to the 2012 United Nations e-Government Survey (see page 3), these new opportunities have “strongly shifted expectations of what governments can and should do, using modern information and communication technologies, to strengthen public service and advance equitable, people-centred development.”

The standards expected by members of the public when using online consultation and engagement tools can also be challenging to achieve (in the UK, the Government’s 2011 ICT Strategy Strategic Implementation Plan identified this as one of the top three risks in using online channels). In addition, while most commentators focus on the positive benefits of open government, it has also been pointed out that transparency, participation and collaboration “should be viewed as means towards desirable ends, rather than administrative ends in themselves” (Open Government and e-Government: Democratic Challenges from a Public Value Perspective, 2012, page 83). Others express concern that if governments fixate on exciting new technologically enabled possibilities, they may run the risk of overlooking other key (if less exciting) elements of solving important policy problems (Open Knowledge Foundation Blog, September 2012).

Increased Censorship and Surveillance

TREND: An increased appetite and capacity for certain governments to monitor their citizens’ activities and control/limit the information they can access, assisted by progressively sophisticated approaches including the bulk monitoring of communications data across multiple platforms

The expansion of Internet connectivity and the convergence of information and communications technologies have simultaneously fuelled the appetite and capacity of many authoritarian administrations to monitor their citizens’ activity and control/limit the information and content they can access. In a 2012 survey of 47 countries, the Freedom on the Net report (see page 1) concluded that, while methods of control are evolving to become less visible, 20 of the countries examined have experienced a negative trajectory since January 2011 (Bahrain, Pakistan and Ethiopia were identified as the headline offenders). According to a 2010 Princeton University study, “A Taxonomy of Internet Censorship and Anti-Censorship” (see page 18), Internet censorship has steadily increased since 1993, with the sharpest incline occurring from 2007 to 2010. In the aforementioned Freedom on the Net report (see page 6), 19 of the 47 countries analysed have passed new laws or directives since January 2011 which could restrict online free speech, compromise user privacy, or punish individuals for posting certain types of content.

Censorship, surveillance and control methods can range from the low-tech (physical intimidation, incarceration and legislation limiting freedom of expression) to progressively high-tech methods (FOTN Report, see page 10), including the bulk monitoring of communications data across a range of platforms (mobile phone conversations, texts, emails, browsing history, social networking traffic, etc.). The most sophisticated instances can involve real-time monitoring of communications data linked to predetermined key words, email addresses and phone numbers. In a growing number of countries, speech recognition software is being used to scan spoken conversations to identify sensitive key words or particular individuals of interest. A 2010 paper sponsored by Stanford University’s Center on Democracy, Development and the Rule of Law on “Networked Authoritarianism in China” (see page 17) highlighted the example of sophisticated military-grade cyber-attacks launched against Google which specifically targeted the Gmail accounts of human rights activists (either working in China or working on China-related matters).
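The core mechanism of keyword-based monitoring described above is conceptually simple, even though real deployments operate at carrier scale across many protocols: messages containing any term from a predetermined watch-list are flagged for attention. A minimal sketch, with an invented watch-list and invented messages purely for illustration:

```python
# Simplified illustration of keyword-based traffic monitoring: messages
# containing any term from a predetermined watch-list are flagged.
# The watch-list and messages here are invented for illustration only.
import re

WATCH_LIST = {"protest", "rally"}  # predetermined sensitive terms (invented)

def flag_message(text, watch_list=WATCH_LIST):
    """Return the watched terms found in a message, if any."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return sorted(words & watch_list)

messages = ["Meet at the rally tomorrow", "Lunch at noon?"]
flagged = [(m, hits) for m in messages if (hits := flag_message(m))]
print(flagged)  # → [('Meet at the rally tomorrow', ['rally'])]
```

Production systems extend this basic pattern with real-time interception, address- and number-based selectors, and (as noted above) speech recognition over voice traffic.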

The OpenNet Initiative (ONI), a collaboration between the University of Toronto, the Berkman Center for Internet & Society at Harvard University and Ottawa’s SecDev Group, offers a number of helpful resources, including a selection of country and regional profiles of Internet censorship and filtering practices. It also offers a series of interactive maps which show the states and regions where each type of Internet filtering takes place, as well as dedicated maps indicating states and regions where YouTube is censored and where any of the five major social media platforms (including Facebook and Twitter) have been blocked or filtered. In an interview with the Guardian in April 2012, ONI principal investigator and Director of the Citizen Lab Ronald Deibert said: “…what we’ve found over the last decade is the spectrum of content that’s targeted for filtering has grown to include political content and security-related content, especially in authoritarian regimes. The scope and scale of content targeted for filtering has grown” (Guardian article, April 2012).

The Challenges of Supranational Internet Governance

TREND: The challenges of regulating a global borderless Internet at a supranational level whilst accommodating overlapping and competing national legal jurisdictions and frameworks will continue

The arrival of a global borderless and hyper-connected Internet has created new challenges in terms of the effective application of national legal jurisdictions and definitions of liability. In some countries Internet intermediaries (such as Internet Service Providers, web hosts and other online platforms) are increasingly being held responsible for content uploaded by third parties – often operating in other countries with different legal frameworks (Open Society Foundation – The Media and Liability for Content on the Internet, 2011, page 6).

In 2010 an Italian court convicted three Google executives of breaking the Italian Data Protection Code (BBC News article 2010) after a video showing the bullying of an autistic teenager was uploaded to Google’s online video service (shortly before Google acquired YouTube). The conviction stood even though the video was removed shortly after Google received a notification from Italian law enforcement officials. In December 2012 an Italian appeals court rescinded the conviction (New York Times, December 2012), but this example demonstrates how the violation of national legislation can be a potential source of liability for suppliers of Internet content and services, which can also lead to increasing instances of self-protective or defensive censorship (The Media and Liability on the Internet, 2011, page 6). Another example is Twitter’s decision in January 2012 to introduce a new system to censor specific tweets on a country-by-country basis, an approach previously adopted by Facebook, Google, eBay and Yahoo, which also filter content in a number of different national jurisdictions (Guardian article, January 2012). Twitter’s decision was also officially endorsed by the Government of Thailand, whose “lèse-majesté” regulations prohibit defamatory or insulting comments (online or offline) about the Royal Family, punishable by a prison sentence of up to 15 years (Guardian article, January 2012).

In December 2012 the regional data protection office in Schleswig-Holstein in Germany issued an order determining that Facebook’s practice of preventing its users from signing up for an account using a pseudonym (Facebook’s long-standing policy requires users to register using their real names) was in violation of German data protection legislation. The regional data protection commissioner sent letters to Facebook founder Mark Zuckerberg in California and to Facebook’s European headquarters in Ireland, threatening a fine of €20,000 if the order was not complied with (Guardian, January 2013). In January 2013, Facebook responded in an interview with the online magazine TechCrunch that its Dublin-based European operation was in compliance with both Irish and European data protection laws and that “we believe the orders are without merit and a waste of German tax payers money…”.

A further challenge exists in the form of rising levels of online criminal activity, often referred to as cybercrime. According to a 2012 report by Symantec, the cost of global cybercrime has now reached $110 billion per year. A 2012 briefing paper produced by the US Congressional Research Service (see page 6) underlined the fact that the digital technologies which support cybercrime, including Internet servers and communications devices, are often located in physical locations that do not coincide with the locations of either the perpetrator or the victim. As such, national law enforcement agencies face significant technical and jurisdictional challenges in successfully investigating and prosecuting cyber criminals. An illustration of the obstacles presented by jurisdiction can be found in the thwarted efforts of an active FBI investigation into the architects of the Koobface worm, which used Facebook and other social networks to infect 800,000 computers worldwide and earn the gang an estimated $2 million per year (BBC News, January 2012). In January 2012 an extensive report compiled by Facebook and the security firm Sophos identified five individuals based in Russia as responsible for the Koobface operation. However, despite this information being passed on to US law enforcement, no arrests have been made, as Article 61 of the Russian constitution prohibits its citizens from being extradited to another country to face charges.

A further example of the contemporary challenges of Internet governance can be found in the recent breakdown of negotiations to agree a new set of International Telecommunication Regulations (ITRs) in December 2012 at the World Conference on International Telecommunications in Dubai (What Really Happened in Dubai). The disagreements in Dubai reflect a clash of two different international approaches to the regulation of the Internet.

Since 1998 the Internet’s Domain Name System (DNS) has been administered by the Internet Corporation for Assigned Names and Numbers (ICANN), a California-based not-for-profit which currently operates under a Memorandum of Understanding with the US Department of Commerce. At the first and second World Summits on the Information Society (WSIS) in Geneva in 2003 and Tunis in 2005, several countries, including China, Brazil, India and Russia, expressed concern over the US government’s proximity to the technical levers operating the DNS and sought to bring ICANN’s operations under the centralised control of an international agency such as the United Nations. In response, the United States, the EU and others strongly resisted any moves to undermine the status quo, which they believe preserves ICANN’s independence and limits the scope for the Internet’s technical operation to be manipulated by governments.

The compromise brokered at the 2005 WSIS was the creation of the annual Internet Governance Forum sponsored by the United Nations to provide a platform for multi-stakeholder input (from civil society and industry as well as governments) and discussion on the governance of the Internet. Nevertheless, the polarisation of approaches between the current model of multi-stakeholder governance (which maintains a link with the US administration) as opposed to the traditional intergovernmental model continues to characterise the debate around the governance of the Internet.

During the Dubai negotiations in December 2012 the United States and the European Union blocked a series of resolutions, including one tabled by China, Russia and several Arab states which sought to give governments “equal rights to manage the internet” (Economist Blog, December 2012). As a result of multiple attempts to introduce references to the Internet into the new ITR, which the United States and others feared might embolden governments to censor or meddle with the Internet’s infrastructure, the conference ended in failure, with only 89 of the 144 countries present agreeing to sign the revised ITR (Economist, December 2012).