Literature Review

The literature review is a fundamental building block of the IFLA Trend Report. It reviews and summarises over 170 existing studies, trend forecasts, journal articles and academic papers which examine future trends with the potential to affect the global information environment. The Literature Review contains hyperlinks to all sources referenced in the document.

Cross-cutting Political and Regulatory Trends

This category covers cross-cutting political and regulatory trends including:

  • Migration towards new intellectual property, copyright and patent regimes which accommodate technological innovation and new social patterns of consumption whilst supporting creativity and economic sustainability in both the developed and developing world.
  • Greater transparency, access to public sector data and a growing momentum behind open government initiatives designed to empower citizens, reduce corruption and strengthen governance through new technologies.
  • An increased appetite and capacity for certain governments to monitor their citizens’ activities and control/limit the information they can access, assisted by progressively sophisticated approaches including the bulk monitoring of communications data across multiple platforms.
  • The challenges of regulating a global borderless Internet at a supranational level whilst accommodating overlapping and competing national legal jurisdictions and frameworks will continue.

 

Striking a new balance - the reform of intellectual property and copyright regimes

TREND: Migration towards new intellectual property, copyright and patent regimes which accommodate technological innovation and new social patterns of consumption whilst supporting creativity and economic sustainability in both the developed and developing world.

The effective implementation of copyright and IPR regimes has always faced potential disruption from emerging technologies, from photocopiers to cassette and VHS tapes (Ars Technica – 100 Years of Big Content Fearing Technology). As motion picture lobbyist Jack Valenti testified to the US Congress in 1982 on the looming dangers of the video cassette recorder (1982 Congressional Hearing on the Home Recording of Copyrighted Works):

We are going to bleed and bleed and haemorrhage, unless this Congress at least protects one industry that is able to retrieve a surplus balance of trade and whose total future depends on its protection from the savagery and the ravages of this machine.

Perhaps unsurprisingly, the arrival of an increasingly global, hyper-connected Internet, combined with new technologies which support the rapid identification, replication and transmission of all forms of digital expression on an unprecedented scale, has led to what many describe as a copyright crisis (Journal of the Copyright Society of the USA, page 166).

In his 2009 report to the Parliamentary Assembly of the Council of Europe on the Future of Copyright in Europe, Christophe Geiger (Director of the Centre for Intellectual Property Studies at Strasbourg University) told the Committee for Science, Education & Technology that copyright is facing a crisis of legitimacy (see page 19). Earlier in 2009, the Interim Digital Britain Report, commissioned by then Prime Minister Gordon Brown, called for a copyright framework which is effective and enforceable whilst identifying a critical disconnect between the current rules and emerging socially acceptable behaviour facilitated by technology (see page 39).

Others have also cited the expanding propensity of copyright and IPR regimes to be associated with restricting access to information and increasingly criminalising consumers (Neil Weinstock Netanel, Copyright’s Paradox, 2008, page 8). 

In addition to these issues, there is a growing concern that existing copyright and IPR regimes have the potential to increasingly act as a brake on economic competitiveness and growth, particularly in relation to exploiting innovative new business models, services and products fuelled by digital technologies and the Internet’s unique participatory culture. In November 2010 the UK Prime Minister, David Cameron, commissioned a Review of Intellectual Property and Growth, led by Professor Ian Hargreaves, because the government had diagnosed “the risk that the current intellectual property framework might not be sufficiently well designed to promote innovation and growth in the UK economy.” (Intellectual Property Review 2011, page 1).

In his final report, published in May 2011, Professor Hargreaves responded to what he called the Prime Minister’s “exam question” by identifying that “Copyright, once the exclusive concern of authors and their publishers, is today preventing medical researchers studying data and text in pursuit of new treatments. Copying has become basic to numerous industrial processes, as well as to a burgeoning service economy based upon the Internet. The UK cannot afford to let a legal framework designed around artists impede vigorous participation in these emerging business sectors.” (Ibid)

Of course, part of the problem is that for any regulatory mechanism to be effective it needs to be capable of adapting faster than the system of actors and behaviours that it seeks to regulate. Given the accelerated pace of social and commercial behavioural change driven by hyper-connected digital technologies (which themselves continue to evolve rapidly), it is hard to envisage any system capable of universal control, or a legal regime which maps precisely onto lived experience (Michael J. Madison, University of Pittsburgh School of Law research paper, 2010, page 352).

Yet if we conclude that the current design of copyright and IPR regimes is on a collision course with current social norms and evolving new business models in a digital age, then the question emerges: what would be the key characteristics of a reformed copyright/IPR system? In February 2011, WIPO Director General Francis Gurry called for an infrastructure which permitted simplified global licensing and warned that the current complex copyright system risked losing public support if it could not be made more accessible and intelligible (see 2011 WIPO press release).

Two months later, one of the central recommendations emerging from the 2011 Hargreaves review of intellectual property (see page 30) was for the UK Government to establish an automated online Digital Copyright Exchange which would ultimately have the capacity to operate so that rights licensing could become a one-click process (in the same way that the administration of the Internet’s Domain Name System supports machine-to-machine communication to connect users to a website within a few seconds). This would involve government acting as a convenor for other stakeholders (including the creative industries) to establish a network of interoperable databases to support a common platform for licensing transactions. Similarly, the Digital Agenda for Europe, one of the Flagship Initiatives of the European Union’s Europe 2020 Growth Strategy published in 2009, included a commitment to simplifying pan-European copyright clearance, management and cross-border licensing for online works (Digital Agenda Action 1). In both cases policymakers hope that, if successfully implemented, these approaches would increase the transparency, contestability and efficiency of digital content markets, reduce licensing transaction costs, facilitate dispute resolution and generate a greater range of quality, affordable digital products and services for consumers (see page 31).
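To make the “one-click” aspiration more concrete, the sketch below imagines a licensing transaction reduced to a machine-readable query against a shared rights registry. It is a minimal, purely illustrative sketch: the identifiers, fields and fees are invented assumptions, not features of the proposed Digital Copyright Exchange or of any existing system.

```python
# Hypothetical sketch of the kind of machine-to-machine rights lookup a
# Digital Copyright Exchange might support. All identifiers, fields and
# fees below are illustrative assumptions, not part of any real system.

RIGHTS_REGISTRY = {
    # work identifier -> machine-readable licensing terms
    "isbn:978-0-00-000000-2": {
        "rights_holder": "Example Publishing Ltd",
        "territories": ["GB", "IE"],
        "uses": {"lending": 0.50, "streaming": 0.10},  # fee per use, GBP
    },
}

def request_licence(work_id: str, use: str, territory: str):
    """Return licence terms for a requested use, or None if unavailable."""
    record = RIGHTS_REGISTRY.get(work_id)
    if record is None or territory not in record["territories"]:
        return None
    fee = record["uses"].get(use)
    if fee is None:
        return None
    return {
        "work": work_id,
        "licensor": record["rights_holder"],
        "use": use,
        "territory": territory,
        "fee": fee,
    }

if __name__ == "__main__":
    print(request_licence("isbn:978-0-00-000000-2", "lending", "GB"))
```

In practice, of course, the difficulty lies less in the query itself than in establishing the interoperable databases and common identifiers that would have to sit behind it.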

However, the shift towards a licensing culture will have broader implications. Whereas physical objects covered by copyright – books, CDs, journals – could be owned by consumers, leaving them free to loan them to friends or sell them on second hand, digital information obeys no such rules. Digital objects are instead licensed, and the licenses dictate the terms under which they can be used. This changes the concept of ownership – a click of the ‘buy’ button on iTunes simply means consumers are buying a license to use digital content, and committing themselves to behave in accordance with the dense and often impenetrable terms and conditions. The trend is for large information providers to enforce their licenses rigorously, with repercussions that can extend as far as revoking the right to access previously purchased digital materials (Guardian Article, 2012).

In addition to calling into question the notion of ownership in the digital age, greater use of licensing, as opposed to limitations and exceptions to copyright law, has implications for long-term access to culture, along with the way that cultural goods are created. For example, institutions such as libraries are presently prevented from buying eBooks from major publishers, or are only able to access them through restrictive licenses that pay little (or no) attention to the concept of public access to, and preservation of, culture. The shift from the creation of physical products in the cultural industries – CDs, DVDs, vinyl, books – to digital formats – MP3s, eBooks, streaming services – has created a marketplace where licensing terms are generating new revenue streams and new ways of allocating benefits to creators and rights holders. However, this new environment has yet to answer the question of how to balance ease of access to digital cultural goods against public access to information and preservation of the cultural record, or how to give appropriate recompense to the people who create culture in the age of the Internet.

Whether the future holds an expansion of licensing regimes or a renewed focus on updating copyright limitations and exceptions for the digital age, there is a growing level of political consensus that current national/international copyright and IPR regimes are no longer fit for purpose in a hyper-connected information age, and a fresh balance between the protection of creative incentives and economic imperatives for access and innovation must be struck. Vice President of the European Commission Neelie Kroes (and Commissioner responsible for the implementation of the Digital Agenda) wrote in a February 2011 blog post entitled “Is Copyright Working?” that the current copyright system is not succeeding in its objectives, that the battle to enforce it was costing millions and that innovative ideas for new systems of recognition and reward were often suppressed by rigid pre-digital legislation. A review of the Digital Agenda published in December 2012 identified the need to “update EU’s Copyright Framework” as one of 7 key 2013-2014 priorities designed to “stimulate the conditions to create growth and jobs in Europe”.

Copyright necessarily creates a proprietary monopoly (eLaw Journal 2011, page 59) in order to ensure that creators can be incentivised through the promise of profiting from their endeavours – and yet a balance must be struck, at the expense of this monopoly, to support freedom of expression and access to information whilst mobilising the substantial social and economic benefits of new transparent digital markets for products and services. Those who are incentivised to create without considerations of profit will always do so irrespective of the copyright regime in operation, but open licensing approaches can help drive innovation by harnessing the participatory culture of the web, whilst affording those creators a legal basis (grounded in copyright law) for maintaining certain conditions for the use of their work as opposed to simply releasing it into the public domain.

The Continuing Rise of the Open Source Movement

In conjunction with the rising importance of copyright and IPR reform on the international political agenda, the Open Source Movement has been identified as a means of fostering innovation in products and services through marshalling successive and iterative layers of creative contribution and collaboration, as an alternative to more proprietary creative models. This approach has underpinned the creation of a broad range of digital products including open source web browsers (Mozilla Firefox), web publishing platforms (Wordpress), mobile operating systems (Linux-based Android is the world’s most popular smartphone operating system – ComScore Report 2012) as well as programming languages (Perl, PHP) and server software (e.g. Apache – which hosts 55% of all active global websites) which have played a key role in the delivery and accessibility of modern web content. Wikipedia, the free online encyclopaedia based on the open source model, is currently the sixth most popular website in the world (Alexa site ranking). A 2012 comparative study (see page 5) assessing the accuracy and quality of its entries compared to other popular online encyclopaedias found that Wikipedia performed well against the Encyclopaedia Britannica.

However, it is worth noting that while open source approaches may have been initially driven by the aspiration to escape the limitations of proprietary systems, they effectively operate on the basis of existing copyright law, but in a context where the author or authors of the work voluntarily waive the right to restrict the reproduction, modification or commercial use of that work. Most open source licenses therefore still preserve certain restrictions, such as a requirement to attribute the original authors, or a requirement that modified or derived versions of the software carry a different name or version number to the original software in order to preserve the integrity of the author’s source code (Opensource.org). Where those restrictions exist, the only current legal basis for enforcement remains traditional copyright. As the US Department of Defense’s Frequently Asked Questions on open source software webpage testifies, open source software licenses are legally enforceable (underpinned by existing copyright law) as demonstrated by the US Court of Appeals for the Federal Circuit’s ruling on Jacobsen v. Katzer.

Wikipedia itself uses the Creative Commons open licensing system, which provides a simplified menu of standardised licensing options allowing copyright holders to waive certain rights and share their work publicly while retaining other rights and specifying certain conditions of use. In 2009 there were an estimated 350 million works licensed under the Creative Commons system (Creative Commons – history). Nevertheless, the Creative Commons guidance page takes care to emphasise that “Creative Commons licenses are not an alternative to copyright. They work alongside copyright and enable you to modify your copyright terms to best suit your needs.” (Creative Commons – about).

Increased Transparency

TREND: Greater transparency, access to public sector data and a growing momentum behind open government initiatives designed to empower citizens, reduce corruption and strengthen governance through new technologies

The trend towards open government and greater transparency is likely to continue, particularly in the developed world democracies (World Economic Forum 2012, page 144), where it is progressively evolving into a more interactive, web-based relationship (eJournal of eDemocracy and Open Government 2011, page 166). As of September 2012, 93 countries had enacted legislation designed to uphold the right of the public to access government information (right2info.org). This in itself is significant. As Professor Prakash Sarangi suggests in his 2012 article (see page 154) on Corruption in India and the implementation of the 2005 Right to Information Act (RITA), “…the RITA encourages politicians and officials to put less stress on acting as agents of special interests, and more on acting as stewards of a public trust and even a common interest.”

Alongside rising online engagement with constituents in relation to information about policy processes and services, the UK (http://data.gov.uk/about-us) and US (http://www.data.gov/home) administrations have also begun to publish public sector data, both in support of greater transparency and to sponsor the creation of innovative new services based on that data. At EU level, in 2010 the European Commission published its e-Government Action Plan (a component of the Digital Agenda Flagship Initiative, which is a key pillar of the EU 2020 Growth Strategy). The EU Action Plan includes a strong focus on empowering citizens and businesses to engage in the process of policy-making by increasing transparency and enhancing public access to government information (see page 5).

In April 2012, representatives from 55 governments met in Brasilia for the inaugural meeting of the Open Government Partnership, a new international initiative designed to promote transparency, empower citizens, reduce corruption and strengthen governance through new technologies. According to a 2012 correlational analysis by the World Economic Forum (see page 127) there is a positive relationship between countries with high levels of digitization (the wholesale adoption of networked digital technologies by governments, businesses and consumers) and levels of social transparency, public participation and the ability of governments to make information accessible to the public.

As government policy decisions (and their consequences) become increasingly visible to those they govern, driven substantially by the expansion of Internet connectivity, this process becomes politically difficult to reverse in the face of rising public expectation. According to the 2012 United Nations e-Government Survey (see page 3), these new opportunities have “strongly shifted expectations of what governments can and should do, using modern information and communication technologies, to strengthen public service and advance equitable, people-centred development.”

The standards expected by members of the public when using online consultation and engagement tools can also be challenging to achieve (in the UK, the Government’s 2011 ICT Strategy Strategic Implementation Plan identified this as one of the top three risks in using online channels). In addition, while most commentators focus on the positive benefits of open government, it has also been pointed out that transparency, participation and collaboration “should be viewed as means towards desirable ends, rather than administrative ends in themselves” (Open Government and e-Government: Democratic Challenges from a Public Value Perspective, 2012, page 83). Others express concern that if governments overly fixate on exciting new technologically enabled possibilities, they may run the risk of overlooking other key (if less exciting) elements of solving important policy problems (Open Knowledge Foundation Blog, September 2012).

Increased Censorship and Surveillance

TREND: An increased appetite and capacity for certain governments to monitor their citizens’ activities and control/limit the information they can access, assisted by progressively sophisticated approaches including the bulk monitoring of communications data across multiple platforms

The expansion of Internet connectivity and the convergence of information and communications technologies have simultaneously fuelled the appetite and capacity of many authoritarian administrations to monitor their citizens’ activity and control/limit the information and content they can access. In a 2012 survey of 47 countries, the Freedom on the Net report (see page 1) concluded that while methods of control are evolving to become less visible, 20 of the countries examined had experienced a negative trajectory since January 2011 (Bahrain, Pakistan and Ethiopia were identified as the headline offenders). According to a 2010 Princeton University study, “A Taxonomy of Internet Censorship and Anti-Censorship” (see page 18), Internet censorship has steadily increased since 1993, with the sharpest incline occurring from 2007-2010. In the aforementioned Freedom on the Net report (see page 6), 19 of the 47 analysed countries have passed new laws or directives since January 2011 which could negatively affect online free speech or user privacy, or punish individuals posting certain types of content.

Censorship, surveillance and control methods can range from the low-tech (physical intimidation, incarceration and legislation limiting freedom of expression) to progressively high-tech methods (FOTN Report, see page 10), including the bulk monitoring of communications data across a range of platforms (mobile phone conversations, texts, emails, browsing history, social networking traffic, etc.). The most sophisticated instances can involve real-time monitoring of communications data linked to predetermined key words, email addresses and phone numbers. In a growing number of countries speech recognition software is being used to scan spoken conversations to identify sensitive key words or particular individuals of interest. A 2010 paper sponsored by Stanford University’s Center on Democracy, Development and the Rule of Law on “Networked Authoritarianism in China” (see page 17) highlighted the example of sophisticated military-grade cyber-attacks launched against Google which specifically targeted the Gmail accounts of human rights activists (either working in China or working on China-related matters).
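The basic mechanism behind such bulk monitoring can be illustrated with a deliberately simple sketch: matching a stream of messages against a predetermined watch list of keywords and addresses. Everything in the example below is invented for illustration; real systems operate at vastly greater scale and sophistication.

```python
# Illustrative sketch only: keyword- and address-based filtering of a
# message stream, the basic mechanism behind the bulk-monitoring approaches
# described above. The watch list and messages are invented for the example.

WATCH_TERMS = {"protest", "rally"}
WATCH_ADDRESSES = {"activist@example.org"}

def flag_message(sender: str, text: str) -> bool:
    """Return True if the message matches a watched address or keyword."""
    if sender in WATCH_ADDRESSES:
        return True
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & WATCH_TERMS)

messages = [
    ("friend@example.com", "See you at dinner tonight"),
    ("activist@example.org", "The rally starts at noon"),
]

for sender, text in messages:
    if flag_message(sender, text):
        print("flagged:", sender, "-", text)
```

The same matching logic extends naturally to transcripts produced by speech recognition, which is what makes the scanning of spoken conversations described above technically straightforward once the transcription exists.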

The OpenNet Initiative (ONI), a collaboration between the University of Toronto, the Berkman Centre for Cyber Law at Harvard University and Ottawa’s SecDev Group, offers a number of helpful resources, including a selection of country and regional profiles on Internet censorship and filtering practices. It also offers a series of interactive maps which show the states and regions where each type of Internet filtering takes place, as well as dedicated maps indicating states and regions where YouTube is censored and where any of the five major social media platforms (including Facebook and Twitter) have been blocked or filtered. In an interview with the Guardian in April 2012, ONI principal investigator and Director of the Citizen Lab Ronald Deibert said “…what we’ve found over the last decade is the spectrum of content that’s targeted for filtering has grown to include political content and security-related content, especially in authoritarian regimes. The scope and scale of content targeted for filtering has grown” (Guardian article, April 2012).

The Challenges of Supranational Internet Governance

TREND: The challenges of regulating a global borderless Internet at a supranational level whilst accommodating overlapping and competing national legal jurisdictions and frameworks will continue

The arrival of a global borderless and hyper-connected Internet has created new challenges in terms of the effective application of national legal jurisdictions and definitions of liability. In some countries Internet intermediaries (such as Internet Service Providers, web hosts and other online platforms) are increasingly being held responsible for content uploaded by third parties – often operating in other countries with different legal frameworks (Open Society Foundation – The Media and Liability for Content on the Internet, 2011, page 6).

In 2010 an Italian court convicted three Google executives of breaking the Italian Data Protection Code (BBC News article 2010) after a video showing the bullying of an autistic teenager was uploaded to Google’s online video service (shortly before Google acquired YouTube). The conviction was secured even though the video was removed shortly after Google received a notification from Italian law enforcement officials. In December 2012 an Italian appeals court rescinded the conviction (New York Times, December 2012), but this example demonstrates how the violation of national legislation can be a potential source of liability for suppliers of Internet content and services, which can also lead to increasing instances of self-protective or defensive censorship (The Media and Liability on the Internet, 2011, page 6). Another example is Twitter’s decision in January 2012 to introduce a new system to censor specific tweets on a country-by-country basis, an approach previously adopted by Facebook, Google, eBay and Yahoo, which also filter content in a number of different national jurisdictions (Guardian article, Jan 2012). Twitter’s decision was also officially endorsed by the Government of Thailand, whose “lese-majeste” regulations prohibit defamatory or insulting comments (online or offline) about the Royal Family, punishable by a prison sentence of up to 15 years (Guardian article, January 2012).

In December 2012 the regional data protection office in Schleswig-Holstein in Germany issued an order determining that Facebook’s current practice of preventing its users from signing up for an online account using a pseudonym (Facebook’s long-standing policy requires users to register using their real names) was in violation of German data protection legislation. The regional data protection commissioner sent letters to Facebook founder Mark Zuckerberg in California and to Facebook’s European headquarters in Ireland, threatening a fine of €20,000 if the order was not complied with (Guardian, January 2013). In January 2013, Facebook responded in an interview with online magazine TechCrunch that its Dublin-based European operation was in compliance with both Irish and European data protection laws and that “we believe the orders are without merit and a waste of German tax payers money…”.

A further challenge exists in the form of rising levels of online criminal activity, often referred to as cybercrime. According to a 2012 report by Symantec, the cost of global cybercrime has now reached $110 billion per year. A 2012 briefing paper produced by the US Congressional Research Service (see page 6) underlined the fact that the digital technologies which support cybercrime, including Internet servers and communications devices, are often located in physical locations that do not coincide with the locations of either the perpetrator or the victim. As such, national law enforcement agencies face significant technical and jurisdictional challenges in successfully investigating and prosecuting cyber criminals. An illustration of the obstacles presented by jurisdiction can be found in the thwarted efforts of an active FBI investigation into the architects of the Koobface worm, which used Facebook and other social networks to infect 800,000 computers worldwide and earn the gang an estimated $2 million per year (BBC News, January 2012). In January 2012 an extensive report compiled by Facebook and security firm Sophos named five individuals based in Russia as responsible for the Koobface operation. However, despite this information being passed on to US law enforcement, no arrests have been made, as Article 61 of the Russian constitution prohibits its citizens from being extradited to another country to face charges.

A further example of the contemporary challenges of Internet governance can be found in the recent breakdown of negotiations to agree revised International Telecommunication Regulations (ITRs) in December 2012 at the World Conference on International Telecommunications in Dubai (What really happened in Dubai – internetgovernance.org). The disagreements in Dubai reflect a clash of two different international approaches to the regulation of the Internet.

Since 1998 the Internet’s Domain Name System (DNS) has been administered by the Internet Corporation for Assigned Names and Numbers (ICANN) – a California-based not-for-profit which currently operates under a Memorandum of Understanding with the US Department of Commerce. At the first and second World Summits on the Information Society (WSIS) in Geneva in 2003 and Tunis in 2005, several countries including China, Brazil, India and Russia expressed concern over the US government’s proximity to the technical levers operating the DNS and sought to bring ICANN’s operations under the centralised control of an international agency such as the United Nations. In response, the United States, the EU and others strongly resisted any moves to undermine the status quo, which they believe preserves ICANN’s independence and limits the scope for the Internet’s technical operation to be manipulated by governments.

The compromise brokered at the 2005 WSIS was the creation of the annual Internet Governance Forum sponsored by the United Nations to provide a platform for multi-stakeholder input (from civil society and industry as well as governments) and discussion on the governance of the Internet. Nevertheless, the polarisation of approaches between the current model of multi-stakeholder governance (which maintains a link with the US administration) as opposed to the traditional intergovernmental model continues to characterize the debate around the governance of the Internet.

During the Dubai negotiations in December 2012 the United States and the European Union blocked a series of resolutions, including one tabled by China, Russia and several Arab states which sought to give governments “equal rights to manage the internet” (Economist Blog, December 2012). Following multiple attempts to introduce references to the Internet into the new ITRs, which the United States and others feared might embolden governments to censor or meddle with the Internet’s infrastructure, the conference ended in failure, with only 89 of the 144 countries present agreeing to sign the revised ITRs (Economist, December 2012).

Social Trends

This category covers a range of social and educational trends including:

  • Behavioural advertising and personalised search optimisation contribute to the creation of balkanised “echo chamber communities” insulated from unfamiliar or alternative cultures and perspectives.
  • The size of the digital universe will continue to expand exponentially with information and content shaped by a kaleidoscope of social, political, corporate (and on occasion extremist) agendas.
  • Technology which makes access to information easier and cheaper while facilitating communication and collective action will support both positive outcomes (empowering individuals, increasing civic participation and corporate accountability) and negative outcomes (empowering cyber criminals and terrorist/extremist networks).
  • Populations in the developed world will continue to age, while the developing world grows younger leading to differing usage patterns and competing demands on the information environment. Hyperconnectivity expands the influence and role of migrants and diasporas.
  • Rising impact of online education resources (including open access to scholarly research and massive open online courses) combined with the emergence of new media and information literacy skills offer flexible non-formal and informal skill accumulation pathways.

Internet Balkanisation and Increasingly Automated Personalisation

TREND: Behavioural advertising and personalised search optimisation contribute to the creation of balkanised “echo chamber communities” insulated from unfamiliar or alternative cultures and perspectives

In 1995 Nicholas Negroponte, founder of the Massachusetts Institute of Technology Media Lab, published his book “Being Digital” in which, alongside multiple predictions on the future applications of technology, he introduced the concept of the “Daily Me”, a virtual newspaper which was customised for each individual subscriber (Making the Daily Me, Neil Thurman, page 2). Today it is manifestly evident that Mr Negroponte’s vision was a prescient one. 

A 2012 paper from the University of Pennsylvania (see page 2) claims that web personalisation (using statistical techniques to infer a customer’s preferences and recommend content suitable to them) has now become ubiquitous across multiple providers of online news, media and services. A 2011 study from City University (see page 5) surveyed the use of online personalisation across eleven national US and UK news websites and reported an increasing range of approaches including contextual recommendations (links to external content), geo-targeted editions (based on user location), aggregated targeted filtering (selections of news stories filtered by general user popularity) and profile based recommendations (based on data on user behaviour from registration or imported from social media sites). A 2012 article, Existing Trends and Techniques for Web Personalization, in the International Journal of Computer Science Issues (see page 433) reports that web personalisation has become an indispensable tool for both web-based organisations and end users to deal with content overload, and that most major Internet companies are implementing personalisation systems.

In Republic.com 2.0 (2007) Cass Sunstein argues that the Internet has a propensity to foster social fragmentation by encouraging individuals to sort themselves into deliberate enclaves of like-minded people and assisting them in filtering out unwanted or opposing opinions (referenced in Comparative Research in Law & Political Economy, Book Review by Peter S. Jenkins, page 1). In studying hyperlinking patterns across 1,400 political blogs, Sunstein reported that 91% of links were to like-minded sites, indicating a tendency rarely to highlight or draw attention to opposing opinions (interview with Salon Magazine 2007). In a 2010 article for Scientific American, founder of the World Wide Web Tim Berners-Lee argued that the effects of social networking companies such as Facebook and LinkedIn shutting their users into online walled gardens could cause the Internet to be “broken up into fragmented islands”.

In his 2011 book The Filter Bubble, Eli Pariser contended that Facebook’s decision to change its filtering algorithm for status updates and newsfeeds (so that by default users would only see material from friends they had recently interacted with) had the unintended consequence of suppressing updates from friends who did not share that user’s political and social values (Wall Street Journal article, 2011). Pariser also argued that increasing trends towards personalised search optimisation (Google uses 57 different metrics to predict which search results will be displayed for different users) can also produce unexpected outcomes (2011 Slate Magazine article). In the aftermath of the Deepwater Horizon oil spill in 2010, one user’s Google search for “BP” yielded a set of links on investment opportunities with BP – while another generated links providing information on the oil spill (2010 interview with Salon Magazine). Pariser suggested that this invisible and automated customisation of our web experience risks trapping individuals in personalised information bubbles which insulate us from uncomfortable or unfamiliar views, cultures and perspectives.

Alongside Google, online retailers and media providers such as Amazon and Netflix have well-established personalised recommender systems which direct users to content that is likely to interest them based on previous choices and search histories. In a 2011 interview with the New York Times, computer scientist Jaron Lanier claimed that this trend has a tendency to cocoon users within a personalised echo chamber where more and more of what they experience online conforms to an image of themselves generated by software.
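The logic that produces this “echo chamber” effect can be illustrated with a minimal sketch of a content-based recommender: candidate items are scored by how closely their topics overlap with a user’s past choices, so familiar material reliably rises to the top. The items, topics and weighting scheme below are illustrative assumptions only, not a description of any provider’s actual system.

```python
# Minimal, assumption-laden sketch of the personalisation logic described
# above: score candidate items by how closely their topics overlap with a
# user's viewing history, so familiar material rises to the top.

from collections import Counter

history = ["politics", "politics", "technology", "film"]  # past choices
candidates = {
    "Item A": {"politics", "economy"},
    "Item B": {"sport", "travel"},
    "Item C": {"technology", "film"},
}

profile = Counter(history)  # weight topics by how often the user chose them

def score(topics: set) -> int:
    """Sum the user's historical interest in each of an item's topics."""
    return sum(profile[t] for t in topics)

ranked = sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, topics in ranked:
    print(name, score(topics))
```

Because an unfamiliar topic can never outscore a well-established interest under this kind of weighting, diversity has to be injected deliberately if it is wanted – which is precisely Pariser’s concern.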

However, it can also be argued that these trends merely reinforce or reflect traditional and long-standing human tendencies to engage with people, ideas and content which strike a chord with existing values and interests – in terms of adopting a positive test strategy (Confirmation, Disconfirmation and Information in Hypothesis Testing, page 211, American Psychological Association 1987). Furthermore, research by the University of Michigan (commissioned by Facebook) revealed that while Facebook users are more likely to look at links or pictures shared by close friends, in reality they tend to get far more information from distant contacts, many of whom share items users would not otherwise be aware of. This conforms to previous influential sociological research by Mark Granovetter (American Journal of Sociology: The Strength of Weak Ties, 1973) which found that individuals tend to form clusters of a few close friends, alongside larger numbers of disparate social acquaintances.

The International Journal of Computer Science Issues article (see page 430) suggests that the tremendous growth in the number, size and complexity of information resources available online makes it increasingly difficult for users to access relevant information in a context where an individual’s capacity to absorb, read and digest information is essentially fixed. In this context, personalisation is a necessary evil to prevent the information universe becoming progressively unintelligible to human enquiry. It is worth noting that Facebook’s 2010 decision to suppress certain types of newsfeed updates was based on complaints from Facebook users that they were being inundated with updates from friends they barely knew (Wall Street Journal article 2010).

Information Rich or Information Overload? The Blessing and Curse of Abundant Choice

TREND: The size of the digital universe will continue to expand exponentially with information and content shaped by a kaleidoscope of social, political, corporate (and on occasion extremist) agendas, alongside a trend towards smaller more private online social networks

According to the International Data Corporation’s 2011 Digital Universe Study, in 2010 the quantity of information transmitted globally exceeded 1 zettabyte for the first time. With the amount of information within the digital universe predicted to double every two years, how will the neurological limits of the human brain for processing information constrain or define future social networking and the consumption of information and content?
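Taken at face value, the “doubling every two years” assumption implies growth along the lines of the short extrapolation below. This is a purely arithmetic illustration of the claim as stated, not an independent forecast.

```python
# Worked illustration of the report's own assumption: roughly 1 zettabyte
# in 2010, doubling every two years. A simple extrapolation, not a forecast.

base_year, base_zb = 2010, 1.0
for year in range(2010, 2021, 2):
    size = base_zb * 2 ** ((year - base_year) / 2)
    print(year, f"{size:.0f} ZB")
# Under this assumption, 2020 would see roughly 32 ZB.
```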

The Role of Information Literacy

In a context where 250 million websites, 150 million blogs, 25 million tweets and 4 billion Flickr images compete for our attention – with an additional 24 hours of video uploaded to YouTube every minute – the amount of new digital content created in 2011 amounts to several million times that contained in all books ever written (2011 report by think tank DEMOS, page 12). Given the ongoing explosion of choice in the range of digital content and information we can potentially consume, information literacy skills will become increasingly important as a tool for authenticating information and differentiating between content presented as fact whilst often in reality being shaped by a diverse range of social, political, corporate and occasionally extremist agendas. As Miller & Bartlett (2012, page 37) suggest:

The key challenge is that the specific nature of the Internet makes telling the difference between viable and unviable truth claims particularly difficult. Many of the processes and strategies we use to do this offline either no longer apply, or have become more difficult and less reliable.

There is evidence to suggest that many individuals may not be sufficiently critical of information they find online. 2010 research from the Oxford Internet Institute on patterns of online trust (in the UK) reported that “trust in people providing Internet services” exceeded trust in other major institutions including newspapers, corporations and government. Furthermore, according to the UK Journal of Information Literacy (see page 37), decisions about information quality are often based on site design rather than more accurate checks: 15% of 12-15 year olds don’t consider the veracity of search results and just visit the sites they “like the look of”. The pitfalls of this approach are illustrated by the site http://www.martinlutherking.org/, which claims to offer “a valuable resource for parents and teachers alike” but in reality is hosted by the white supremacist group Stormfront.

Indeed there are indications that such resources are proliferating. A 2009 report from the Simon Wiesenthal Centre identified over 8,000 hate and terrorist websites and claimed that this number was growing at a rate of 30% per year. The report also suggested that extremists are dynamically leveraging new technologies such as online videos via YouTube and Facebook, as well as blogs and online virtual gaming. In 2010 a study from Florida University examined online games containing racism and violence from 724 white supremacist websites and concluded that the purpose of these games was to indoctrinate players with racist ideologies and to rehearse aggressive behaviour towards minorities, which may influence subsequent real-world interactions.

These trends present a significant challenge to the educational establishment. As Miller & Bartlett suggest in their 2012 article “Digital Fluency: towards young people’s critical use of the Internet”:

The Internet has become central to learning, but the skills to use it appropriately and well have not become central to learning how to learn. The era of mass, unmediated information needs to be attended by a new educational paradigm based on a renewal of critical, sceptical, savvy thought fit for the online age. Doubtless, today's teachers and librarians deserve sympathy because the speed of change has been very rapid and education curricula have as little free time as education and literacy professionals do. However, education must keep pace with society's turbulence, not vice versa.

The Trend Towards Intimacy in Social Networking

In his 2010 book “How Many Friends Does One Person Need?”, Robin Dunbar, Director of Cognitive and Evolutionary Anthropology at Oxford University, concludes that the cognitive power of the brain limits the size of the social networks that any one species can develop. Drawing upon his study of the brain sizes and social networks of primates, Dr Dunbar suggests that the size of the human brain permits the formation of stable networks of 150 people (see page 4).

His argument is that, in a context where meaningful relationships require a certain investment of time as well as emotional and psychological capital, there are sociological and anthropological limits to the number of people whom we can know personally, trust and feel emotional affinity for. In practice, the sizes of a broad range of social groupings have been shown to conform to the “Dunbar Number”, from Neolithic villages and military units from Roman times to the present (Harvard Magazine 2010) to the average number of friends people have on Facebook (New York Times 2010) and even the average number of Christmas cards households send out every year (Bloomberg Businessweek Technology 2013).

This research has fuelled a trend towards smaller online social networks. Path, a mobile social networking application established in 2010, explicitly limits the number of friends users can add to 150 (based on the assumption that people generally have 5 best friends, 15 good friends and 50 close friends and family). As of September 2012 the Path network had expanded to over 3 million users (CNET 2012). In November 2010, South Korean firm VCNC launched a mobile social networking application, “Between”, which offers a private online space for couples to share photographs, memories and chat in real time. In January 2013 VCNC secured $230 million to grow its business internationally after reaching 2.35 million downloads (The Next Web 2013). Other mobile social networking applications such as Storytree and Familyleaf have been established to provide private online networks for family members. The question remains: will this trend contribute to denser, more meaningful online social exchanges, or divide the web into introverted and fragmented social enclaves?

Hyper-connectivity - Challenges and Opportunities

Advances in Internet connectivity and social penetration have made access to information easier and cheaper, whilst facilitating communication, organisation and collective action. However, the same technology that assists charity fundraising, civic political participation and corporate accountability also has the capacity to empower cyber criminals and terrorist/extremist networks. Without the evolution of interoperable and user-friendly technical regimes to support online trust, secure authentication and identification at national and international level, the hazards of the latter set of behaviours risk offsetting the benefits of the former.

A 2007 report on the Digital Ecosystem, looking at possible evolving scenarios to 2015, noted that the convergence of the media, telecoms and information technology industries has empowered individuals as “contributors to online communities and as creators and distributors of digital content and services” (see page 2). Indeed, in many ways 2012 represented a new high water mark for Internet activism (Economist, The New Politics of the Internet, January 2013), in a context where private citizens stood shoulder to shoulder with technology giants like Google to successfully derail the Stop Online Piracy Act (SOPA) by generating over 10 million petition signatures and 3 million emails directed at members of Congress (Forbes – Who Really Stopped Sopa and Why?). Later that year, a similar surge in online public activism (including web-coordinated physical protests involving thousands of Europeans – BBC News, January 2012) led to the defeat of the Anti-Counterfeiting Trade Agreement (ACTA) in the European Parliament in June 2012 (Guardian, June 2012). These developments demonstrate not only the capacity of the Internet to assist collective mobilisation and empowerment – but also the rising importance of the Internet in people’s lives, given that both the US and EU measures were seen as a threat to publicly accepted norms of online consumption and exploration. According to a survey of consumers in 13 countries by the Boston Consulting Group, 75% of respondents would give up alcohol, 27% sex and 22% showers for a year if refusing to do so meant losing access to the Internet (Economist, January 2013).

Of course the capacity of technology to empower can be channelled in both positive and negative ways. A 2012 Global Information Technology Report (see page 118) produced by the World Economic Forum notes that technologies (mobile texting, Facebook, Twitter and Blackberry Messenger services) which facilitated the assembly and coordination of opposition groups in Tahrir Square in Cairo during the 2011 uprising against Egyptian President Mubarak are essentially identical to those used to organise the network of destructive flash mobs during the riots which struck multiple cities in the UK during the summer of 2011.

The November 2012 edition of the International Journal on Computer Science and Engineering (see page 1816) reports the rapid proliferation and increased sophistication of websites and online forums used by terrorist and extremist groups for fundraising, recruitment, coordination and the distribution of propaganda materials. Professor Batil notes that the continuing evolution of the Internet to support the delivery of multimedia-rich content, user-generated content and community-based social interactions presents an “ideal environment” for the promotion of extremist ideologies and a virtual platform for the anonymous organisation of criminal activities such as money laundering and drugs trafficking (Ibid).

The 2013 Global Agenda Report, which draws upon specialist input from 1,500 global experts (from academia, business, civil society, government and international organisations), 900 of whom were brought together for a 2012 Summit on the Global Agenda in Dubai, contends that “a theme common to all these discussions is the increased role in technology in 2013 and its associated risks” (see page 4). In an environment where the risks of far-reaching infrastructural “cyber shocks” must be balanced against the potential benefits of networked smart cities, the experts alternately championed and doubted the benefits of an increasingly hyper-connected world for individuals and society (see page 6).

One key problem identified was the lack of legal, technical, economic or regulatory structures to determine how different parties share and control the flow of information and data (Marc Davis – Microsoft, see page 17), alongside a “lack of trust driving demand for disproportionate control” (Robert Madelin – European Commission – page 17). It was suggested that this should not be seen as a technical or technological issue, but instead as a fundamental question about the future structure of digital society, how we define and identify individuals within that society, and who has which rights to see and use information for certain purposes (Marc Davis – see page 16).

There was also simultaneous concern that, as we migrate towards defining the approaches and technical standards required for international interoperability and trust, a large-scale cyber-attack or data breach could lead to a crisis of public trust in the ability of governments and organisations to manage that data (Robert Madelin – see page 17). It was also contended that “today’s leaders have been trained in a world which no longer exists” and that the evolving threats posed by cyber criminals and cyber warfare are not adequately owned at the top level of large corporations and governments, which leads to an underweight collective response to those emerging threats (Ibid).

Demographic Trends

TREND: Populations in the developed world will continue to age, while the developing world grows younger and more urbanised leading to differing usage patterns and competing demands on the information environment. Hyperconnectivity expands the influence and role of migrants and diasporas.

Migration to Urban Areas in the Developing World

The World Business Council for Sustainable Development argues in its Vision 2050 report (see page 3) that substantial changes will be necessary in all countries to accommodate the projected 2 billion increase in the global population by 2050 – particularly as 98% of this growth is predicted to take place in developing and emerging economies. The 2012 World Economic Forum Global Information Technology Report (see page 114) notes that while increases in Internet connectivity and the availability of online content and services will support future economic growth in remote or rural areas, demographic studies indicate that large-scale migration to cities and metropolitan areas continues to be a defining global trend.

The 2011 United Nations World Urbanization Prospects study (see page 4) forecasts that the world’s urban population will reach 6.3 billion by 2050 (up from 3.6 billion in 2011). Most of the projected growth in the world’s population will be concentrated in the cities of the developing world. As a consequence, the 21st century is likely to see an expanding number of megacities in Asia and Africa with over 10 million inhabitants (see page 5).

This trend will see millions of people aggregating in densely populated and rapidly expanding cities in the developing world, which will generate significant logistical and infrastructural challenges associated with the administration of water, power and shelter (Evaluation of Spatial Information Technology Applications for Mega City Management, University of Mainz, 2009, page 1). In the context of these challenges, hyper-connected, technology-assisted solutions, both in the management of urban infrastructure and in the delivery of government services and healthcare, could play a pivotal role in enhancing living standards for residents of these sprawling conurbations (Global Information Technology Report 2012, page 114). The US National Intelligence Council’s 2012 report (see page ix) argues that information technology-based solutions to maximize citizens’ economic productivity and quality of life while minimizing resource consumption and environmental degradation will be critical to ensuring the viability of megacities.

An Ageing Population in the Developed World

According to 2012 figures released by the UN on Population Ageing and Development, by 2050 the number of people worldwide aged 60 years or over will increase to 2 billion, outnumbering children (0-14 years) for the first time in human history. Based on declining birth rates and rising life expectancies, the OECD predicts that by 2050 4% of the world’s population (and 10% of the population of OECD nations) will be over 80 years old (OECD 2011, page 62). By 2030 the European Union is expected to be home to 30% of the global population aged over 65 (European Commission, The World in 2025, page 9).

Given that the percentage of the population active in the labour market is one of the key drivers of future economic growth, an ageing population will pose challenges for the growth prospects and world market competitiveness of many advanced economies (speech by a member of the ECB Executive Board, 2010). It is also suggested that demographic decline and a rising elderly population will compel governments and employers to maximise the contributions of new technologies to growth whilst placing a greater emphasis on retraining and lifelong learning and the recruitment of groups with lower workforce participation (RAND 2004, page 1).

A 2011 paper from the Harvard Program on the Global Demography of Ageing identifies a further trend – the “compression of morbidity” (see page 2). This describes the process by which technological and medical advances, combined with healthy lifestyles have both increased longevity, but also compressed the so called “morbid years” (the period during which the elderly lose functional independence through mental and physical deterioration) into a smaller part of people’s lifecycles. This means that significant numbers of employees will be able to work productively into later life – particularly when this work depends on problem solving, communication and collaboration as opposed to manual labour.

Decentralised and flexible working patterns such as telecommuting (2011 report from Japanese Ministry for Communications, page 3), alongside advances in networked telehealth and telecare systems (see Digital Agenda Action 78), and the emergence of progressively more intuitive user interfaces (such as those offered through touch screen and tablet computing – The Computer Journal 2009, page 847) will all enhance the capacity of the elderly to remain economically active for longer. In addition, the rising proportion of those over 60 in the developed world will lead to an increasing amount of digital content and services being directed at this target market (Harvard 2011, page 9).

The Role of Diasporas Increases in a Hyper-connected World

According to the European Commission, in 2025 there will be nearly 250 million migrants, with 65% of these communities living in the developed world. There is evidence to suggest that these diaspora communities are harnessing advances in information and communication technologies to develop online communities and networks which are becoming of increasing strategic importance in the development arena (USAID Report 2008, page 2). Digital diaspora networks also have the capacity to offset the negative effects of the flight of human capital from their countries of origin by facilitating knowledge and technology transfer between the diaspora and their homelands (Diaspora Knowledge Flows in the Global Economy, 2010, page 1). A 2010 study by the University of Bergen identified that digital diasporas offer a forum for ongoing online historical debates or “web wars” between Poland, Russia and Ukraine (see page 2). A 2012 paper from the University of New Jersey on the Korean diaspora community in the US demonstrated that virtual environments helped users reconnect with their home country and fostered a less essentialist perception of ethnic identity, based on transnational ties and hybrid cultural practices (see page iii).

Open Education Resources and the Rising Importance of Non-Formal and Informal Learning

TREND: Rising impact of online education resources (including open access to scholarly research and massive open online courses) combined with the emergence of new media and information literacy skills offer flexible non-formal and informal skill accumulation pathways

Learning one set of skills at school, a vocational/technical college or at university is no longer sufficient to equip people with the knowledge and expertise they will require for the duration of their working lives (2007 OECD Policy Briefing, page 1). The combined pressures of an increasingly globalised international economy and a consistently iterative, rapidly changing technological environment mean that individuals need to continually upgrade their skills and knowledge throughout their adult lives (page 2).

The rising importance of non-formal and informal learning fundamentally stems from the recognition that in reality “…people are constantly learning everywhere and at all times” (OECD – Recognition of Non-formal and Informal Learning). Indeed, few people go through a single day of their lives which does not involve a step towards the acquisition of additional skills, experience, knowledge or competences. Furthermore, for those outside the formal education system (including disadvantaged groups such as early school leavers, the unemployed, adults not in formal education or training, and the elderly) this form of learning is arguably far more important, relevant and significant than the kind of learning that occurs in formal settings (Ibid).

Indeed the reason why non-formal and informal learning has become increasingly visible on policy-making agendas is the acknowledgement that these flexible routes to learning represent a potentially rich source of human capital, harnessing resources which might otherwise lie dormant or underutilised. The growing popularity of proposals to increase government recognition of non-formal and informal learning pathways is based upon the realisation that such recognition makes this human capital more visible and more valuable to society at large (OECD, Pointers for Policy Development, 2012, page 1).

In light of the current challenges facing the EU in terms of rising levels of youth unemployment, skill shortages and an ageing population, it is perhaps unsurprising that policy-makers are progressively seeing non-formal and informal learning approaches as a means of unlocking significant reserves of under-used human capital. In December 2012 the Council of the European Union issued a Recommendation (see page 398/4) which recognised the importance of non-formal and informal learning pathways in engaging with disadvantaged target groups including the young, the unemployed and the low skilled – and called upon all EU Member States to make arrangements for the validation of non-formal and informal learning by 2018.

It is worth noting that this perspective is not unique to Europe. A 2010 study by Patrick Werquin, which surveyed non-formal and informal learning practices across 22 countries, contended that demographic decline in particular has forced many countries around the world to reconsider their strategies for creating and identifying human capital (see page 5).

In conjunction with existing trends towards lifelong learning and the promotion of non-formal and informal learning opportunities, the increasing availability of online Open Education resources will continue to have a substantial impact on the information environment. A 2011 UNESCO report on Open Education Resources claimed that there has been an explosion in the availability of online educational material (see page 12), fuelled by the collective sharing of knowledge as a consequence of growing numbers of connected people and the proliferation of web 2.0 technologies (see page 30). In particular, Appendix 5 (see page 65) provides a useful inventory of the Open Education resource repositories available in the sphere of higher education.

A 2012 report by JISC, “Learning in a Digital Age”, noted that e-portfolios, blogs, wikis, podcasting, social networking, web conferencing and online assessment tools are increasingly being employed alongside virtual learning environments to deliver “a richer, personalised curriculum to diverse learners” (see page 9). In recognition of these prevailing educational trends, in August 2012 the European Commission launched a proposal (see page 1) for a European initiative on opening up education, which recognised the exponential growth in online education resources and their future role in diminishing barriers to education and promoting more flexible and creative ways of learning.

In addition to the plethora of free educational courses available online, a further development of this trend can be observed in the arrival of Massive Open Online Courses (MOOCs). In January 2012 Sebastian Thrun, a computer science professor at Stanford University, launched Udacity. By October this online education platform had raised $15 million from investors and boasted 475,000 users (Economist December 2012). In April 2012 two of Mr Thrun’s former colleagues launched Coursera with $16 million of venture capital. As of December 2012 Coursera had signed up over 2 million students in partnership with 33 universities worldwide (Ibid). In response to these developments, Harvard and MIT announced their intention to devote $60 million towards developing their own equivalent online course platform, edX (Harvard Magazine, July 2012).

Finally, a further trend which has steadily built up considerable momentum is the practice of granting Open Access to the outputs of publicly funded research, generally in the form of peer-reviewed journal articles and papers. Opening up this knowledge to free online access allows the research to reach wider audiences and gain greater public visibility, whilst the agencies funding it see an enhanced return on their investment (JISC).

This approach is increasingly being embraced by governments. In June 2012 the Working Group on Expanding Access to Published Research Findings, chaired by Dame Janet Finch, published its report, which claimed that the future lay with open access publishing and that the UK should embrace and recognise this change (Guardian, June 2012). In July the UK Government accepted the Finch Report recommendations and Research Councils UK announced that, from 1 April 2013, all peer-reviewed research articles and conference proceedings arising from research it funds must be made open access (RCUK – Press Release). In November 2012, Universities and Science Minister David Willetts announced £10 million of additional funding to aid research institutions’ transition to, and compliance with, this new open access policy (BioMed Central).

Also in July 2012, the European Commission issued a proposal to support open access to publications and data arising from research funded by Horizon 2020 (the science/research component of the EU 2020 Growth Strategy). In the United States, the Department of Health and Human Services mandates free public access to the published results of all research funded by the National Institutes of Health (see NIH Public Access Policy) and requires peer-reviewed journal manuscripts to be deposited in the digital archive PubMed Central.

A further indication of evolving attitudes in this area was the launch in 2012 of the Cost of Knowledge petition, which campaigned for a boycott of the journals published by Elsevier. By February 2012 the petition had been signed by over 3,000 academics, including several award-winning mathematicians (Guardian, February 2012), and it has since amassed more than 13,000 signatures. Shortly after the boycott began, Elsevier announced (Slate, February 2012) that it was withdrawing support from draft US legislation (the Research Works Act) designed to repeal current open access policies and block similar policies being adopted by other US agencies (Harvard Cyber Law); the bill subsequently failed to be enacted during the 112th Congress (see GovTrack).

It would seem that such attempts to lock publicly funded research away behind commercial paywalls have led to something of a backlash against the academic publishing industry. In January 2012, writing in the Guardian in response to the industry-supported Research Works Act, Mike Taylor said that “academic publishers have become the enemies of science” and that this was the moment where they gave up all pretence of being on the side of scientists. The suicide in January 2013 of Internet activist Aaron Swartz in his New York apartment, while he faced charges carrying up to 35 years in prison and a $1 million fine for allegedly extracting and sharing 4.8 million documents from JSTOR (a fee-based repository of scholarly journals), is likely to remain in the public consciousness for some time (Economist, January 2013). Later in January, the hacker-activist group Anonymous hijacked the website of the US Sentencing Commission and launched a further attack on Massachusetts Institute of Technology websites in protest at their treatment of Mr Swartz.

The trend towards open access publishing will also have significant implications for the developing world. In his 2012 Washington College of Law Research Paper, “Open Access Scientific Publishing and the Developing World” (see pages 43-44), Jorge Contreras argues that in a context where peer-reviewed scientific journals currently yield between 1.2 and 1.6 million articles per year, sharing is critical to the advance of science, and that improvements to health, infrastructure and industry also flow from the capacity of scientists to share and build upon each other’s discoveries.

Economic Trends

This category covers a selection of economic trends including:

  • The global middle class will grow to exceed 1 billion over the next decade (with the majority of this growth in Asia) creating a new generation with access to information, content and services.
  • As economic incentives put increasing pressure on developing world governments to connect the next billion Internet users, issues surrounding the affordability of broadband access and the need for further investment in infrastructure remain significant obstacles.
  • Increasing levels of technological standardisation and interoperability (potentially coupled with pressure from regulators) are likely to result in the long-term disintegration of many vertically integrated business models which ring-fence consumers within proprietary walled gardens. Opportunities will potentially arise for new proprietary cross-industry/horizontal value chains, subject to the regulatory approaches of governments.
  • The explosion in data traffic will require network operators to invest in upgrading their communications infrastructure at the same time as their core revenue streams and profits are being squeezed by an expanding range of data-hungry “over-the-top” services and applications (e.g. Skype, FaceTime, etc.).

The Emergence of a Global Middle Class

TREND: The global middle class will grow to exceed 1 billion over the next decade (with the majority of this growth in Asia) creating a new generation with access to information, content and services

Studies by the Brookings Institution (2010), The Boston Consulting Group (2010), the US National Intelligence Council (2012) and the McKinsey Global Institute (2012) predict that over the next decade the global middle class will grow to exceed one billion people in the developing world. The majority of this growth will take place in Asia, which according to the OECD (OECD Yearbook 2012) is expected to account for 66% of the global middle class population and 59% of middle class consumption by 2030 (in comparison to 28% and 23% in 2009). This trend, in conjunction with rising levels of Internet access and connectivity in the developing world, will provide a new generation of global consumers with access to information, content and services. According to the Boston Consulting Group (2010 report, page 17), by 2015 Brazil, Russia, India and China will have 1.2 billion Internet users. As well as representing a potential growth engine/market for products and services, the tastes, preferences and political/economic aspirations of this new middle class will represent a tectonic shift in the demographic and cultural landscape of the Internet.

A 2012 report by the European Union Institute for Security Studies (see page 29) suggests that the emergence of a global middle class is likely to narrow material and cultural divides and foster the evolution of a global set of values which are more inclined towards the promotion of democracy and fundamental rights. Global advocacy networks which support human rights, such as Human Rights Watch and Amnesty International, are already benefiting significantly from the amplifier effect of new information and communications technologies (see page 31). In his May 2011 report (see page 4) to the General Assembly, the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression highlighted the role of the Internet as one of “the most powerful instruments of the 21st century for increasing transparency in the conduct of the powerful, access to information, and for building active citizen participation in building democratic societies”.

The 2012 US National Intelligence Council report, Global Trends 2030 (see page 11), offers a case study on the rising use of online social media by women in Muslim countries. The report argues that despite some data correlating online access with radicalisation, “indications of female empowerment and solidarity are far more plentiful”, and that women are increasingly using online communities to reach beyond their everyday social networks into “safe spaces” to discuss women’s rights, gender equality and the role of women in Islamic law. As participation in online forums is closely linked with both income and literacy, the NIC predicts that as the global middle class expands, female online participation will increase, with potentially significant repercussions for societies and governments.

The 2012 report by ESPAS, Global Trends 2030 – Citizens in an interconnected and polycentric world, suggests that the convergence of a rising global middle class and new technologies will narrow the global digital divide whilst ensuring that the “citizens of 2030 will want a greater say in their future than those of previous generations” (see page 12). While the report predicts greater citizen empowerment in the developing world as a consequence of increased access to information, it also notes that globalisation and interdependence can foster feelings of frustration and impotence in the face of world events beyond the influence of many individuals and governments (see page 47). In this context the expectations gap between increased access to information and on-going socio-economic inequalities could also fuel a rise in identity politics and nationalism (see page 155).

Broadband - the business case for reaching the third billion and consequences for information access

TREND: As economic incentives put increasing pressure on developing world governments to connect the next billion Internet users, issues surrounding the affordability of broadband access and the need for further investment in infrastructure remain significant obstacles

According to the International Telecommunication Union’s 2012 report Measuring the Information Society (see page 7), the number of Internet users worldwide doubled in the five years to 2010 to reach 2 billion people. With 70% of the global population yet to experience the transformative benefits of broadband Internet access, the focus has intensified on how to reach the third billion Internet users.

Research indicates that broadband penetration increases dramatically once its annual cost falls below 3% of annual family income (World Economic Forum 2012, page 42). Whilst this threshold has largely been met in the developed world, there are 30 countries in which the cost of broadband exceeds 50% of average annual family income (Broadband Commission Annual Report 2012, page 42). In Africa, fixed broadband Internet access costs on average nearly three times the monthly average income per person (ITU 2012, page 4). Significant progress can be made if governments in the developing world can successfully transition from taxing technology purchases and infrastructure projects towards incentivising technology adoption as a multiplier of jobs and economic growth. A 2012 report by the World Economic Forum suggests that Brazil, Russia, India, China, Turkey and Indonesia could expand their Internet user base by 860 million by reducing the cost of broadband by 50% (World Economic Forum 2012, page 83).
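
To make the affordability arithmetic cited above concrete, the short Python sketch below applies the 3%-of-annual-income threshold to a hypothetical tariff and income; the figures and the function name are illustrative assumptions rather than values taken from the reports referenced in this section.

    # Illustrative sketch only: hypothetical tariff and income figures.
    def broadband_is_affordable(annual_cost, annual_income, threshold=0.03):
        """Return True if broadband costs less than the given share of annual income."""
        return annual_cost < threshold * annual_income

    annual_cost = 10 * 12        # a hypothetical $10/month tariff = $120 per year
    annual_income = 2400         # a hypothetical $2,400 annual family income

    print(broadband_is_affordable(annual_cost, annual_income))        # False: 5% of income
    print(broadband_is_affordable(annual_cost * 0.5, annual_income))  # True: a 50% price cut brings it to 2.5%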

In 2011 the ITU reported that the cost of the “ICT Price Basket” (covering tariffs for fixed/mobile telephony and fixed broadband services) continued to fall in global terms – with the cost of access to fixed broadband in particular falling by over 50% in the previous two years (see page iii). The ITU welcomed this trend, but emphasised that broadband is still too expensive in many developing countries, where it can exceed 100% of average monthly income (in comparison to 1.5% of average monthly income in the developed world) (Ibid). In its subsequent 2012 report (see page 5) the ITU noted the encouraging growth of mobile and wireless broadband services in developing countries, but underlined that mobile broadband cannot serve as a substitute for fixed broadband, which continues to provide higher speeds, capacity and quality of service (whilst requiring much larger investments in infrastructure).

The World Economic Forum offers a number of policy recommendations which could actively address future levels of broadband adoption (page 87). Its report suggests a mixture of (1) government investment and regulatory incentives coordinated with national broadband plans with clearly defined ICT-related objectives; (2) increasing the availability of low cost pre-paid or subscription broadband packages from telecommunications providers, including entry level packages with restricted speed/data caps; and (3) creative approaches such as pre-loading data onto devices and maximising the accessibility of free public Internet access. In the case of the second recommendation, the adoption in 2009 by Safaricom (see page 84) of a $5 pre-paid service with a 200MB data cap is credited with helping to double purchases of personal computers in Kenya between 2009 and 2010 (average PC market growth in Africa for that period was 3%).

The migration from vertically integrated to horizontally integrated business models and systems

TREND: Increasing levels of technological standardisation and interoperability (potentially coupled with pressure from regulators) are likely to result in the long-term disintegration of many vertically integrated business models which ring-fence consumers within proprietary walled gardens. Opportunities will potentially arise for new proprietary cross-industry/horizontal value chains.

The long term trend towards increasing standardisation and interoperability (potentially coupled with pressure from regulators) is likely to result in the disintegration of many vertically integrated business models which ring-fence consumers within proprietary walled gardens (e.g. Amazon, Apple, etc.). At the same time, opportunities will potentially arise for the development of new proprietary cross-industry/horizontal value chains which will have new implications for consumer access to content and information.

The 2007 World Economic Forum report on the Digital Ecosystem: scenarios to 2015 (see page 8) also proposed some thought-provoking questions about the potential trends likely to define the future of digital society.

  • The first trend-related question was whether future digital information/content distribution and processing systems (alongside the aggregation of digital products and services) would be primarily controlled and led by industry – or organically shaped by communities and individuals. This question also focused on whether future innovation and the commercialisation of valuable digital assets would be controlled and led by industry – or whether communities would serve as incubators for innovation with individuals successfully commercialising their own digital products and services.
  • The second trend-related question was whether the digital ecosystem would evolve towards being increasingly closed or increasingly open in its future operation.  An open system would be characterised by the interconnectedness of networks, platforms and devices supported by interoperability and common standards, a broad constellation of international actors and a supportive regulatory environment. In contrast, a closed system would be characterised by proprietary networks, platforms and devices operating within closed silos, vertically integrated content, services and conduits, maintained by a restrictive regulatory environment.

 

Figure 4 (see page 9) provides a helpful diagram which demonstrates the three potential outcomes from these trends acting together. These three outcomes are outlined below:

  • Safe Havens (closed & industry led digital ecosystem) – in an unstable geopolitical environment, high-profile cyber attacks trigger concerns about online security and a clamour from consumers, businesses and governments for virtual safe havens. Industry responds by vertically integrating to create secure walled environments which provide all digital services. This leads to a small number of government-approved digital services conglomerates offering services based on proprietary platforms which lock in users. Value lies in creating bundled network and content packages.
  • Middle Kingdoms (open & industry led digital ecosystem) – in a stable geopolitical environment, consumers demand open and interoperable products and services. Industry and government co-regulation establishes common standards on privacy and security. The creation of secure identity banks ensures the protection of personal data. Governments support open systems and competition. The network is dominated by a few powerful intermediaries (middle kingdoms) between the users and a fragmented market of specialised providers and offerings. These intermediaries use powerful algorithms that find information and contextualise the results to individual needs. Value is captured by intermediaries and content creators.
  • Youniverse (open & organic community led digital ecosystem) – in a stable geopolitical environment, users want to take control of their digital experience. New organisational structures and grassroots communities grow in power while distributed innovation models become mainstream for products, services and business models. Established businesses need to find ways to engage with this new digital ecosystem by attracting communities. Traditional aggregators are superseded by personal digital agents. The joint actions of all players lead to a new equilibrium based on interoperability, open systems and common standards. Open source software and collaborative community structures increase in sophistication.

Supply and Demand - Net Neutrality versus Traffic Management

TREND: The explosion in data traffic will require network operators to invest in upgrading their communications infrastructure at the same time as their core revenue streams and profits are being squeezed by an expanding range of data-hungry “over-the-top” services and applications (e.g. Skype, FaceTime, etc.)

OECD figures show that Internet traffic has risen by 13,000% in the last decade (World Economic Forum 2012, page 59), with more digital information created between 2008 and 2011 than in all previously recorded history. In the developed world this explosion in traffic has put network operators under pressure to implement expensive upgrades to their communications infrastructure at the same time as their core revenue streams and profits are being squeezed by competing “over-the-top” services (e.g. Skype, FaceTime, etc.).

In a context where there is no limit to the number and range of new data-hungry services and applications Internet companies can offer consumers, the incentives for facilities-based Internet service providers to introduce traffic monitoring, inspection and network management regimes to cope with this new traffic will only increase with time. These monitoring/management systems can serve to optimise network performance and protect consumers from online threats – but they also raise questions about the security and privacy of consumer data. They also threaten to undermine the principle of network neutrality, whereby network operators refrain from discriminating between different types of services, content and applications transmitted by their networks/infrastructure.

Technological Trends

This category looks at a selection of technological trends including:

  • Vast and expanding data sets acquired by governments and companies through their interactions with Internet users (in conjunction with data generated by scientific research, surveillance and smart object sensors), combined with an accelerated capacity to process and analyse information, will expand the possibilities for innovative public/commercial services whilst simultaneously enabling sophisticated profiling of individuals and social groups.
  • Mobile will become the primary platform for access to information, content and services, which will empower new socio-economic groups through transforming access to healthcare, education and government/financial services.
  • Advances in artificial intelligence will enable a) the next generation of web browsers to move beyond key word analysis and evaluate the specific content of websites/pages (the semantic web); b) networked devices to combine speech recognition, machine translation and speech synthesis to support real time multilingual voice translation; and c) cloud-based, crowd-sourced translation checking of webpage text.
  • The capacity of 3D printing technology to create information-based physical objects using digital blueprints will revolutionise the concept of “access to information”.

Big Data

TREND: Vast and expanding data sets acquired by governments and companies through their interactions with Internet users (in conjunction with data generated by scientific research, surveillance and smart object sensors), combined with an accelerated capacity to process and analyse information, will expand the possibilities for innovative public/commercial services whilst simultaneously enabling sophisticated profiling of individuals and social groups

The relentless flow of mouse clicks, touch screen interactions, messages, user generated content, credit card transactions, completed online forms and search queries (and more) have all contributed to the generation and acquisition of the vast data sets currently held by governments and companies through their interactions with Internet users. According to McKinsey (see page vi), 15 out of 17 industry sectors in the US now hold more data per company than the US Library of Congress (which held 235 terabytes of data as of April 2011). In addition, the collection of scientific research and surveillance data, coupled with the proliferation of networked devices and smart objects (see the Internet of Things below), has led to a further expansion of these huge stockpiles of data. Rapid improvements in the capacity of technology to process and analyse this data (often in real time) have created new economic opportunities (see page 4). A 2012 report from Intel (see page 4) suggested that innovations in data platforms and analytics enable companies to mine new sources and volumes of information (such as web data and social media data) that were previously too unwieldy or unmanageable to process effectively.

In March 2012 the International Data Corporation forecast that global revenues from the capture and exploitation of big data would reach $16.9 billion by 2015 (IDC 2012). A 2011 report from McKinsey (see page 8) projects that currently existing data sources could add €250 billion of annual value to Europe’s public sector administration and $300 billion of annual value to US healthcare. However, the benefits of increasingly intelligent and automated data collection and processing on such an unprecedented scale must also be balanced against concerns about the privacy and security of personal information. The aggregation of multiple data sources could allow organisations and governments to build sophisticated profiles of individuals without their knowledge or consent. A consumer backlash against this trend could undermine the future availability and legality of such potentially commercially/socially valuable processes.

Building upon the trend toward big data, the number of smart objects equipped with sensors and the capacity to communicate online is expanding at an exponential rate. According to estimates, the number of networked devices exceeded the number of people on the planet for the first time in 2011 (World Economic Forum 2012, page 47). Industry projections suggest that by 2050 the scale of automated machine-to-machine traffic could mean that connected devices outstrip the number of connected human beings by six to one (Ibid). The exploding quantities of sensory and environmental data produced by these devices (ranging from pacemakers and tumble driers to street lights and vending machines) – coupled with the increasing capacity to rapidly administer and analyse large data sets – will facilitate the development of complex automated services and smart objects ranging from everyday appliances to infrastructure.

Mobile becomes the primary platform for access to information, content and services

TREND: Mobile will become the primary platform for access to information, content and services, which will empower new socio-economic groups through transforming access to healthcare, education and government/financial services.

Mobile has already become the primary means of accessing the Internet across the world. Increasing speeds and adoption rates of mobile broadband will transform access to healthcare and education and empower new socio-economic groups. Since 2010 mobile broadband subscriptions have overtaken fixed broadband subscriptions, and as of January 2012 global mobile broadband subscriptions served over a billion users (World Economic Forum 2012, page 67-68). Forecasts suggest that by 2016 more than 80% of broadband subscriptions will be mobile, with a further 1 million connections being added every day, fostered by the rollout of 3G and 4G technologies (Ibid).

According to Cisco’s Global Mobile Data Traffic Forecast 2011-2016 (see page 3), by 2016 there will be over 10 billion mobile devices connected to the Internet, with the Middle East and Africa experiencing a 104% increase in mobile data traffic (followed by Asia and Eastern Europe at 84% and 83% respectively). A 2012 report from McKinsey (see page 41) notes that more than 50% of global Internet users are now in developing countries and their number is projected to grow at five times the rate of users in the developed world. Most of this growth will be driven by mobile Internet access, in a context where 70% of Egyptian Internet users, 59% of Indian Internet users and 50% of Nigerian Internet users primarily access the web through their mobile phones (see page 42).

This projected expansion of mobile Internet access, alongside the expanding availability of mobile content, applications and services, will transform the lives of millions of people across the globe. In the field of health, mobile broadband will expand public access to information and reduce costs and inefficiencies whilst facilitating remote care and communication with medical professionals – with further implications for the management of chronic disease, elderly care and the training of health workers (Qualcomm – Healthcare). A recent case study from Qualcomm shows how the Wireless Heart Health Project in China is distributing 3G smartphones equipped with cardiovascular monitoring sensors to under-resourced community clinics, which transmit patient heart data to heart specialists in Beijing who can then provide real time feedback to patients. In a context where the World Health Organisation has projected that China stands to lose an estimated $558 billion between 2005 and 2015 from cardiovascular diseases (World Economic Forum 2012, page 72), the benefits of mobile-enabled health services, particularly in remote or underserved areas, will be substantial. Further case studies covering Egypt, the Philippines and South Africa are available here. A 2010 report by McKinsey and GSMA, “mHealth: a new vision for healthcare” (see page 5), estimates that remote monitoring through mobile devices could save $175-$200 billion in annual healthcare costs for managing chronic diseases in the OECD and BRIC (Brazil, Russia, India and China) countries.

In the field of education, previously identified trends towards online learning and MOOCs will be significantly amplified by mobile Internet access. A report from the European Commission’s e-learning portal (see page 2) claims that mobile learning (or m-learning) was at the top of the agenda of leading e-learning conferences in 2012 in London, Sydney, Germany and Switzerland. A report forecasting trends in m-learning during 2010-2015 (see page 6) predicts that the global market will rise to $9.1 billion by 2015, with the highest growth rates in Africa, Latin America and Eastern Europe. A 2011 study by the Mastercard Foundation and GSMA, focused on Ghana, Morocco, Uganda and India, emphasised the “rich promise” of m-learning in a context where 75 million young people in the developing world are unemployed and many lack access to basic education and employment opportunities (see page 2).

In a survey of 1,200 young people across these countries, 63% believed they could learn through even a basic mobile device, with 39% most interested in m-learning services which develop their professional skills, and 27% most interested in language lessons (see page 5). A 2011 report from Alcatel-Lucent, “M-Learning: A Powerful Tool for addressing Millennium Development Goals” (see page 7), highlights that while only 25% of homes in developing countries have computers, one of the most important benefits of m-learning is “its inherent capability of reaching people through devices which before long will be in the pockets of every human being on the planet”. The study also stressed that through m-learning students were able to access the most up-to-date content from anywhere through a range of video, audio and text-based applications which can be repeatedly reviewed to increase comprehension and understanding (Ibid).

Advances in Artificial Intelligence

TREND: Advances in artificial intelligence will enable a) the next generation of web browsers to move beyond key word analysis and evaluate the specific content of websites/pages (the semantic web); b) networked devices to combine speech recognition, machine translation and speech synthesis to support real time multilingual voice translation; and c) cloud-based, crowd-sourced translation checking of webpage text

Research continues to enable the next generation of search engines and web browsers to evaluate and assess the specific content of pages/sites (as opposed to simply reading metadata and tags or identifying keywords). If implemented effectively, the semantic web would revolutionise the efficiency of search, with a correspondingly positive impact on access to information and research productivity. However, this same technology could have negative implications in relation to tracking, censorship and the monitoring/blocking of content.

In a context where three quarters of the global population (and just under three quarters of global Internet users) do not speak English, language still represents a significant barrier to accessing information, as English remains the leading language on the web, closely followed (and soon to be surpassed) by Chinese (Internet World Stats). However, recent advances in combining speech recognition, machine learning, machine translation and speech synthesis technology may have the capacity to support real time multilingual voice translation via any Internet enabled device within the near future.

Despite the perennial problems of adapting to slang, regional accents and culturally specific idioms and concepts – pioneering approaches using deep neural networks (Microsoft) and cloud-based, crowd-sourced sentence checking (Google) are showing significant promise. In conjunction with related developments in webpage translation methods, this trend has the potential to dissolve many of the barriers which limit access to multicultural content, and has particularly exciting implications for visually impaired Internet users.
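
As a purely conceptual illustration of the voice translation pipeline described above (speech recognition, then machine translation, then speech synthesis), the short Python sketch below chains three placeholder stages; the function names and data structure are hypothetical stand-ins and do not refer to any real product, API or library mentioned in this section.

    # Conceptual sketch of a real time voice translation pipeline:
    # speech recognition -> machine translation -> speech synthesis.
    # All stages are hypothetical placeholders for real speech and
    # translation back-ends (cloud-based or on-device models).
    from dataclasses import dataclass

    @dataclass
    class AudioClip:
        samples: bytes      # raw audio captured from a microphone
        language: str       # e.g. "zh" for Mandarin, "en" for English

    def recognise_speech(clip: AudioClip) -> str:
        """Speech recognition stage: audio in, text out (placeholder)."""
        raise NotImplementedError("replace with a real speech-to-text back-end")

    def translate_text(text: str, source: str, target: str) -> str:
        """Machine translation stage: text in one language, text in another (placeholder)."""
        raise NotImplementedError("replace with a real translation back-end")

    def synthesise_speech(text: str, language: str) -> AudioClip:
        """Speech synthesis stage: text in, audio out (placeholder)."""
        raise NotImplementedError("replace with a real text-to-speech back-end")

    def translate_utterance(clip: AudioClip, target_language: str) -> AudioClip:
        """Chain the three stages so a spoken phrase is returned as speech in another language."""
        text = recognise_speech(clip)
        translated = translate_text(text, source=clip.language, target=target_language)
        return synthesise_speech(translated, language=target_language)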

3D Printing - Access to Physical Objects Created by Information

TREND: The capacity of 3D printing technology to create information-based physical objects using digital blueprints will revolutionise the concept of “access to information”.

Widespread adoption of 3D printing technology will revolutionise the concept of “access to information” given its capacity to create information-based physical objects using digital blueprints and designs. Alongside its transformative impact on manufacturing (by drastically increasing efficiency and reducing costs) there are also other significant implications in terms of increased counterfeiting/intellectual property infringement.

Some argue that in light of the dramatic consequences for music copyright as a result of the convergence of the Internet, digitised music and media players, 3D printing technology may have similar implications for artistic copyright, design right, trademarks and patents, but in a rather more diverse legal framework (The Intellectual Property Implications of Low-Cost 3D Printing, 2010, page 29).

A 2011 study by the Atlantic Council, “Could 3D Printing Change the World”, contended that 3D printing could introduce both a manufacturing revolution and a fundamental shift in the global economy (see page 12). The report identifies a broad range of potential impacts, including increased productivity in ageing societies (as a result of reduced labour requirements and health costs), low-cost on-demand local production of products in the developing world (reducing transport costs and waste), the reduction of global economic imbalances (the localisation of production limits reliance on imports), the creation of new industries and professions, as well as trillions of dollars of new income for businesses based on both innovative products and services and the legal fees associated with intellectual property dispute resolution services (Ibid).

The 2012 study from the US National Intelligence Council, “Global Trends 2030” (see page 87), adopts a positive yet cautionary stance:

New manufacturing and automation technologies such as additive manufacturing (3D printing) and robotics have the potential to change work patterns in both the developing and developed worlds. In developed countries these technologies will improve productivity, address labor constraints, and diminish the need for outsourcing, especially if reducing the length of supply chains brings clear benefits. Nevertheless, such technologies could still have a similar effect as outsourcing: they could make more low- and semi-skilled manufacturing workers in developed economies redundant, exacerbating domestic inequalities. For developing economies, particularly Asian ones, the new technologies will stimulate new manufacturing capabilities and further increase the competitiveness of Asian manufacturers and suppliers.

In May 2012 the charity techfortrade launched the 3D4D challenge offering a $100,000 prize for innovative projects which leverage 3D printing technologies that foster collaboration around social and economic issues in the developing world. The winning project (WOOF) enables waste plastic (from bottles for example) to be used as the raw material for 3D printing. This presents an opportunity to manufacture a wide range of low cost products from waste plastic including toilets and water collectors (Economist November 2012). Trials to address local issues in water and sanitation will begin in Mexico during 2013 in association with the NGO Water for Humans.