Social Trends

This category covers a range of social and educational trends including:

  • Behavioural advertising and personalised search optimisation contribute to the creation of balkanised “echo chamber communities” insulated from unfamiliar or alternative cultures and perspectives.
  • The size of the digital universe will continue to expand exponentially with information and content shaped by a kaleidoscope of social, political, corporate (and on occasion extremist) agendas.
  • Technology which makes access to information easier and cheaper while facilitating communication and collective action will support both positive outcomes (empowering individuals, increasing civic participation and corporate accountability) and negative outcomes (empowering cyber criminals and terrorist/extremist networks).
  • Populations in the developed world will continue to age, while the developing world grows younger, leading to differing usage patterns and competing demands on the information environment. Hyperconnectivity expands the influence and role of migrants and diasporas.
  • The rising impact of online education resources (including open access to scholarly research and massive open online courses), combined with the emergence of new media and information literacy skills, offers flexible non-formal and informal skill accumulation pathways.

Internet Balkanisation and Increasingly Automated Personalisation

TREND: Behavioural advertising and personalised search optimisation contribute to the creation of balkanised “echo chamber communities” insulated from unfamiliar or alternative cultures and perspectives

In 1995 Nicholas Negroponte, founder of the Massachusetts Institute of Technology Media Lab, published his book “Being Digital” in which, alongside multiple predictions on the future applications of technology, he introduced the concept of the “Daily Me”, a virtual newspaper which was customised for each individual subscriber (Making the Daily Me, Neil Thurman, page 2). Today it is manifestly evident that Mr Negroponte’s vision was a prescient one. 

A 2012 paper from the University of Pennsylvania (see page 2) claims that web personalisation (using statistical techniques to infer a customer’s preferences and recommend content suitable to them) has now become ubiquitous across multiple providers of online news, media and services. A 2011 study from City University (see page 5) surveyed the use of online personalisation across eleven national US and UK news websites and reported an increasing range of approaches including contextual recommendations (links to external content), geo-targeted editions (based on user location), aggregated targeted filtering (selections of news stories filtered by general user popularity) and profile based recommendations (based on data on user behaviour from registration or imported from social media sites). A 2012 article, Existing Trends and Techniques for Web Personalization, in the International Journal of Computer Science Issues (see page 433) reports that web personalisation has become an indispensable tool for both web-based organisations and end users to deal with content overload, and that most major Internet companies are implementing personalisation systems.
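
By way of illustration, one simple form of profile-based recommendation of the kind described above can be sketched as follows: articles are represented as topic vectors, a user profile is inferred from reading history, and unread items are ranked by their similarity to that profile. The topics, articles and weights below are invented for the purpose of the example and do not describe any particular provider's system.

    # Minimal sketch of profile-based recommendation (illustrative assumptions only):
    # infer a topic profile from what a user has read, then rank unread articles
    # by cosine similarity to that profile.
    from collections import Counter
    from math import sqrt

    ARTICLES = {
        "budget-analysis":   Counter(politics=3, economy=2),
        "eurozone-markets":  Counter(economy=3, business=2),
        "transfer-gossip":   Counter(sport=4, celebrity=1),
        "oil-spill-cleanup": Counter(environment=3, energy=2),
    }

    def profile_from_history(read_ids):
        """Sum the topic vectors of previously read articles into a user profile."""
        profile = Counter()
        for article_id in read_ids:
            profile.update(ARTICLES[article_id])
        return profile

    def cosine(a, b):
        dot = sum(a[key] * b[key] for key in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def recommend(read_ids, top_n=1):
        """Rank articles the user has not yet read by similarity to their profile."""
        profile = profile_from_history(read_ids)
        scored = [(cosine(profile, vector), article_id)
                  for article_id, vector in ARTICLES.items() if article_id not in read_ids]
        return [article_id for _, article_id in sorted(scored, reverse=True)[:top_n]]

    print(recommend(["budget-analysis"]))  # ['eurozone-markets'] – the economy-led story ranks first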

In Republic.com 2.0 (2007) Cass Sunstein argues that the Internet has a propensity to foster social fragmentation by encouraging individuals to sort themselves into deliberate enclaves of like-minded people and assisting them in filtering out unwanted or opposing opinions (referenced in Comparative Research in Law & Political Economy, Book Review by Peter S. Jenkins, page 1). Studying hyperlinking patterns across 1,400 political blogs, Sunstein reported that 91% of links were to like-minded sites, indicating that opposing opinions were rarely highlighted or acknowledged (interview with Salon Magazine, 2007). In a 2010 article for Scientific American, Tim Berners-Lee, founder of the World Wide Web, argued that social networking companies such as Facebook and LinkedIn shutting their users into online walled gardens could cause the Internet to be “broken up into fragmented islands”.

In his 2011 book The Filter Bubble, Eli Pariser contended that Facebook’s decision to change its filtering algorithm for status updates and newsfeeds (so that by default users would only see material from friends they had recently interacted with) had the unintended consequence of suppressing updates from friends who did not share that user’s political and social values (Wall Street Journal article, 2011). Pariser also argued that the increasing trend towards personalised search optimisation (Google uses 57 different signals to predict which search results will be displayed for different users) can produce unexpected outcomes (2011 Slate Magazine article). In the aftermath of the Deepwater Horizon oil spill in 2010, one user’s Google search for “BP” yielded a set of links on investment opportunities with BP, while another generated links providing information on the oil spill (2010 interview with Salon Magazine). Pariser suggested that this invisible and automated customisation of our web experience risks trapping individuals in personalised information bubbles which insulate them from uncomfortable or unfamiliar views, cultures and perspectives.
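
The feed-filtering effect Pariser describes can be illustrated with a simple recency-weighted rule: each update is scored by how recently the reader interacted with its author, and authors falling below a threshold silently drop out of the default feed. The sketch below is purely illustrative – the decay rate and threshold are assumptions chosen to show the mechanism, not a description of Facebook’s actual algorithm.

    # Illustrative recency-weighted feed filter (NOT Facebook's actual algorithm):
    # friends the reader has not interacted with recently fall below the threshold
    # and their updates are silently excluded from the default feed.
    from datetime import datetime, timedelta

    def interaction_score(last_interaction, now, half_life_days=30):
        """Exponentially decay a friend's score with time since the last interaction."""
        age_days = (now - last_interaction).days
        return 0.5 ** (age_days / half_life_days)

    def default_feed(updates, last_interactions, now, threshold=0.25):
        """Keep only updates whose authors score above the threshold."""
        return [update for update in updates
                if interaction_score(last_interactions[update["friend"]], now) >= threshold]

    now = datetime(2011, 6, 1)
    last_interactions = {"alice": now - timedelta(days=3), "bob": now - timedelta(days=120)}
    updates = [{"friend": "alice", "text": "new job!"},
               {"friend": "bob", "text": "thoughts on the election"}]
    print(default_feed(updates, last_interactions, now))  # bob's update is silently dropped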

Alongside Google, online retailers and media providers such as Amazon and Netflix have well-established personalised recommender systems which direct users to content that is likely to interest them based on previous choices and search histories. In a 2011 interview with the New York Times, computer scientist Jaron Lanier claimed that this trend has a tendency to cocoon users within a personalised echo chamber where more and more of what they experience online conforms to an image of themselves generated by software.
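
Recommenders of this kind are frequently built on item-to-item co-occurrence – “customers who chose this also chose…”. The minimal sketch below, using invented purchase histories, shows the basic mechanism by which previous choices steer users towards similar content.

    # Minimal item-to-item co-occurrence recommender (invented data, illustrative only):
    # count how often pairs of items appear in the same purchase history, then
    # suggest the items most frequently chosen alongside a given item.
    from collections import defaultdict
    from itertools import combinations

    histories = [
        {"scifi-novel", "space-documentary"},
        {"scifi-novel", "space-documentary", "telescope"},
        {"cookbook", "telescope"},
    ]

    co_counts = defaultdict(int)
    for basket in histories:
        for a, b in combinations(sorted(basket), 2):
            co_counts[(a, b)] += 1
            co_counts[(b, a)] += 1

    def also_chosen(item, top_n=2):
        """Items most frequently bought or viewed alongside the given item."""
        related = [(count, other) for (first, other), count in co_counts.items() if first == item]
        return [other for _, other in sorted(related, reverse=True)[:top_n]]

    print(also_chosen("scifi-novel"))  # ['space-documentary', 'telescope']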

However, it can also be argued that these trends merely reinforce or reflect a traditional and long-standing human tendency to engage with people, ideas and content which strike a chord with our existing values and interests – what psychologists describe as a positive test strategy (Confirmation, Disconfirmation and Information in Hypothesis Testing, page 211, American Psychological Association 1987). Furthermore, research by the University of Michigan (commissioned by Facebook) revealed that while Facebook users are more likely to look at links or pictures shared by close friends, in reality they tend to get far more information from distant contacts, many of whom share items users would not otherwise be aware of. This is consistent with earlier influential sociological research by Mark Granovetter (American Journal of Sociology: The Strength of Weak Ties, 1973), which found that individuals tend to form clusters of a few close friends alongside larger numbers of disparate social acquaintances.

The International Journal of Computer Science Issues article (see page 430) suggests that the tremendous growth in the number, size and complexity of information resources available online makes it increasingly difficult for users to access relevant information in a context where an individual’s capacity to absorb, read and digest information is essentially fixed. In this context, personalisation can be seen as a necessary evil which prevents the information universe becoming progressively unintelligible to human enquiry. It is worth noting that Facebook’s 2010 decision to suppress certain types of newsfeed updates was prompted by complaints from Facebook users that they were being inundated with updates from friends they barely knew (Wall Street Journal article, 2010).

Information Rich or Information Overload? The Blessing and Curse of Abundant Choice

TREND: The size of the digital universe will continue to expand exponentially with information and content shaped by a kaleidoscope of social, political, corporate (and on occasion extremist) agendas, alongside a trend towards smaller more private online social networks

According to the International Data Corporation’s 2011 Digital Universe Study, in 2010 the quantity of digital information created and replicated globally exceeded 1 zettabyte for the first time. With the amount of information within the digital universe predicted to double every two years, how will the neurological limits of the human brain for processing information constrain or define future social networking and the consumption of information and content?
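
As a rough indication of what “doubling every two years” implies, the back-of-envelope calculation below projects the size of the digital universe from the roughly 1 zettabyte reported for 2010; it is illustrative arithmetic under that single assumption, not a restatement of IDC’s own forecasts.

    # Back-of-envelope projection of the "doubling every two years" claim,
    # starting from roughly 1 zettabyte in 2010 (illustrative arithmetic only).
    def projected_zettabytes(year, base_year=2010, base_zb=1.0, doubling_years=2):
        return base_zb * 2 ** ((year - base_year) / doubling_years)

    for year in (2010, 2015, 2020):
        print(year, round(projected_zettabytes(year), 1), "ZB")
    # 2010 -> 1.0 ZB, 2015 -> ~5.7 ZB, 2020 -> 32.0 ZB under the doubling assumption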

The Role of Information Literacy

In a context where 250 million websites, 150 million blogs, 25 million tweets and 4 billion Flickr images compete for our attention – with an additional 24 hours of video uploaded to YouTube every minute – the amount of new digital content created in 2011 amounts to several million times that contained in all books ever written (2011 report by think tank DEMOS, page 12). Given this ongoing explosion of choice in the range of digital content and information we can potentially consume, information literacy skills will become increasingly important as a tool for authenticating information and differentiating between content presented as fact whilst often in reality being shaped by a diverse range of social, political, corporate and occasionally extremist agendas. As Miller & Bartlett (2012, page 37) suggest:

The key challenge is that the specific nature of the Internet makes telling the difference between viable and unviable truth claims particularly difficult. Many of the processes and strategies we use to do this offline either no longer apply, or have become more difficult and less reliable.

There is evidence to suggest that many individuals may not be sufficiently critical of information they find online. 2010 research from the Oxford Internet Institute on patterns of online trust in the UK reported that “trust in people providing Internet services” exceeded trust in other major institutions including newspapers, corporations and government. Furthermore, according to the UK Journal of Information Literacy (see page 37), decisions about information quality are often based on site design rather than more rigorous checks: 15% of 12-15 year-olds do not consider the veracity of search results and simply visit the sites they “like the look of”. The pitfalls of this approach are illustrated by the site http://www.martinlutherking.org/, which claims to offer “a valuable resource for parents and teachers alike” but is in reality hosted by the white supremacist group Stormfront.

Indeed there are indications that such resources are proliferating. A 2009 report from the Simon Wiesenthal Centre identified over 8,000 hate and terrorist websites and claimed that this number was growing at a rate of 30% per year. The report also suggested that extremists are dynamically leveraging new technologies such as online video via YouTube and Facebook, as well as blogs and online virtual gaming. In 2010 a study from Florida University examined online games containing racism and violence from 724 white supremacist websites and concluded that these games were designed to indoctrinate players with racist ideologies and to rehearse aggressive behaviour towards minorities, which may influence subsequent real-world interactions.

These trends present a significant challenge to the educational establishment. As Miller & Bartlett suggest in their 2012 article “Digital Fluency: towards young people’s critical use of the Internet”:

The Internet has become central to learning, but the skills to use it appropriately and well have not become central to learning how to learn. The era of mass, unmediated information needs to be attended by a new educational paradigm based on a renewal of critical, sceptical, savvy thought fit for the online age. Doubtless, today's teachers and librarians deserve sympathy because the speed of change has been very rapid and education curricula have as little free time as education and literacy professionals do. However, education must keep pace with society's turbulence, not vice versa.

The Trend Towards Intimacy in Social Networking

In his 2010 book “How Many Friends Does One Person Need?”, Robin Dunbar, Director of Cognitive and Evolutionary Anthropology at Oxford University, concludes that the cognitive power of the brain limits the size of the social networks that any one species can develop. Drawing upon his study of the brain sizes and social networks of primates, Dr Dunbar suggests that the size of the human brain permits the formation of stable networks of around 150 people (see page 4).

His argument is that, in a context where meaningful relationships require a certain investment of time as well as emotional and psychological capital, there are sociological and anthropological limits to the number of people whom we can know personally, trust and feel emotional affinity for. In practice, the sizes of a broad range of social groupings have been shown to conform to the “Dunbar Number”, from Neolithic villages and military units from Roman times to the present (Harvard Magazine 2010) to the average number of friends people have on Facebook (New York Times 2010) and even the average number of Christmas cards households send out every year (Bloomberg Businessweek Technology 2013).

This research has fuelled a trend towards smaller online social networks. Path, a mobile social networking application established in 2010, explicitly limits the number of friends users can add to 150 (based on the assumption that people generally have 5 best friends, 15 good friends and 50 close friends and family). As of September 2012 the Path network had expanded to over 3 million users (CNET 2012). In November 2010, South Korean firm VCNC launched a mobile social networking application, “Between”, which offers a private online space for couples to share photographs, memories and chat in real time. In January 2013 VCNC secured $230 million to grow its business internationally after reaching 2.35 million downloads (The Next Web 2013). Other mobile social networking applications such as Storytree and Familyleaf have been established to provide private online networks for family members. The question remains: will this trend contribute to denser, more meaningful online social exchanges, or divide the web into introverted and fragmented social enclaves?

Hyper-connectivity - Challenges and Opportunities

Advances in Internet connectivity and its penetration across society have made access to information easier and cheaper, whilst facilitating communication, organisation and collective action. However, the same technology that assists charity fundraising, civic political participation and corporate accountability also has the capacity to empower cyber criminals and terrorist/extremist networks. Without the evolution of interoperable and user-friendly technical regimes to support online trust, secure authentication and identification at national and international levels, the hazards posed by the latter set of behaviours risk offsetting the benefits of the former.

A 2007 report on the Digital Ecosystem, looking at possible evolving scenarios to 2015, noted that the convergence of the media, telecoms and information technology industries has empowered individuals as “contributors to online communities and as creators and distributors of digital content and services” (see page 2). Indeed, in many ways 2012 represented a new high water mark for internet activism (Economist, The New Politics of the Internet, January 2013), in a context where private citizens stood shoulder to shoulder with technology giants such as Google to successfully derail the Stop Online Piracy Act (SOPA) by generating over 10 million petition signatures and 3 million emails directed at members of Congress (Forbes – Who Really Stopped SOPA and Why?). Later that year, a similar surge in online public activism (including web-coordinated physical protests involving thousands of Europeans – BBC News, January 2012) led to the defeat of the Anti-Counterfeiting Trade Agreement (ACTA) in the European Parliament in June 2012 (Guardian, June 2012). These developments demonstrate not only the capacity of the Internet to assist collective mobilisation and empowerment, but also the rising importance of the Internet in people’s lives, given that both the US and EU measures were seen as a threat to publicly accepted norms of online consumption and exploration. According to a survey of consumers in 13 countries by the Boston Consulting Group, 75% of respondents would give up alcohol, 27% sex and 22% showers for a year if refusing to do so meant losing access to the Internet (Economist, January 2013).

Of course, the capacity of technology to empower can be channelled in both positive and negative ways. The World Economic Forum’s 2012 Global Information Technology Report (see page 118) notes that the technologies (mobile texting, Facebook, Twitter and BlackBerry Messenger) which facilitated the assembly and coordination of opposition groups in Tahrir Square in Cairo during the 2011 uprising against Egyptian President Mubarak are essentially identical to those used to organise destructive flash mobs during the riots which struck multiple UK cities in the summer of 2011.

The November 2012 edition of the International Journal on Computer Science and Engineering (see page 1816) reports the rapid proliferation and increased sophistication of websites and online forums used by terrorist and extremist groups for fundraising, recruitment, coordination and the distribution of propaganda materials. Professor Batil argues that the continuing evolution of the Internet to support the delivery of media-rich content, user generated content and community-based social interaction presents an “ideal environment” for the promotion of extremist ideologies and a virtual platform for the anonymous organisation of criminal activities such as money laundering and drugs trafficking (Ibid).

The 2013 Global Agenda Report, which draws upon specialist input from 1,500 global experts (from academia, business, civil society, government and international organisations), 900 of whom were brought together for a 2012 Summit on the Global Agenda in Dubai, contends that “a theme common to all these discussions is the increased role of technology in 2013 and its associated risks” (see page 4). In an environment where the risks of far-reaching infrastructural “cyber shocks” must be balanced against the potential benefits of networked smart cities, the experts alternately championed and doubted the benefits of an increasingly hyper-connected world for individuals and society (see page 6).

One key problem identified was the lack of legal, technical, economic or regulatory structures to determine how different parties share and control the flow of information and data (Marc Davis – Microsoft, see page 17), alongside a “lack of trust driving demand for disproportionate control” (Robert Madelin – European Commission – page 17). It was suggested that this should not be seen as a technical or technological issue, but instead as a fundamental question about the future structure of digital society, how we define and identify individuals within that society, and who has which rights to see and use information for certain purposes (Marc Davis – see page 16).

There was also concern that, as we move towards defining the approaches and technical standards required for international interoperability and trust, a large-scale cyber-attack or data breach could lead to a crisis of public trust in the ability of governments and organisations to manage that data (Robert Madelin – see page 17). It was also contended that “today’s leaders have been trained in a world which no longer exists” and that the evolving threats posed by cyber criminals and cyber warfare are not adequately owned at the top level of large corporations and governments, leading to an underweight collective response to those emerging threats (Ibid).

Demographic Trends

TREND: Populations in the developed world will continue to age, while the developing world grows younger and more urbanised, leading to differing usage patterns and competing demands on the information environment. Hyperconnectivity expands the influence and role of migrants and diasporas.

Migration to Urban Areas in the Developing World

The World Business Council for Sustainable Development argues in its Vision 2050 report (see page 3) that substantial changes will be necessary in all countries to accommodate the projected increase of 2 billion in the global population by 2050 – particularly as 98% of this growth is predicted to take place in developing and emerging economies. The 2012 World Economic Forum Global Information Technology Report (see page 114) notes that, while increases in Internet connectivity and the availability of online content and services will support future economic growth in remote or rural areas, demographic studies indicate that large-scale migration to cities and metropolitan areas continues to be a defining global trend.

The 2011 United Nations World Urbanization Prospects study (see page 4) forecasts that the world’s urban population will reach 6.3 billion by 2050 (up from 3.6 billion in 2011). Most of the projected growth in the world’s population will be concentrated in the cities of the developing world. As a consequence, the 21st century is likely to see an expanding number of megacities in Asia and Africa with over 10 million inhabitants (see page 5).

This trend will see millions of people aggregating in densely populated and rapidly expanding cities in the developing world, which will generate significant logistical and infrastructural challenges associated with the provision of water, power and shelter (Evaluation of Spatial Information Technology Applications for Mega City Management, University of Mainz, 2009, page 1). In the context of these challenges, hyper-connected, technology-assisted solutions, both in the management of urban infrastructure and in the delivery of government services and healthcare, could play a pivotal role in enhancing living standards for residents of these sprawling conurbations (Global Information Technology Report 2012, page 114). The US National Intelligence Council’s 2012 report (see page ix) argues that information technology-based solutions which maximise citizens’ economic productivity and quality of life while minimising resource consumption and environmental degradation will be critical to ensuring the viability of megacities.

An Ageing Population in the Developed World

According to 2012 figures released by the UN on Population Ageing and Development, by 2050 the number of people worldwide aged 60 years or over will increase to 2 billion, outnumbering children (0-14 years) for the first time in human history. Based on declining birth rates and rising life expectancies, the OECD predicts that by 2050 4% of the world population (and 10% of the population of OECD nations) will be over 80 years old (OECD 2011, page 62). By 2030 the European Union is expected to be home to 30% of the global population over 65 (European Commission, The World in 2025, page 9).

Given that the percentage of the population active in the labour market is one of the key drivers of future economic growth, an ageing population will pose challenges for the growth prospects and world market competitiveness of many advanced economies (speech by a member of the ECB Executive Board, 2010). It is also suggested that demographic decline and a rising elderly population will compel governments and employers to maximise the contributions of new technologies to growth whilst placing a greater emphasis on retraining and lifelong learning and the recruitment of groups with lower workforce participation (RAND 2004, page 1).

A 2011 paper from the Harvard Program on the Global Demography of Ageing identifies a further trend – the “compression of morbidity” (see page 2). This describes the process by which technological and medical advances, combined with healthier lifestyles, have not only increased longevity but also compressed the so-called “morbid years” (the period during which the elderly lose functional independence through mental and physical deterioration) into a smaller part of people’s lifecycles. This means that significant numbers of employees will be able to work productively into later life – particularly when this work depends on problem solving, communication and collaboration as opposed to manual labour.

Decentralised and flexible working patterns such as telecommuting (2011 report from the Japanese Ministry for Communications, page 3), alongside advances in networked telehealth and telecare systems (see Digital Agenda Action 78), and the emergence of progressively more intuitive user interfaces (such as those offered through touch screen and tablet computing – The Computer Journal 2009, page 847) will all enhance the capacity of the elderly to remain economically active for longer. In addition, the rising proportion of those over 60 in the developed world will lead to an increasing amount of digital content and services being directed at this target market (Harvard 2011, page 9).

The Role of Diasporas Increases in a Hyper-connected World

According to the European Commission, by 2025 there will be nearly 250 million migrants, with 65% of these communities living in the developed world. There is evidence to suggest that these diaspora communities are harnessing advances in information and communication technologies to develop online communities and networks which are becoming of increasing strategic importance in the development arena (USAID Report 2008, page 2). Digital diaspora networks also have the capacity to offset the negative effects of the flight of human capital from their countries of origin by facilitating knowledge and technology transfer between the diaspora and their homelands (Diaspora Knowledge Flows in the Global Economy, 2010, page 1). A 2010 study by the University of Bergen identified that digital diasporas offer a forum for ongoing online historical debates or “web wars” between Poland, Russia and Ukraine (see page 2). A 2012 paper from the University of New Jersey on the Korean diaspora community in the US demonstrated that virtual environments helped users reconnect with their home country and led to a less essentialised perception of ethnic identity, based on transnational ties and hybrid cultural practices (see page iii).

Open Education Resources and the Rising Importance of Non-Formal and Informal Learning

TREND: The rising impact of online education resources (including open access to scholarly research and massive open online courses), combined with the emergence of new media and information literacy skills, offers flexible non-formal and informal skill accumulation pathways

Learning one set of skills at school, a vocational/technical college or at university is no longer sufficient preparation to equip people with the knowledge and expertise they will require for the duration of their working lives (2007 OECD Policy Briefing, page 1). The combined pressures of an increasingly globalised international economy and a rapidly and continually changing technological environment mean that individuals need to upgrade their skills and knowledge throughout their adult lives (page 2).

The rising importance of non-formal and informal learning fundamentally stems from the recognition that in reality “…people are constantly learning everywhere and at all times” (OECD – Recognition of Non-formal and Informal Learning). Indeed, few people go through a single day of their lives which does not involve some step towards the acquisition of additional skills, experience, knowledge or competences. Furthermore, for those outside the formal education system (including disadvantaged groups such as early school leavers and the unemployed, as well as adults not in formal education or training and the elderly), this form of learning is arguably far more important, relevant and significant than the kind of learning that occurs in formal settings (Ibid).

Indeed the reason why non-formal and informal learning has become increasingly visible on policy-making agendas is the acknowledgement that these flexible routes to learning represent a potentially rich source of human capital, harnessing resources which might otherwise lie dormant or underutilised. The growing popularity of proposals to increase government recognition of non-formal and informal learning pathways is based upon the realisation that such recognition makes this human capital more visible and more valuable to society at large (OECD, Pointers for Policy Development, 2012, page 1).

In light of the current challenges facing the EU in terms of rising levels of youth unemployment, skill shortages and an ageing population, it is perhaps unsurprising that policy-makers are progressively seeing non-formal and informal learning approaches as a means of unlocking significant reserves of under-used human capital. In December 2012 the Council of the European Union issued a Recommendation (see page 398/4) which recognised the importance of non-formal and informal learning pathways in engaging with disadvantaged target groups including the young, the unemployed and the low skilled – and called upon all EU Member States to make arrangements for the validation of non-formal and informal learning by 2018.

It is worth noting that this perspective is not unique to Europe. A 2010 study by Patrick Werquin, which surveyed non-formal and informal learning practices across 22 countries, contended that demographic decline in particular has forced many countries around the world to reconsider their strategies for creating and identifying human capital (see page 5).

In conjunction with existing trends towards lifelong learning and the promotion of non-formal and informal learning opportunities, the increasing availability of online Open Education resources will continue to have a substantial impact on the information environment. A 2011 UNESCO report on Open Education Resources claimed that there has been an explosion in the availability of online educational material (see page 12), fuelled by the collective sharing of knowledge as a consequence of growing numbers of connected people and the proliferation of web 2.0 technologies (see page 30). In particular, Appendix 5 (see page 65) provides a useful inventory of the Open Education resource repositories available in the sphere of higher education.

A 2012 report by JISC, “Learning in a Digital Age”, noted that e-portfolios, blogs, wikis, podcasting, social networking, web conferencing and online assessment tools are increasingly being employed alongside virtual learning environments to deliver “a richer, personalised curriculum to diverse learners” (see page 9). In recognition of these prevailing educational trends, in August 2012 the European Commission launched a proposal (see page 1) for a European initiative on opening up education, which recognised the exponential growth in online education resources and their future role in diminishing barriers to education and promoting more flexible and creative ways of learning.

In addition to the plethora of free educational courses available online, a further modulation of this trend can be observed in the arrival of Massive Open Online Courses (MOOCs). In January 2012 Sebastian Thrun, a computer science professor at Stanford University, launched Udacity. By October this online education platform had raised $15 million from investors and boasted 475,000 users (Economist, December 2012). In April 2012 two of Mr Thrun’s former colleagues launched Coursera with $16 million of venture capital. As of December 2012 Coursera had signed up over 2 million students in partnership with 33 universities worldwide (Ibid). In response to these developments, Harvard and MIT announced their intention to devote $60 million to developing their own equivalent online course platform, edX (Harvard Magazine, July 2012).

Finally, a further trend which has steadily built up considerable momentum is the practice of granting Open Access to the outputs of publicly funded research, generally in the form of peer-reviewed journal articles and papers. Opening up this knowledge to free online access allows the research to reach wider audiences and gain greater public visibility, whilst the agencies funding the research see an enhanced return on their investment (JISC).

This approach is increasingly being embraced by governments. In June 2012 the Working Group on Expanding Access to Published Research Findings, chaired by Dame Janet Finch, published its report, which concluded that the future lay with open access publishing and that the UK should embrace and recognise this change (Guardian, June 2012). In July the UK Government accepted the Finch Report recommendations, and Research Councils UK announced that all peer-reviewed research articles and conference proceedings it funds must be made open access from 1 April 2013 (RCUK – Press Release). In November 2012 Universities and Science Minister David Willetts announced £10 million of additional funding to aid research institutions’ transition to, and compliance with, the new open access policy (BioMed Central).

Also in July 2012, the European Commission issued a proposal to support open access to publications and data arising from research funded by Horizon 2020 (the science/research component of the EU 2020 Growth Strategy). In the United States, the Department of Health and Human Services mandates free public access to the published results of all research funded by the National Institutes of Health (see NIH Public Access Policy) and requires peer-reviewed journal manuscripts to be uploaded to the digital archive PubMed Central.

A further indication of evolving attitudes in this area was the launch in 2012 of the Cost of Knowledge petition, which campaigned for a boycott of the journals published by Elsevier. The petition was initially signed by over 3,000 academics, including several award-winning mathematicians (Guardian, February 2012), and has since amassed more than 13,000 signatures. Shortly after the boycott began, Elsevier announced (Slate, February 2012) that it was withdrawing support from draft US legislation (the Research Works Act) designed to repeal current open access policies and block similar policies being adopted by other US agencies (Harvard Cyber Law); the bill subsequently failed to be enacted during the 112th Congress (see GovTrack).

It would seem that such attempts to lock publicly funded research away behind commercial electronic paywalls have led to something of a backlash against the academic publishing industry. In January 2012, writing in the Guardian in response to the industry-supported Research Works Act, Mike Taylor said that “academic publishers have become the enemies of science” and that this was the moment where they gave up all pretence of being on the side of scientists. The suicide in January 2013 of Internet activist Aaron Swartz in his New York apartment, after facing a possible prison sentence of 35 years and a $1 million fine for allegedly extracting and sharing 4.8 million documents from JSTOR (a fee-based repository of scholarly journals), is likely to remain in the public consciousness for some time (Economist, January 2013). Later in January, the hacker-activist group Anonymous hijacked the website of the US Sentencing Commission and launched a further attack on Massachusetts Institute of Technology websites in protest at the treatment of Mr Swartz.

The trend towards open access publishing will also have significant implications for the developing world. In his 2012 Washington College of Law research paper, “Open Access Scientific Publishing and the Developing World” (see pages 43-44), Jorge Contreras argues that, in a context where peer-reviewed scientific journals currently yield between 1.2 and 1.6 million articles per year, sharing is critical to the advance of science, and that improvements to health, infrastructure and industry also flow from the capacity of scientists to share and build upon each other’s discoveries.