By Hannah Yeates LL.B., Queen’s University Belfast
In recent months, the term ‘fake news’ has become a familiar one within the media.
The widespread satirical use of the term has led to a loss of its particular
meaning as a new challenge within our culture. The problem, which should be of
growing concern to all internet users, is in need of significant regulation. It
is critical that a new level of control is gained over 21st-century
news consumption if this dangerous issue is to be tackled. This piece will set the scene of a new media and
digital age in which the vast spread of ‘fake news’ has been facilitated.
Having done so, it will explore the various worrying, yet often overlooked,
implications it has had upon democratic societies and why our current laws and
regulatory authorities have failed to prevent them. Furthering the current scholarly
discussion of the area, this project will turn to responses to ‘fake news’ and
will involve careful consideration of several potential means of regaining
control: where they are likely to be effective and where, alternatively, they
will pose new problems and ultimately result in a waste of resources.
Research Question & Methodology
In this digital age of new media, the consumption of news has been entirely
transformed and, as a consequence, the spread of ‘fake news’
has been facilitated. This is a huge legal problem and one that is challenging
our democracy of free speech and freedom of information in a new way. I will
research this issue of fake news: how it has emerged and become more
evident within the 21st century, what it looks like today, and its
worrying implications for our society. I will question how the problem is to be
tackled, if it can be challenged at all; analysing the potential gaps within
the current law and then proposing a solution, with careful consideration of
whether that should be a legal one or otherwise. This piece will combine media
studies and journalism studies with a legal outlook. In
Chapter Two I will provide a background to the issue of fake news by discussing
the consumption of news within the 21st century. Before focusing in
on the problem at hand, it is necessary to set the scene with facts and
statistics regarding some of the elements that have contributed to it. I will
consider the growth of the phenomenon of social media and the filtering of
content that this encourages, alongside the tabloidization and sensationalism
of news sharing online. Chapter Three will then focus specifically on the
problem of fake news; its causes and the implications and threats that it
brings. I will discuss the frightening realities of inaccurate ‘viral’
circulation, generally misinformed citizens and, overall, a new challenge to
our democracy of free speech and freedom of information.
In Chapter Four I will look at what the law currently requires regarding the press
and broadcasting, considering what this actually means for online news outlets
and social media. I will look at the remedies available at present and, throughout,
discuss whether more needs to be done if fake news is to be tackled. Chapter
Five will include proposals or suggestions for measures that will combat fake
news. I will consider whether a legislative solution will be effective or
realistic, and compare other regulatory solutions and control mechanisms. In
Chapter Six I will conclude by pulling my research together and looking forward
to the future of fake news.
The Social Media Phenomenon
In the 21st century, keeping up to date with the world around you is
entirely possible without ever purchasing a newspaper or watching the news
on television. The internet has transformed the way in which news is
distributed, and the use of social media has played a significant role in that.
Kaplan and Haenlein described social media as ‘a group of Internet-based
applications … that allow the creation and exchange of user-generated content’.
It was described as having become a ‘global phenomenon’ by Pew Research Center
in 2010, who stated that almost half of adults in countries such as the UK,
USA, Poland and South Korea make use of social networking websites.
These social networks have evolved beyond their core function as a means of communication
between friends, and into another online news outlet. Purcell found that 75% of
online news consumers in the US receive news via social network posts.
If an individual decides to use a social media platform then, regardless of their
intention for use when signing up for the account, they will be constantly
reading updates of news on that space. As the nature of social networking is
the sharing of information, it is inevitable that the particular news stories
of that day will become part of the sharing. Kwak and Lee studied the
characteristics of Twitter as a new information-sharing platform, which they
described as bringing about the emergence of collective intelligence.
They explain how, as a result of its ‘Retweet’ function, Twitter has become ‘a
media for breaking news in a manner close to omnipresent CCTV’ and proof of
this is the fact that news has broken on Twitter before CNN has covered it live.
The growth in the use of social media, and in particular its use for reading news
online, appears (from the outset) to be a good thing. Hermida and Fletcher found
that social media are ‘becoming central to the way people experience news’. They
discuss the way in which technologies have given users the ability to personalise
their news stream and that users value this means of filtering content. They
describe it as an ‘evolution of the public sphere’ which is ‘reshaping’ the
entire dynamic of news publication and distribution, subsequently affecting the
role of journalists and ‘established flows of information’.
The writers do not, however, present these effects as being overly detrimental.
Singer certainly discusses this transformation in a positive light, referring
to how media organisations have embraced it as offering them new ways to
distribute and target their news content to a broad audience.
There are many positive aspects of social media news sharing.
Not all age groups consume news the same way. Of course, there remain a significant
number of people who get their news at 6 o’clock on television, but others may
never sit to watch a news broadcast or read a newspaper. Statistics from the
Reuters Institute Digital News Report 2017 show the generational split. They
state that 64% of 18-24 year olds use the internet as their main source of news
(including social media platforms) compared to 29% who chose either television
or printed newspaper. Over half of those aged over 55 said television was their
primary news outlet.
Those who keep up to date with news through their social media feeds may
receive the same mainstream stories, but those will be dispersed amongst many
updates from more unusual sources. This is when the ability to distinguish fact
from fiction is essential.
Tabloidization of Online News
We rely upon access to information as a fundamental guarantor of democracy;
assuming that more information means more accountability, fairness and transparency.
Information is referred to as a ‘public good’
because, generally, being well informed keeps a person safe. Social media and
online news allows for a constant flow of information; it enables citizens to
access a broad spectrum of types of informative material from across the globe,
in an instant. An issue arises, however, when people become so constantly
well informed that they believe everything they are exposed to because it has been
circulated on their feed, while knowing nothing about the reliability of its
original source. Pentina and Tarafdar expressed a negative view of this new
media news consumption. They stated that there is an ‘avalanche of information’
available from a ‘soaring number of (frequently unverified) sources’ and that
the consequences include ‘information overload, suboptimal knowledge formation,
and biased worldview’.
With the use of online news outlets comes the issue of misleading headlines.
Blom and Hansen analyse headlines from news websites and discuss tabloidization
of online news, summarising that ‘if the readers click it does the trick, seems
to be the logic’.
Where media organisations embrace the internet as their opportunity to
distribute material to an immeasurable audience, the presence of ambiguous and
intentionally deceiving headlines is inevitable; they want readers to choose to
read (and then share) their articles over the many others on their feed. News
content posted online is unlikely to be read, or shared further, if it lacks an
element of sensationalism (misdirecting people as to the truth of the story);
there is therefore a stronger tendency towards these types of headlines in
tabloid media on news websites than in the printed press. Why
do people, especially those of younger generations, wish to share news stories
over social media? Lee and Ma address this question.
They discuss the ‘Uses and Gratifications’ theory (an old media studies theory,
originally developed in relation to radio and television, but applied to online
news by Diddi and LaRose in 2006)
and its attempt at explaining the psychological motivations. They also discussed,
with reference to Ames and Naaman,
status seeking and a need to gain attention as further driving factors.
Similarly, Rogers talked of how the sharing of credible and relevant content
makes the social media user appear credible and as an ‘opinion leader’ to those
who access it and find it interesting.
Generally, people want to be seen as influencers. Alongside the desire to be
well informed, there is the desire to be the one who is informing others. The
culture of the 21st century is one in which people depend upon the
approval and recognition which they receive online, in the form of ‘likes’ and
‘shares’. They meet this psychological need by posting updates of news that are
interesting or entertaining, but not necessarily important or even factual.
Echo Chambers & Filter Bubbles
Hermida and Fletcher found that Canadians were twice as likely to prefer reading the
news that was shared or recommended by their friends and family on social media,
than news from journalists or official outlets.
They described news as now being a ‘shared social experience’, and said this
has become more enjoyable; however, they questioned the potential outcome of
exposure only to news which is popular rather than important.
I would further question what aspects of those stories are making them popular;
it is unlikely to be their reliable factual basis, but rather their sensational appeal.
It is entirely possible to alter your social media feed in a way that allows you
to view only the news updates that are relevant to your interests or opinions.
Nov and others referred to this news-filtering potential as an attractive
aspect, one which is transforming users from ‘passive consumers of content’ into
active participants. There is a danger, however, where filtering is not
recognised by the users as being in place. A study by Eslami and others (from
2015) found that 63% of participants
were not aware that algorithms were filtering their Facebook News Feed, making
it personalised to include only information that is relevant to them.
Many internet users are not fully aware of the types of material they are being
exposed to or having hidden from them.
The phenomena created by this filtering are known as ‘echo chambers’ and
‘filter bubbles’. A 2007 book by Sunstein focuses on the idea of
echo chambers with regard to blogging (at that stage, social media was only
developing, but the concepts still apply).
Sunstein describes the increasing power people have to decide when to ‘screen
in and screen out’ and he addresses the impacts upon a democracy of free
speech. He discusses the negative effects upon societies where the ideal of
‘The Daily Me’, whereby individuals read a form of newspaper entirely limited to
their own interests, becomes reality. This concept originates from Nicholas
Negroponte. Sunstein says, ‘…because it is so easy to learn about the choices of ‘people like
you’, countless people make the same choices that are made by others like them’
and explains the ability to ‘design something very much like a communications
universe of their own choosing’. Interestingly, he speaks of this movement
towards an ‘apparently utopian picture’ as being very dangerous; it results in
bad citizenship because, ‘…in a democracy deserving of the name, lives should
be structured so that people often come across views and topics that they have
not specifically selected’.
There are, of course, positive aspects of social media filtering. Sunstein does not
fail to mention the ‘quite wonderful’ benefit of finding recommendations on
Netflix or finding a new book through Amazon’s personalised suggestions. This
must be balanced, however, against the detriment of strengthening political
views (including extremism) through exclusively hearing and learning from those
who already share them, or the ‘distorted understanding of some issue, person or
practice’ gained from a suggested YouTube clip taken out of context.
Pre-existing views are confirmed and exposure to challenging beliefs is reduced.
In ‘The Filter Bubble’, Eli Pariser discusses the subtle, yet fundamental,
change of the digital world towards a tool for harnessing as much information
about the user as possible and providing a ‘custom-tailored’ space.
Pariser is the chief executive of ‘Upworthy’,
a website promoting meaningful viral content, and also the former board president of
MoveOn.org, a site for political campaigning. Despite this background, he discusses the
internet as a harmful reinforcing tool rather than a useful campaigning one.
Although he does not mention (or perhaps even anticipate) issues of outright
fakery in news, he does emphasise the danger of the confirmation bias that
occurs in social media ‘bubbles’. These filtered spaces have a ‘powerful
allure’, yet we never chose to enter them.
In line with the mission of ‘Upworthy’, Pariser questions whether an article
about child poverty would ever be considered personally relevant to many of us.
Although it may not be, it is still important to know about. This type of story
may, however, be hidden from a personalised feed and replaced by a ‘spicy
headline’ of dubious factual basis.
This leaves us with a misshapen sense of the world around us; one with no
perspective on what is important. Democracy is built upon shared knowledge and
shared experience; filter bubbles are therefore threatening it.
The ‘Fake News’ Problem: Causes & Implications
Instantaneous Coverage & Going Viral
The use of social media fuels an overload of information; users are immediately
exposed to an ever-increasing amount of news material when they log on and view
their ‘news feeds’. Unfortunately, this material will inevitably include
‘unverified, anonymous and overwhelmingly subjective’ content.
As said by Pavlik, ‘we live in an age of ubiquitous information’,
but much of this information is unreliable and some is entirely false. The
concept of fake news has become a ‘hot topic’ recently, but it is more than
that; it is a global problem. So, how has this issue come about?
Social media creates a space for a constant flow of content generated by users. This
stream of information will, due to the accessibility of social networking
applications, include immediate coverage of events and developments. Amongst
those that are personal to the user, there will be those updates that involve
local, national and even worldwide news. These posts have the ability to be
distributed across societies and discussed by users from across the globe
within minutes; diffused within a worldwide virtual community.
Content is exchanged on a ‘many-to-many’ model, and the publishing dynamics of a
system based upon the concept of a broadcast audience are altered significantly.
By facilitating this ‘viral’ sharing, social media platforms facilitate the circulation
of inaccuracies or altogether-fake stories on an extensive scale; far beyond
the retrieval of the original (often unrecognised) publisher. Mechanisms such
as Twitter’s ‘retweet’ button empower users to spread content of their choice
beyond the reach of their own ‘follower’ base. It is this feature which has made
Twitter a significant new medium for dissemination of information.
More (mis)Informed Citizens
The UK Parliament began an inquiry into fake news in April 2017. Much of the
written evidence from this inquiry (which has been published online)
discussed the definition and scope of fake news. Dr Thorrington stated that the
definition should be broad; including any publication which is ‘intended to be
treated as an objective report but contains flaws that lessen its objectivity’.
Darren Parmenter’s evidence described a ‘meme’ image viewed on Facebook that
quoted the then Home Secretary (alongside her image) to have said, “Sex
offenders including paedophiles should be allowed to adopt”. An official
article by The Daily Telegraph clarified that these were not her words, but
something she had been told. A shift in punctuation had completely
changed the meaning of the image, which had subsequently been shared
thousands of times, accompanied by ‘incredibly vile and personal comments’. In his
opinion, this was a type of fake news; it misinforms the viewer.
The BBC defines fake news as information that has been circulated
intentionally by hoax news sites to misinform readers, usually for their own
political or commercial gain. They note the distinction between fake news
and false news: ‘fake news’, to them, is also a term used at times by politicians to
dispute comments that they dislike or disagree with.
The National Union of Journalists describe how the use of social media has
created a type of ‘eco-system’ in which news, or information pretending to be
news, is shared instantly within ‘belief-affirming bubbles’ and this is a
backdrop which some (notably politicians) can exploit.
A POSTnote by the UK Parliamentary Office of Science and Technology on ‘Online Information
and Fake News’ lists various types of fake news. This list includes fabricated
content (completely fake), manipulated content (distorting the truth), imposter
content (impersonation of a genuine source), misleading content (for example,
by presenting an opinion as fact), false context or connection (for example,
where a headline does not reflect the material within the article), and satire
or parody (whereby something humorous is presented as if it were true). The
scope of fake news is extremely wide, as is the variety of online outlets for
these types of content.
Threat to Democracy
The internet has been described as having the potential to be a positive force for
democracy, and was celebrated as such in its infancy; however, it has not
achieved this goal.
Although fake news is not a newly emerging issue, it has become more prominent
and familiar in recent years (and particularly recent months). Ultimately, it
undermines democratic societies in which populations should be able to
understand and be informed on the events and decisions in the news that are
affecting their lives. Our economy is ‘information-dependent’ and any risk to
the standards of media industries and quality journalism will threaten it. The
National Union of Journalists states that this is the primary reason why fake
news is a problem.
Democracies are committed to ensuring freedom of the press and media; many
journalists throughout history fought for it. It allows them to publish content
without restriction or interference by the state (generally), and is something
the UK prides itself on.
Press freedom does not have any direct relevance to the online publications of ordinary
citizens; their freedom of expression,
on the other hand, is certainly relevant. Although there are organisations who
have the power to limit what is posted across the internet (such as the UK
Internet Watch Foundation (IWF) or Ofcom communications regulator), it does
seem that very little is banned from being posted online, and free speech via
the internet does appear to be unlimited.
Regarding online engagement with stories in the critical final weeks of the US
presidential election, an analysis by the digital media company BuzzFeed
found that there was greater attention (shares, reactions and comments) paid to
false election stories by hoax sites than there was on the official articles by
major news outlets such as the New York Times.
This finding is shocking and is of major significance. The use of fake news for
political benefit is a dangerous matter; distortion and polarization are
inevitable and worrying outcomes for a democracy that is based upon the
importance of transparency and accountability.
According to Sunstein, in a democracy people should not live in ‘echo chambers or
information cocoons’, but should be exposed to ideas that will challenge them
and inform them even if they have not chosen the material in advance. This is not
the case where internet users are designing their own daily newspaper.
People’s preferences, knowledge and decisions are the result of their exposure
to ‘excessively limited options’, and this, in turn, limits their freedom and
capacity for independent judgement.
I would add that this limitation is closely related to their reduced ability
to distinguish truth from lies, reliability from deception, legitimacy from
questionability. It is understandable that people have been found more likely
to believe a false claim if it is repeated, even if it contradicts their prior
beliefs. We will believe what we read, if we read it enough times and it comes from
enough sources; this is ‘the psychology of rumor’ and although it is far from a
new concept, it is newly significant as a major reason why the spread of fake
news is palpable within the bubble of a social media feed.
We believe, as a society in general, that the truth is a good and important thing.
We are people who want to know more all of the time, and want to have access to
an abundance of information regarding every aspect of our lives, so that we can
live as well-informed people and make good decisions based on that. For this to
be the case, such information must be the truth. Barendt refers to the
discovery of truth as ‘an autonomous and fundamental good’, which has
significant value ‘concerning progress and the development of society’. He
states that this is why it is one of the most durable arguments in support of
the freedom of speech.
Accuracy is a social goal and we should understand that it is a better thing,
for the world, if we have access to accurate news. We receive this news on an
hourly basis from the media, and the legal system has, therefore, placed an
emphasis on enforcing accuracy across the various platforms to which we are
exposed. In some circumstances, however, this is only a gentle emphasis. In
reality, the social goal of truthfulness and accuracy must be balanced against
others; namely the freedom of speech.
Although the UK prides itself on its free press, journalism is still restricted through laws,
broadcasting codes, and regulatory standards and guidelines. War reporting, blasphemy,
contempt, information regarding minors or vulnerable individuals,
discriminatory content, incitement to crime and issues of national security are
all examples of the types of material that are restricted within journalism.
These types of restrictions are generally understood and well supported by the
general public, when they consider the moral and/or ethical reasoning for them.
Even those who actively support and promote free speech will generally accept
them as being reasonable.
The press and broadcasting media are, therefore, monitored, regulated and
controlled, to an extent, by different bodies and authorities. Truth and
accuracy do play an important role and are recognised in these areas; there are
incentives to be truthful in reporting and publishing, for example as a defence
to defamation allegations. Although the effectiveness of this regulation is
arguable, it does exist and the procedures are in place to attempt to enforce accurate
publication of the truth.
In contrast to these other media platforms, there is, at present, very little
legislation allowing for any control of what is published on the internet. The
internet remains, for the most part, quite unregulated in terms of accuracy and
truth. Many are of the opinion that this has allowed freedom of speech to get
‘out of hand’ because there are only implied moral responsibilities of users
not to write anything online that will cause harm to others.
Besides the deterrent of the risk of a defamation case (as this doctrine
applies to internet material just as it does to other platforms), there is no explicit
legal requirement that what an individual chooses to post online is accurate or
even truthful. Whilst entities described as ‘internet information gatekeepers’,
such as search engines and social networking sites, are considered to have
particular responsibilities through their position of authority, general
internet users are not.
This has facilitated the spread of fake news.
Following the News of the World phone-hacking scandal of 2011, the Leveson Inquiry was
set up to examine the press and debate how it should be regulated. This was a
public inquiry, which heard from a number of high-profile witnesses and focused
upon the culture, practices and ethics of the press. Within the final report,
Lord Justice Leveson recommended that a new, independent body should replace the
existing Press Complaints Commission and that it should be supported by new
legislation. This currently exists as the Independent Press Standards
Organisation (IPSO) and regulates both the print press and online press
content. The Government declined, however, to enact new laws to recognise it, so it
remains as a non-statutory authority.
IPSO attempts to uphold high professional standards in journalism by monitoring the
compliance of its members with an Editors’ Code of Practice and investigating
where there have been complaints and reports of serious breaches of the Code.
They also provide guidance and training for editors and journalists, and have a
‘Whistleblowing Hotline’. The Code is written by the Editors’ Code Committee
and sets out a framework of rules that all magazines and newspapers, who have
volunteered to be regulated by IPSO, have subsequently agreed to and committed
to following. The rules are set out under various headings such as Accuracy,
Privacy, Children, Reporting of Crime and Discrimination.
Under the heading of Accuracy, the Editors’ Code requires that care be taken
not to publish inaccurate, misleading or distorted information, and that where
this happens there must be prompt corrections and appropriate apologies.
Furthermore, there must be clear distinction between published comment,
conjecture and fact.
Such rules are obviously of relevance to the issue of fake news. This authority
aims to promote high standards of journalism by ensuring that its members are
committed to following detailed rules on accuracy, among other aspects. The body
is independent, however, and is not statutory; there is no legislation backing
its work, despite it having a huge number of members and therefore the potential
for significant influence.
The statutory body Ofcom (the Office of Communications) regulates all broadcasters
with wide coverage; all radio stations and television channels, including
satellite and cable. Its powers are set out in the Communications Act 2003 and
its ‘Broadcasting Code’ specifies its standards. Although the standards
themselves are not set out in statute, compliance with them is required by
statute and is a condition of retaining the licence that the broadcaster has been granted.
Section 5 of this document is particularly relevant to issues of truth, as it
requires that ‘news is reported with due accuracy and presented with due
impartiality’. It specifically defines what accuracy means, including mention of
issues like misrepresentation and deception. Similarly, section 7 sets out
requirements regarding fairness. It imposes a specific duty (breach of which
is punishable by fines and by withdrawal of licences): broadcasters must
avoid any unjust or unfair treatment of an individual or an organisation. Broadcasters
cannot only present one side of a story; they must present a full picture and
allow for responses. This is a key aspect of truth-telling; it requires a story
to be shared with full coverage, rather than with a deceiving or misleading
perspective or single opinion.
If fake news were to be broadcast on a television channel, it would be relatively
straightforward to bring a complaint to Ofcom. Ofcom would look back at this
television programme, make a decision on whether it had breached their code and
would act accordingly. This would involve publishing the decision to the public
who had been misinformed, ordering an apology from the broadcasters,
potentially issuing a fine and, in the most extreme of cases, acting to remove
their license. This is generally how the complaints procedure would work in
practice as set out under the framework of section 319 of the 2003 Act.
There is a historic distinction between the press and broadcasting. Generally, when
we think of broadcasting media we are thinking about screens, computers and
higher levels of technology than would be involved with the press and
newspapers. The internet, however, is actually more similar to the press in
that it does not require a license. Barendt, in rejecting the analogy with
broadcasting, notes how the internet ‘does not invade the home in the same way
as radio and television; users do not come across indecent (or other) items by
accident’. Alongside this difference, the specific rules for broadcasting, such
as impartiality, regulatory bodies, licensing, and controls on advertising, do
not apply to the internet. The internet generally allows open access and
operates without a system of centralised control, because it was never set up
to be subject to the same statutory requirements; it does not have the same
‘gatekeepers’ determining what is transmitted.
As noted, the main thing that the internet has in common with the press is the
lack of a need for a license.
One way in which the internet is regulated, to an extent, is through the work of a
self-regulatory body called the Internet Watch Foundation (IWF). The IWF is not
overly well-known and monitors limited types of content like images of child
sexual abuse or other obscene or violent content. This body is funded and
supported by the European Union as well as major ISPs, filtering companies and
search providers. One function is to provide a ‘hotline’ through which the
public can report accidental exposure to criminal content online.
Another crucial role they play is in providing block lists to ISPs, which they
can then use to ensure nobody reaches the inappropriate content. This function
really makes the difference to internet users. When the IWF receives a report
that obscene images are found at a particular URL, it adds that URL to the
block list, which is given to ISPs so the content can be filtered and made
inaccessible. While
the material is still online, and could potentially be reached if accessed from
another country, it is blocked and appears as a blank page to any UK internet
user. Takedown of images hosted abroad is dependent on the actions of local
authorities. The IWF developed as an alternative to legal action. The
government wanted ISPs to have a duty to block images of this nature, but the
industry disagreed, based on the huge expense and general feasibility. The
alternative would have been a legal obligation to do so, and so the IWF was set
up as a means of self-regulation.
Although online exposure to child sexual abuse was the primary focus of the
IWF’s work, the system has also been used for the blocking of copyright
infringement. Under intellectual property law there is a system whereby an
injunction can be obtained against those who are in a position to prevent
harm caused by copyright infringement. This injunction will order ISPs to block the
particular streaming site that is being used. Over the last decade, many
applications have been brought to the High Court from the creative industry
(including movie producers, record labels, publishers) and their primary
argument is that if this blocking can already be done for abuse images then
surely it could be used for this type of material, too.
At present, the IWF cannot be said to be doing anything about fake news. It is
interesting to consider whether or not it would be capable of any monitoring or
regulation of this and if not, why not? The technology and procedures that are
used could be the same as those which are used for child abuse images or
copyright infringement content. The main issue, I would suggest, is that while
most people can agree on whether an image of a child is pornographic and
therefore inappropriate, deciding whether a statement online is true or false
would not be so black and white. There could be too many grey areas and
controversy over what should be blocked, and over whether the IWF would have
the right to block specific pages on the basis of an inaccurate statement at all.
Truth in Defamation & Privacy
Defamation law is certainly relevant to the issue of fake news due to the importance of
the falsehood of the statement in question. Section 2 of the Defamation Act 2013
(England and Wales only) clarifies the statutory defence of truth. It is the
current law that a statement is not defamatory if it is proven to be
‘essentially’ or ‘substantially’ true.
The prospect of a defamation case is a significant deterrent to any
journalist, and the pre-publication procedures involved in their role
ensure that they will not often end up in this type of legal action. Where a
defamation case is brought against a newspaper, it is the newspaper that must
prove the truth, rather than the complainant. This is one of the ways in which
truth is enforced; it forces, or at least encourages, publishers to consider the
truthfulness of content. This is not the case, however, regarding defamation
online. A member of the public who publishes material on an online platform
does not face the same prospect and does not have the same reason to consider
what they are posting. They are not writing from a position of authority or
employment from which they can be removed, and do not have the responsibility
to represent a particular newspaper or magazine; they are only an individual.
Alongside the defence of truth, there is also a qualified privilege for publication of
defamatory material that is in the public interest. This is also in statute for
England and Wales.
This statutory defence abolished the common law 'Reynolds defence', which was
established in the House of Lords case of Reynolds
and provided that a journalist could argue that they had a duty to publish
material, even if it turned out to be false, if it was in the interest of the
public for them to do so. In Reynolds,
Lord Nicholls set out 10 factors for the court to take into account in deciding
whether a defendant should be able to rely upon this defence. These included
matters such as the seriousness of the allegation, the nature of the
information, the steps taken to verify the information and the urgency of the matter.
These factors became well-known standards of responsible journalism, and further
guidance on their use as a test was given in the later case of Jameel.
This was a way in which the legal system tried to incentivize ‘good’ and careful
journalism by using a broad, liberal approach and considering a wide range of factors.
The 2013 Act provides no list of standards that the publisher must meet, and so
the courts need no longer use these as a benchmark.
Cases of defamation linked to the internet are on the rise due to the failure of
online platforms to put in place the same types of pre-publication controls
that are used in traditional media broadcasting.
The key case of Tamiz v Google Inc
sets out, with clarity, a message of positive inaction on the part of the UK
courts towards victims of online defamation. It demonstrates, essentially, that
a ‘host’ of online content does not have to do all that much. In this case it
was held that an Internet Service Provider (ISP) is merely a ‘host’ of the relevant
content, meaning its role is passive rather than one of actively
publishing or even authorising the material. Similar reasoning was used in the
cases of Bunt v Tilley 
and Metropolitan International Schools 
regarding this ‘passive facilitation’ of defamatory content. Such ISPs are
consequently immune from defamation actions under Regulation 19 of the
Electronic Commerce (EC Directive) Regulations 2002.
The exception is where an ISP or ‘host’ has been notified of the existence
of defamatory material on its platform but has failed to take adequate steps
to remove it, as was the case in Godfrey. This exception is built into the Directive.
The Defamation Act 2013 (Section 5) makes it even easier for hosts to deny
responsibility for content on their websites. If you are, as the Act says, the
operator of a website, you may only need to pass on the complaint (subject to
conditions) and you are very well protected from legal action in defamation.
This is the case, in particular, where the writer of the material is named and
traceable. While the Defamation Bill was debated within the House of Lords,
many members referred to the impact upon Mumsnet and similar famous websites.
The Mumsnet online forum lobbied to say that while many people post on its very
influential discussion groups, it cannot check the truth or accuracy of each message;
all it is capable of doing is putting the writer and the complainant in touch
with each other and allowing them to establish the truth between themselves. The
operator of Mumsnet cannot be expected to judge the truth of each individual
statement, and so it would be exposed to far too much liability if this were
required of it.
I would argue that the same can be said of social media platforms such as
Facebook; surely they cannot be expected to have a positive duty to actively
look for false material or inaccurate news sharing across their entire websites?
At present, they only have the obligation to take particular types of material
down if it has been reported to them. In the Northern Irish case of CG v Facebook,
Facebook failed on this point and was held liable to the respondent in damages
for misuse of private information which it knew or ought to have known had been
published on its platform.
It is extremely difficult for a person who has been a victim of online defamation
to know who they can take action against or to have any confidence that any
legal actions would be successful. Cyberspace has no legal borders as it
crosses national boundaries; any message or piece of information can be instantaneously
dispersed across many geographical locations but there are no legal barriers
across the internet to prevent this flow.
As was predicted by Negroponte back in the 1990s, we now ‘socialize in digital
neighborhoods in which physical space [is] irrelevant and time [plays] a
different role’. In these neighborhoods we can be anonymous and untraceable; we
will write things we would never have said aloud, or at least would have given
more consideration, in a face-to-face discussion. As a result, in many cases of defamation
online, it is impossible to determine the physical location of the ISP, the server
or the author, and such material can be particularly damaging and harmful.
Although this anonymity does not stop law enforcement from attempting to track
down online breaches of the law, when regarding individual defamatory comments
this can be extremely difficult and time consuming, and therefore impractical.
Where an internet user has succeeded in legal action under defamation law, they may
be compensated by way of damages. Alternatively, or in addition, an injunction
may be issued. I doubt the value of financial compensation where the post has been
shared across the online community and seen by hundreds or thousands of other
users. Although an injunction should prevent any further spread, the post may
already have travelled far. The false information about that person has probably
already been easily published, easily accessed and easily shared, without any
hindrance to the many internet users involved. This is referred to by
Negroponte as ‘a new kind of fraud’ when he compares how easily he can send
something which is read on the internet, to ‘literally thousands of people all
over the planet’, compared to spreading a physical newspaper clipping. This can
seem completely harmless, because it is so easy, and this is where the problem lies.
The internet has become a kind of ‘worldwide collective of free speech’ which has
reached the stage of being out of control.
While many defend this and argue that the internet is the one place where free
speech remains unfettered, it seems clear that the coming years of this digital
age are going to present legislators and courts with many challenges regarding
online content. While the internet has facilitated a global freedom to express
and exchange ideas and opinions, which is important in a democratic society, it
has also facilitated a significant rise in misinformation and, ultimately, the spread
of fake news.
Truth is important to us as a society. Effort has been made to encourage the
spread of the truth in television and radio broadcasting; it was recognised
that this was important and so a whole regulatory and license system was
developed for this reason. Similarly, individuals and companies are willing to
spend huge amounts of money on legal fees for libel/defamation cases because
they feel that it is important that the truth is told and spread about them.
The belief that the truth is important and good within society is expressed
through the combination of bodies, authorities and mechanisms that I have
discussed in this chapter. The one area in which this belief is having no
impact is the internet.
There is plenty of regulation out there that is appropriate to the press and to
broadcast media. Although few in number, there are television and
radio channels that have lost their licenses for failing to properly fact-check
the content they broadcast. License revocation is the ultimate sanction; more
often they would only be fined. The problem is that these mechanisms
of monitoring and regulation are impractical, inappropriate or
insufficient when applied to the internet. The internet, as a
different platform of electronic media, has been left far apart from the other
platforms; it is regulated very informally and, aside from the IWF, has no
broad oversight by any agency controlling its content.
The internet is left as a gap which needs to be filled. Existing means of
regulation need to be adapted in order to suit the internet’s various
platforms, or new means need to be established and developed. Maybe its ‘global
reach and distributed architecture’ are what make it impossible for the
existing bodies to do this job.
The alternative is, perhaps, that new laws are the only outcome that will be
effective in ‘reining in’ the freedom currently enjoyed by internet users.
‘Fake News’: A Regulatory Solution?
Whether it is possible to regulate the content individuals post online and, if so, what
type of regulation would be most effective, are extremely complex questions.
While the most obvious form of regulation is legislation, this is not
straightforward with regard to cyberspace. Which actions or types of content would
such a cyberlaw criminalise and how would that law be enforced effectively?
These are questions legislators would need to consider from the outset and,
when it comes to fake news, the answers are certainly not obvious.
Legislating, although it may seem like the most direct and impactful means of
taking control, takes huge amounts of time, effort and resources. Before these
are wasted on a new law that will not have the required impact, these difficult
questions must be answered.
This piece takes the position that it is very possible for national laws to effectively
regulate cyberspace. Reidenberg took this view at an early stage.
While other cyberlaw theorists considered the internet to be inherently unregulable
by its design, and thought that it would be entirely independent and
unrestricted, Reidenberg saw the potential for unique types of control. At a
European Union level, the problem of fake news is recognised and attention is
being paid to the potential for a legislative solution. A public consultation
was launched by the European Commission in November 2017 (which ran until
February 2018), with the objective of assessing the effectiveness of current
actions addressing different types of fake news.
In a text adopted by the European Parliament, the Commission was called upon to
analyse the current legal framework and verify the possibility of new legislation.
The idea of a legal solution is, therefore, not a new concept.
How, then, could legislation control internet content? Traditional
laws consist of three elements: a director, a detector and an effector.
The director is the standard that has been set and the detector is the means of
detecting any deviation from that standard. The effector is the mechanism used
to push the deviant behaviour back towards the standard. It is difficult or,
arguably, impossible for any of these elements to function effectively with
regard to cyberspace. Those who would be subject to such laws are no more than
‘digital personae’ who cannot always be identified or physically located, let
alone fined or imprisoned. Furthermore, with users able to move between zones
governed by different regimes, state sovereignty would be undermined.
It is significantly more difficult and more expensive to enforce laws where the
unlawful activities or communications can cross national borders. Cases would
involve complex questions: does jurisdiction depend on where, geographically, the
unlawful act took place, or on where it was eventually received or detected?
When dealing with the internet as a global medium, lawmakers
will face these problems.
Reed discusses the way in which law does not operate as a system of control, because
this would assume that it is only the fear of the law’s sanctions which forces
us to behave in a way which complies with them. In reality, humans conduct
their lives according to social norms and believe that the law reflects them.
Rather than being controlled and deterred by law enforcement, people live by
its normative effects.
Reed states that a successful law will, therefore, either entrench already
existing norms or reinforce a developing one, like the UK law mandating the
wearing of seat belts.
If we are aware that there are already deeply entrenched norms existing in
cyberspace regarding the free flow of information and news, how can a law be
introduced which would contradict or challenge them? An example of the
ineffectiveness of this approach is the attempt to curtail file sharing of
digital music through copyright laws. Murray describes this clash between
cyberspace norms and legal rules as ‘regulatory flux’.
For legislation to be effective, it must be capable of being taken seriously and be
considered as meaningful or necessary. A national law, which would attempt to
exert control over all cyberspace users, would be unlikely to meet these
criteria; it would be considered unfeasibly burdensome or unenforceable.
One individual internet user cannot be expected to investigate the laws of
every country which may apply to his/her activity online, then look closely at
those to decide whether their activity is lawful or not. Similarly, a law may
not be perceived as applicable because it attempts to regulate a
technology that is out of date. This poses questions for the regulation
of cyberspace; the pace of legal change could never compare with the pace of
technological change.
Furthermore, if a piece of legislation is to be taken seriously and
considered meaningful, it must be understandable. An issue that is as
wide in scope as the spread of fake news would require vast amounts of
explanatory information involving technical terminology and precise detail. It
would be extremely difficult to provide sufficient detail in order to avoid any
uncertainty regarding legal application, without causing complete confusion.
Lessig was greatly influenced by Reidenberg and the cyber-paternalist school.
In Code 2.0 he uses a dot to
represent a person or thing which is being controlled or regulated.
He talks about four distinct, yet interdependent modalities of control: legal
constraints (threatening punishments), social norms (focusing on stigmas),
market (prices and supply) and technology or architecture (physical burdens).
These can support and enable, or undermine or oppose each other, but a complete
view of regulation will involve all four together.
Law is one mechanism of control over the dot, but it is not the only one or
even, necessarily, the most effective one. The law can operate directly,
telling society how to behave and controlling behaviour by threatening
punishment for deviation from the lawful standard. On the other hand, a law
can operate indirectly, by aiding one of the other modalities.
The same can be said regarding the regulation of actions online. While
legislation may play a part and contribute to control with indirect operation,
the other modalities need to play their part too and support the efforts, as a
law will not be effective on its own.
Lessig states that the area of content control is one in which the internet has
‘robbed the law of much of its power’, but that it is also an area in which
code has huge potential.
Due to the ‘decentralised design’ and ‘dispersed architecture’ of the internet,
it has often been considered to be immune from any censorship by the state.
This is not entirely true, however, as most internet users will have
encountered internet blocking and content filtering mechanisms which are used
to ensure that some control is in place. These techniques and technological
features are continuously being developed to do more to impose control over
access to particular types of content, webpages or websites. Lessig describes
code as ‘the instructions embedded in the software or hardware that makes
cyberspace what it is’.
Code is the architecture of cyberspace and it has the potential to provide
means of regulation.
The architecture of the internet will always influence the way in which an
individual uses it; it enables certain activities and behaviours but, by
design, it will also constrain and restrict others. The environment which
an internet user becomes familiar with and enjoys exists because of coding. In
the same way, it can be limited and controlled with the use of code. If this is the
case, the ones who regulate the coding of the internet and therefore control
its architecture, are the ones who can regulate the content. Similarly, areas
of the internet can charge for access or require paid subscriptions for their
use. If this does not restrict access, busy signals can do so. This is what
Lessig describes as the market modality of control. Just as with law,
architectural and market influences can contribute to control over a person or thing.
Each of these modalities has a cost and takes time to introduce, and so it
is important to consider in which ways, or in which combinations, they will be
most effective.
In the evidence to the parliamentary inquiry into fake news there are mentions of
architectural mechanisms relating to changes made by browsers or social media
platforms. One common suggestion is that genuine news outlets be allowed to
display a mark or sticker of verification on their webpages or publications.
The issue with such a proposal is that a decision would need to be made on
which authority should be allowed to grant the verification, and this would,
potentially, be an authority derived from government. If this were the case there
would, inevitably, be debate on whether government should have the power to
decide what is truth or should be considered to be truthful. This would
contradict the expectation that the public should be allowed to make up their minds
on what is truthful or legitimate. Another fault with this recommendation is
that it would not tackle the issue of fake news spread by ordinary social
media users; it would only verify the main news outlets. Individuals could not be
verified as they could not be distinguished as legitimate sources; the news
pages that they share could be verified, but their own posts could not.
Another recommendation for architectural means of control is that major players such as
Google and Yahoo! provide a ‘news toolbar’ which would appear on everyone’s search
pages. Using the toolbar, anyone could tag something that they consider to be
fake news, and if there were multiple reports then the story would be flagged to
be checked. Linda Greenwood compares this idea to an Amazon toolbar add-on
which allows her to add shopping to her basket from any website.
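The toolbar idea rests on a simple report-counting rule: once enough independent users flag a story, it is queued for human fact-checking. A minimal sketch, assuming a purely hypothetical threshold of three distinct reporters; the class, threshold and method names are my own illustration, not any real toolbar's design:

```python
from collections import defaultdict

class FakeNewsReports:
    """Collect user reports of suspected fake news and flag stories for review."""

    def __init__(self, threshold: int = 3):  # assumed threshold, for illustration
        self.threshold = threshold
        self.reporters = defaultdict(set)  # story URL -> set of reporting users
        self.flagged = set()               # stories queued for fact-checking

    def report(self, story_url: str, user_id: str) -> None:
        # Each user counts only once per story, so one person cannot
        # force a flag by reporting the same story repeatedly.
        self.reporters[story_url].add(user_id)
        if len(self.reporters[story_url]) >= self.threshold:
            self.flagged.add(story_url)

tracker = FakeNewsReports()
for user in ("alice", "bob", "carol"):
    tracker.report("news.example/dubious-story", user)

assert "news.example/dubious-story" in tracker.flagged
```

The deduplication step matters to the debate above: counting raw reports rather than distinct reporters would let a single motivated user have legitimate stories sent for review.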
The issue with this measure, I would suggest, is that a group must be delegated
responsibility for checking flagged stories, and there would be debate over
which group should have such power. Furthermore, it would not be enough to
introduce this add-on feature without promoting it and providing a great deal
of information on why it is vital that it is used (and used properly). Internet
users cannot be expected to embrace a feature and make use of it if they are
not aware of its importance.
Wisty, a computer consultant, recommended that social media sites should
display ‘traffic graphs’ for their trending posts, which would give users the
ability to check whether growth rates look to be reasonable or forced.
Although this measure would, potentially, be useful in evaluating the
truthfulness of a Tweet or shared story, it would require the user to go to the
effort of analysing a graph and attempting to understand it. Readers would
first have to doubt the accuracy of the news story and decide for themselves that it
was worth going to this effort. It may not be especially beneficial if this feature
were not automatically visible to the reader, beside the post or tweet.
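Wisty's 'traffic graph' suggestion amounts to an anomaly check on a post's growth rate. A minimal sketch, assuming hourly share counts and a purely illustrative 10x hour-on-hour threshold (both are my own assumptions, not part of the proposal):

```python
# Illustrative check behind the 'traffic graph' idea: compare hour-on-hour
# growth in a post's share count. Organic posts tend to grow gradually,
# whereas a sudden enormous multiplier may signal forced amplification.
# The 10x threshold below is an assumption for demonstration only.

def looks_forced(hourly_shares: list[int], max_growth: float = 10.0) -> bool:
    """Return True if any hour-on-hour growth multiplier exceeds max_growth."""
    for prev, curr in zip(hourly_shares, hourly_shares[1:]):
        if prev > 0 and curr / prev > max_growth:
            return True
    return False

organic = [5, 12, 30, 55, 80]      # steady growth
suspicious = [5, 8, 7, 900, 1400]  # sudden jump of over 100x in one hour

assert not looks_forced(organic)
assert looks_forced(suspicious)
```

Automating a check like this, rather than asking each reader to interpret a graph, would address the objection above that users are unlikely to analyse the data themselves.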
Google outline their attempts at introducing better mechanisms for controlling the
amount of misleading or inaccurate content.
They note that they have introduced fact-check labels to both their search and
news functions, and are taking steps to help users find fact-checked articles
with critical outlooks. They are working to improve the algorithms that produce
their search results, to reduce the number of sites that are able to reach the
top of the results through the use of deceptive headlines or other manipulative
techniques. Many of the suggested technological solutions require more than just an
introduction; they must be promoted, supported and have their importance rigorously
explained to internet users. As argued by Lessig, the architectural modality of
regulation is unlikely to be sufficiently effective on its own. These measures
could make a difference in the short term but longer-term approaches would be
critical alongside the attempts of the social media platforms and browsers.
Those who are very familiar with use of the internet (Lessig refers to them as
‘Netizens’) become aware of the behaviours that are considered to be socially
acceptable and those that are not, within their online environments and the
online communities in which they interact. This is relevant to the last of
Lessig’s modalities of control: social norms. He mentions examples of behaviour
that would not be socially accepted on the net, such as posting about irrelevant
matters on a newsgroup forum or talking too much in a discussion group.
Social norms can be extremely effective in controlling particular types of
behaviour. This is because stigmas are attached and sanctions can be imposed whenever
individuals conduct themselves in a way that is not acceptable to their
community. The risk of the type of scrutiny a person may be subjected to by
society may be enough to deter them from acting in a particular way and may
even be a more significant deterrent than the risk of legal sanction. Online,
the result may be that the user is removed from the newsgroup, blocked from the
forum, reported and have their profile deleted from the social media site or
have their subscription to a platform cancelled.
Some norms exist because of the existence of legislation and therefore line up with
behaviour that is legal. The example Lessig discusses is the wearing of seat
belts. While legislation directly regulates this by mandating it, social norms
also regulate it in that there is a stigma against those who choose not to comply.
The norm lines up with the influence of law. There are, however, other ways to develop
or change social norms; education is the most obvious. The funding of public education
campaigns can be immensely effective in creating stigmas and developing the sense
that a particular conduct is not acceptable.
Regarding the issue of fake news, I think it is particularly important for
education to be funded, allowing younger generations to learn about the
importance of thinking critically about the sources from which they are reading
their news. This, in turn, should result in a change in social norms relating
to the sharing of news and an understanding that spreading material that is
inaccurate or entirely false is dangerous and socially unacceptable.
Many of the written evidence reports from the parliamentary inquiry directly
discussed awareness and education as being a long-term requirement if the fake
news issue is to be tackled from the source.
The contribution by the founder of Simple Politics, a website and social media
presence, discussed the importance of equipping young people with the skills
they need to identify fake news themselves and instilling within them the desire
to hunt for facts, rather than taking information at face value.
Education on these issues and development of critical thinking skills within
schools will play a fundamental role in the changing of social norms as a
method of control because the younger generations can bring a new awareness
into their jobs and futures.
Gray and Professor Phippen, academics and authors in the field, focused their
attention exclusively on children and the particular problems they have
regarding recognition of false information. They clarified that the issue lies
in the blurring of boundaries between information which is authoritative and
that which is purely for entertainment purposes.
They expressed concern that a generation with limited awareness of ‘digital
moralities’ will be likely to face an erosion of their rights alongside abuses
by industries and governments; they describe the growth of fake news as being
the critical ‘warning signal’ that urgent action is needed. They suggest that
this urgent action should involve development in education on the digital
world; digital issues should not be kept to a restricted section of the school
curriculum but should be infiltrated within other subjects because these issues
influence all areas of life.
This type of educational development must stem from, and be supported by,
nationally acknowledged policies; it must come hand-in-hand with them. It is
essential that changes and additions to the school curriculum are adequately
resourced and respected as crucial for the young generations who face
these digital issues daily and must be equipped for them. Under this
objective, teachers must be adequately resourced and upskilled to be fully aware
and knowledgeable regarding the significance of these issues and the risks
associated with them. Alongside more technological, ICT-based lessons on
authoritative and reliable sources, and distinguishing factual information from
biased opinions or deceptive commentaries, children should be taught, more
generally, how to be critically minded and reflective when accessing news.
An example of relevant educational development is the new interactive game
launched by the BBC, which gives young people the challenge of taking on
the role of a journalist and spotting fake news. The task involves
decision-making regarding the trustworthiness and legitimacy of sources,
political claims and social media comments. The BBC ‘iReporter’ game, which is
aimed at 11-18 year olds, is one aspect of a wider initiative which also
provides teachers and older students with classroom resources and lesson plans
on various topics surrounding the issue of fake news.
A BBC Live lesson was also streamed for schoolchildren on ‘sorting fact from
fiction’. This interactive session involved experts from ‘Full Fact’, an
independent fact-checking organization.
More platforms must begin to incorporate fake news into their initiatives for
education and awareness.
While campaigns for education and awareness have the potential to influence attitudes
towards fake news and to produce a far greater understanding of its dangers,
they will not regulate the issue on their own. This cannot be a perfect solution
as it will, primarily, take time to inform people and change their views on the
importance of this issue, particularly with developments in education only
influencing the younger generations in the short term. It remains the case, however,
that education and awareness must play a major role in attempts to tackle fake news.
The production and consumption of news has changed remarkably within the 21st
century with the growth in the use of social media platforms, especially among
younger generations, playing a key role. Amongst many positive aspects,
reliance upon online outlets for news has many downfalls. Individuals can,
before they are even aware of it, enter into an echo-chamber or filter bubble
in which they are only exposed to stories and viewpoints which will confirm and
reinforce their existing beliefs. This leads to them having a significantly
limited perspective of the world around them.
Within these restricted feeds of news coverage, internet users become more impressionable
regarding sensationalized news stories, misleading headlines and deceptive,
‘click-bait’ articles which, in turn, facilitates the instantaneous, viral
spreading of fake news. Individuals can be fooled into thinking their social
media platforms are allowing them to become better informed citizens but they are,
instead, being misinformed entirely. Fake news, which is a broad concept and
can encompass various types of material, threatens democratic societies in new ways.
It is gradually becoming recognised that the issue of fake news is a serious one
and that its worrying implications must be tackled and brought under control. There
is plenty of regulation in existence regarding the press and broadcast media,
and these measures have the importance of truthfulness reflected within them;
however, the internet is one area for which control is limited. The internet is
a unique space for which many of the existing mechanisms for control are simply
insufficient. The freedom which internet users currently embrace and enjoy must
be reined in through new initiatives and developments which suit its unique nature.
Having come to these conclusions, I moved on to discuss various proposals for combatting
the issue. Beginning with legislative developments, specifically the concept of
an ‘Anti-Fake News Law’ or ‘Accuracy Act’, I focused upon the difficulties of
legislating for cyberspace. These hurdles included the lack of geographical
borders, the complexities of defining types of unlawful behaviour on the
internet, and the issues with applicability and understanding. This type of law
would not be effective without increased awareness and education into the issue
of fake news and technological adaptations, which would encourage individuals
to recognise its applicability.
Turning to potential technological developments, I discussed a number of recommendations
made by those who contributed to the parliamentary inquiry. These included
suggestions for verification marks, news reporting toolbars and ‘traffic-graph’
displays. The main issue, regarding these architectural changes, is the
inevitable debate over the granting of authority to verify or not verify
stories or sources, or the granting of powers to fact-check reported material.
For the chosen organisations to be trusted and relied upon, they must have government backing; and for the mechanism to be acknowledged and used, it must be adequately promoted and supported with explanations and instructions. As with new
legislation, new technological mechanisms will not be effective in tackling
fake news alone.
Finally, I discussed the importance of new initiatives for education on digital issues
and funded campaigns into public awareness of the problem. This, in my opinion,
is the most crucial of all developments towards achieving a regulatory solution.
The public will not understand the applicability of a new law on online
accuracy if they are not educated regarding the risks associated with fake news
spreading. In the same way, an individual internet user cannot be expected to
acknowledge or make use of a new function on their social media feed if they
have no knowledge of the existence of a fake news problem. If, however, changes
in law or architecture are made in partnership with major improvements to the
levels of education and awareness, they can and will be worth the time, money
and resources spent.
In conclusion, a ‘something must be done’ attitude is critical. It is inevitable that the issue will present new challenges to democratic societies in the coming months. This attitude, however, must not lapse into complacency: a decision simply to enact a new law or, alternatively, to place the responsibility upon social media websites or online news outlets to adapt their platforms will not be a worthwhile use of time, money and resources, because such measures cannot be sufficiently effective on their own. These changes must be made alongside
government-funded campaigns into promoting awareness of the issue, and major
initiatives for educating younger generations. This is what is required if
there is to be any hope of tackling the fake news issue in the long-term.
Barendt E, Freedom of Speech (2nd edn.,
Oxford University Press 2005).
Deibert RJ & Villeneuve N, ‘Firewalls and Power: An Overview of Global State Censorship of the Internet’ in Klang M & Murray A (eds), Human Rights in the Digital Age (Glasshouse Press 2005).
Laidlaw EB, Regulating Speech in Cyberspace (Cambridge
University Press 2015).
Lessig L, Code 2.0 (2nd edn., Basic Books 2006).
McIntyre TJ, ‘Child Abuse Images and Cleanfeeds: Assessing Internet Blocking Systems’ in Brown I (ed), Research Handbook on Governance of the Internet (Edward Elgar 2013).
Murray A, The Regulation of Cyberspace: Control in the
Online Environment (Routledge-Cavendish 2007).
Negroponte N, Being Digital (Vintage Books 1996).
Packard A, Digital Media Law (Wiley-Blackwell
Pariser E, The Filter Bubble: What the Internet is
Hiding from You (Penguin 2011).
Reed C, Making Laws for Cyberspace (Oxford
University Press 2012).
Rogers E, Diffusion of Innovations (5th
edn., Free Press 2003).
Smartt U, Media & Entertainment Law (2nd
edn., Routledge 2014).
Sunstein CR, Republic.com 2.0 (Princeton University Press 2007).
Ames M & Naaman M, ‘Why We Tag: Motivations for annotation in mobile and online media’ ACM Digital Library.
Blom J &
Hansen K, ‘Click bait: Forward reference as lure in online news headlines’
(2015) 76 Journal of Pragmatics.
Diddi A & LaRose R, ‘Getting Hooked on News: Uses and Gratifications and the Formation of News Habits Among College Students in an Internet Environment’ (2006) 50(2) Journal of Broadcasting & Electronic Media.
Eslami M & others, ‘Reasoning about Invisible Algorithms in News Feeds’ ACM Digital Library.
Hermida A, Fletcher F & others, ‘Share, Like, Recommend’ (2012) 13(3-6) Journalism Studies.
Kaplan A & Haenlein M, ‘Users of the world, unite! The challenges and opportunities of social media’ (2010) 53(1) Business Horizons.
Knapp RH, ‘The
Psychology of Rumor’ (1944) 8(1) Public Opinion Quarterly.
Kwak H, Lee C & others, ‘What is Twitter, a social network or a news media?’ ACM Digital Library.
Lee C & Ma
L, ‘News sharing in Social Media’ (2012) 28 Computers in Human Behaviour.
Marwick A & Boyd D, ‘Twitter users, context collapse, and the imagined audience’
(2010) 13(1) New Media and Society.
‘The Nature of Responsible Journalism’ (2011) 3(1) Journal of Media Law.
Murray A &
Scott C, ‘Controlling the New Media’ (2002) 65 Modern Law Review.
Nov O &
others, ‘Analysis of participation in an online photo sharing community’ (2010)
61(3) Journal of the American Society for Information Science and Technology.
Pavlik J, ‘New Media and News: Implications for the future of journalism’ (1999) 1(1) New
Media and Society.
Pentina I & Tarafdar M, ‘From ‘information’ to ‘knowing’: Exploring the role of social media in contemporary news consumption’ (2014) 35 Computers in Human Behaviour.
Lesson, ‘Sorting fact from fiction’ (22 March 2018).
BBC News, ‘BBC
game challenges young people to spot ‘fake news’’, BBC Family & Education
(14 March 2018).
Beaumont C, ‘Mumsnet founders demand libel law reform’ The
Telegraph (19 November 2010).
Silverman C, ‘Viral fake election news outperformed real news on Facebook’ BuzzFeed News (November 2016).
European Commission Consultation, ‘Public consultation on fake news and online disinformation’
(November 2017- February 2018).
European Parliament (Text adopted), ‘Online Platforms and the Digital Single Market’ (Resolution of 15 June 2017).
HL Deb 15 January 2013, vol 742.
& Raymer K, UK Parliament POSTnote: ‘Online Information and Fake News’
Independent Press Standards Organisation (IPSO), Editors’ Code of Practice.
Leveson Inquiry Report (November 2012).
Pew Research Center: Global Attitudes & Trends, ‘Global Publics Embrace Social Networking’ (2010).
Purcell K & others, ‘Understanding the Participatory News Consumer’ (2010) Pew Research Center Research Paper.
Reuters Digital News Report 2017.
International Forum for Responsible Media Blog (Inforrm).
Leveson Inquiry (Official National Archive).
The Office of
Communications (Ofcom): Revocation Notice (26 July 2017).
Cartier International AG v British Sky Broadcasting Ltd [2014] EWHC 3354 (Ch).
CG v Facebook Ireland Ltd.
v News Group Newspapers Ltd  EWCA Civ 1772.
Dramatico Entertainment Ltd v British Sky Broadcasting Ltd [2012] EWHC 1152 (Ch).
Godfrey v Demon Internet Ltd.
Jameel v Wall Street Journal Europe Sprl [2007] 1 AC 359.
Metropolitan International Schools Ltd v Designtechnica Corporation [2011] 1 WLR 1743, 1757; [2009] EWHC 1765.
Paramount Home Entertainment International Ltd v British Sky Broadcasting Ltd EWHC
Reynolds v Times Newspapers Ltd [2001] 2 AC 127.
Tamiz (Payam) v Google Inc, Google UK Ltd [2012] EWHC 448 (QB).
European Convention on Human Rights.
Motor Vehicles (Wearing of Seat Belts) Regulations 1982.
For the sake of convenience, ‘Fake News’ will not be appearing capitalised or
in quotation marks on every occasion, although it is a controversial term.
A. Kaplan & M. Haenlein, ‘Users of the world, unite! The challenges and
opportunities of social media’ (2010) 53(1) Business Horizons, 61.
Pew Research Center, ‘Global Publics Embrace Social Networking’ (2010)
accessed 10 April 2018.
K. Purcell & others, ‘Understanding the Participatory
News Consumer’ (2010) Pew Research Center Research Paper,
accessed 10 April 2018.
H. Kwak, C. Lee & others, ‘What is Twitter, a
social network or a news media?’  ACM Digital Library, 591.
A. Hermida, F. Fletcher & others, ‘Share, Like,
Recommend’ (2012) 13(3-6) Journalism Studies
<www.tandfonline.com/doi/abs/10.1080/1461670X.2012.664430> accessed 13
J. Singer & others, Participatory Journalism (Wiley-Blackwell 2011).
Reuters Digital News Report 2017,
10, accessed 11 April 2018.
C.R. Sunstein, Republic.com 2.0 (Princeton
University Press 2007), 107.
I. Pentina & M. Tarafdar, ‘From ‘information’ to ‘knowing’: Exploring the role of social media in contemporary news consumption’ (2014) 35 Computers in Human Behaviour, 211.
J. Blom & K. Hansen, ‘Click bait: Forward reference
as lure in online news headlines’ (2015) 76 Journal of Pragmatics, 87.
C. Lee & L. Ma, ‘News sharing in Social Media’ (2012) 28 Computers in Human
A. Diddi & R. LaRose, ‘Getting Hooked on News:
Uses and Gratifications and the Formation of News Habits Among College Students
in an Internet Environment’ (2006) 50(2)
Journal of Broadcasting & Electronic Media 183, cited in Lee & Ma (n 13).
M. Ames & M. Naaman, ‘Why We Tag: Motivations for annotation in mobile and
online media’  ACM Digital Library 971, cited in Lee & Ma (n 13).
E. Rogers, Diffusion of Innovations (5th
edn., Free Press 2003), cited in Lee & Ma (n 13).
O. Nov & others, ‘Analysis of participation in an online photo sharing
community’ (2010) 61(3) Journal of the American Society for Information Science
and Technology 555, cited in Lee & Ma (n 13).
M. Eslami & others, ‘Reasoning
about Invisible Algorithms in News Feeds’ 
ACM Digital Library, 156.
T.J. McIntyre, ‘Child Abuse Images and Cleanfeeds: Assessing Internet Blocking Systems’ in I. Brown (ed), Research Handbook on Governance of the Internet (Edward Elgar 2013).
Cartier International AG v British Sky Broadcasting Ltd [2014] EWHC 3354 (Ch), Dramatico Entertainment Ltd v British Sky Broadcasting Ltd [2012]
EWHC 1152 (Ch), Paramount Home
Entertainment International Ltd v British Sky Broadcasting Ltd  EWHC
C. Beaumont, ‘Mumsnet founders demand libel law reform’ The Telegraph (19 November 2010)
accessed 17 April 2018.
CG v Facebook Ireland Ltd  NICA
R.J. Deibert & N. Villeneuve, ‘Firewalls and Power: An Overview of Global
State Censorship of the Internet’ in M. Klang & A. Murray (eds), Human Rights in the Digital Age (Glasshouse
Press 2005), 111.