
UK Media Law Pocketbook Second Edition published by Routledge 30th November 2022
By Tim Crook
Most new media law cases in the UK and European courts concern communications on social media and online publication.
The focus of this chapter is on how social media platforms generate new kinds of media law and regulation issues.
These issues include the risk of minimalist implication, liability for ‘likes’ and ‘emojis’, and liability for hidden computer-coded information such as tags, file names and alternative text.
The ECtHR has ruled that the privacy right to be forgotten can be implemented by de-linking through the removal of tags in web postings.
New legislation is being enacted to determine legal responsibility for online stalking, intimidation and hate communication, and new case law is emerging seeking to preserve freedom of expression rights in these contexts.
This is a field of media law where public communication and liability operates within private corporately owned space with rights determined by contract law.
If you are reading and accessing this publication as an e-book, such as on the VitalSource platform, please be advised that it is Routledge policy for click-through to reach the home page only. However, copying and pasting the URL into the address bar of a separate page on your browser usually reaches the full YouTube, Soundcloud and online links.
The companion website pages will contain all of the printed and e-book’s links with accurate click-through and copy and paste properties. Best endeavours will be made to audit, correct and update the links every six months.

Video-cast on Social Media Law in a few minutes
Bullet points summarizing key aspects of social media law
11.1 THE CORPORATE PRIVATE LAW CONTEXT IN SOCIAL MEDIA LAW
Online Links Printed Book
Pages 257 and 258
Dennis Cooper fears censorship as Google erases blog without warning Guardian 14 July 2016
https://www.theguardian.com/books/2016/jul/14/dennis-cooper-google-censorship-dc-blog
Twitter. Permanent suspension of @realDonaldTrump
https://blog.twitter.com/en_us/topics/company/2020/suspension
BBC 9 January 2021 “Twitter ‘permanently suspends’ Trump’s account.”
https://www.bbc.co.uk/news/world-us-canada-55597840
Press Gazette Jan 5 2021 “Google reinstates TalkRadio’s YouTube channel after being accused of ‘censorship’”
https://pressgazette.co.uk/google-deletes-talkradio-youtube-channel-for-unspecified-violation-of-community-guidelines/
Facebook Community Standards. ‘The Facebook Community Standards outline what is and isn’t allowed on Facebook.’
https://transparency.fb.com/en-gb/policies/community-standards/
The Twitter Rules ‘Our rules are to ensure all people can participate in the public conversation freely and safely.’
https://help.twitter.com/en/rules-and-policies/twitter-rules
Twitter. ‘Our approach to policy development and enforcement philosophy.’
https://help.twitter.com/en/rules-and-policies/enforcement-philosophy
YouTube- Rules and policies. Community Guidelines
https://www.youtube.com/intl/ALL_uk/howyoutubeworks/policies/community-guidelines/
WordPress.com Support. Policies & Safety User Guidelines
https://wordpress.com/support/user-guidelines/
Google Content Policies
https://support.google.com/accounts/answer/147806
Requests to Google for delisting and reporting content for legal reasons
https://support.google.com/legal/answer/3110420?rd=2
Google. Right to be forgotten overview and submitting a request to remove material and delist it from Google search results.
https://support.google.com/legal/answer/10769224?hl=en-GB&ref_topic=4556931
YouTube. How Content ID works.
https://support.google.com/youtube/answer/2797370?hl=en-GB
TikTok Community Guidelines
https://www.tiktok.com/community-guidelines?lang=en
11.2 THE NORWICH PHARMACAL RULE ON IDENTIFYING ANONYMOUS SOCIAL MEDIA AUTHORS
Online Links Printed Book
Page 259
Pinsent Masons ‘Disclosure: a guide to seeking Norwich Pharmacal orders.’
https://www.pinsentmasons.com/out-law/guides/disclosure-guide-seeking-norwich-pharmacal-orders
Collier & Ors v Bennett [2020] EWHC 1884 (QB) (15 July 2020)
https://www.bailii.org/ew/cases/EWHC/QB/2020/1884.html
BW Legal Services Limited v Glassdoor Inc [2022] EWHC 979 (QB)
https://www.5rb.com/case/bw-legal-services-limited-v-glassdoor-inc/
Norwich Pharmacal Co v Customs and Excise Commissioners [1973] UKHL 6 (26 June 1973)
https://www.bailii.org/uk/cases/UKHL/1973/6.html
11.3 LIBEL AND PRIVACY RISKS IN SOCIAL MEDIA LAW
Online Links Printed Book
Pages 260 and 261
McAlpine v Bercow [2013] EWHC 1342 (QB) (24 May 2013)
https://www.bailii.org/ew/cases/EWHC/QB/2013/1342.html
Foster v Jessen (Rev1) [2021] NIQB 56 (27 May 2021)
https://www.bailii.org/nie/cases/NIHC/QB/2021/56.html
Monroe v Hopkins [2017] EWHC 433 (QB) (10 March 2017)
https://www.bailii.org/ew/cases/EWHC/QB/2017/433.html
‘“High drama, with the lowest stakes” – what really happened at the Wagatha Christie trial’ by Hadley Freeman, The Guardian 28 May 2022
https://www.theguardian.com/lifeandstyle/2022/may/28/wagatha-christie-vardy-v-rooney-celebrity-trial
Vardy v Rooney [2020] EWHC 3156 (QB) (20 November 2020)
https://www.bailii.org/ew/cases/EWHC/QB/2020/3156.html
Vardy v News Group Newspapers Ltd [2022] EWHC 946 (QB) (21 April 2022)
https://www.bailii.org/ew/cases/EWHC/QB/2022/946.html
Vardy v Rooney [2022] EWHC 2017 (QB) (29 July 2022)
https://www.bailii.org/ew/cases/EWHC/QB/2022/2017.html
Rocknroll v News Group Newspapers Ltd [2013] EWHC 24 (Ch) (17 January 2013)
https://www.bailii.org/ew/cases/EWHC/Ch/2013/24.html
11.4 CONTEMPT OF COURT RISKS IN SOCIAL MEDIA LAW
Online Links Printed Book
Page 262
British Broadcasting Corporation & Eight Other Media Organisations, R (on the application of) v F & D [2016] EWCA Crim 12 (11 February 2016)
https://www.bailii.org/ew/cases/EWCA/Crim/2016/12.html
Craig Murray against Her Majesty’s Advocate [2022] ScotHC HCJAC_14 (25 March 2022)
https://www.bailii.org/scot/cases/ScotHC/2022/2022_HCJAC_14.html
Her Majesty’s Attorney General v Dowie [2022] EWFC 33 (13 April 2022)
https://www.bailii.org/ew/cases/EWFC/HCJ/2022/33.html
HM Attorney General v Hartley [2021] EWHC 1876 (Fam) (06 July 2021)
https://www.bailii.org/ew/cases/EWHC/Fam/2021/1876.html
11.5 HATE SPEECH AND OTHER CRIMINAL OFFENCES IN SOCIAL MEDIA LAW
Online Links Printed Book
Page 263
Crown Prosecution Service. Social Media – Guidelines on prosecuting cases involving communications sent via social media. Revised: 21 August 2018.
https://www.cps.gov.uk/legal-guidance/social-media-guidelines-prosecuting-cases-involving-communications-sent-social-media
Chambers v Director of Public Prosecutions [2012] EWHC 2157 (Admin) (27 July 2012)
https://www.bailii.org/ew/cases/EWHC/Admin/2012/2157.html
Miller, R (On the Application Of) v The College of Policing [2021] EWCA Civ 1926 (20 December 2021)
https://www.bailii.org/ew/cases/EWCA/Civ/2021/1926.html
11.6 ONLINE SAFETY LEGISLATION AND THE JOURNALIST EXEMPTION
Online Links Printed Book
Page 264
World-first online safety laws introduced in Parliament
https://www.gov.uk/government/news/world-first-online-safety-laws-introduced-in-parliament
Online Safety Bill Originated in the House of Commons, Sessions 2021-22, 2022-23
https://bills.parliament.uk/bills/3137/publications
News Media Association Welcomes Press Exemption For New Communications Offences
http://www.newsmediauk.org/News/nma-welcomes-press-exemption-for-new-communications-offences/278395
Press Gazette 17 March 2022 ‘Online Safety Bill introduced to Parliament, but industry seeks further protections for journalism.’
https://pressgazette.co.uk/online-safety-bill-introduced-to-parliament-industry-seeks-further-protections-for-journalism/
11.7 SOCIAL MEDIA LAW AT STRASBOURG
Podcast 11.7: a downloadable sound file explaining social media law at Strasbourg
Online Links Printed Book
Page 265
DELFI AS v. ESTONIA – 64569/09 – Grand Chamber Judgment [2015] ECHR 586 (16 June 2015)
https://www.bailii.org/eu/cases/ECHR/2015/586.html
5RB law report and analysis of First Section ruling in Delfi AS v Estonia
https://www.5rb.com/case/delfi-v-estonia/
BIANCARDI v. ITALY – 77419/16 (Judgment: No Article 10 – Freedom of expression-{general}: First Section) [2021] ECHR 972 (25 November 2021)
https://www.bailii.org/eu/cases/ECHR/2021/972.html
11.8 UPDATES AND STOP PRESS IN SOCIAL MEDIA LAW
Erratum: proofing correction to the printed book
Page 260, line 4. The date reference for the libel trial of Rebekah Vardy and Coleen Rooney should be 2022, not 2012 as stated.
Secondary Media Law Codes and Guidelines
IPSO Editors’ Code of Practice in one page pdf document format https://www.ipso.co.uk/media/2032/ecop-2021-ipso-version-pdf.pdf
The Editors’ Codebook 144 pages pdf booklet 2023 edition https://www.editorscode.org.uk/downloads/codebook/codebook-2023.pdf
IMPRESS Standards Guidance and Code 72 page 2023 edition https://www.impress.press/wp-content/uploads/2023/02/Impress-Standards-Code.pdf
Ofcom Broadcasting Code Applicable from 1st January 2021 https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code Guidance briefings at https://www.ofcom.org.uk/tv-radio-and-on-demand/information-for-industry/guidance/programme-guidance
BBC Editorial Guidelines 2019 edition 220 page pdf http://downloads.bbc.co.uk/guidelines/editorialguidelines/pdfs/bbc-editorial-guidelines-whole-document.pdf Online https://www.bbc.com/editorialguidelines/guidelines
Office of Information Commissioner (ICO) Data Protection and Journalism Code of Practice 2023 41 page pdf https://ico.org.uk/media/for-organisations/documents/4025760/data-protection-and-journalism-code-202307.pdf and the accompanying reference notes or guidance 47 page pdf https://ico.org.uk/media/for-organisations/documents/4025761/data-protection-and-journalism-code-reference-notes-202307.pdf
IMPRESS Standards Code 2023 and its innovatory approach to journalism standards in the Internet and social media context
See: https://www.impress.press/wp-content/uploads/2023/02/Impress-Standards-Code.pdf
IMPRESS is a UK regulator for journalism publishers which has been recognised by the Press Recognition Panel established by Royal Charter following the Leveson Inquiry into press standards which reported in 2012.
Many of the publishers regulated by IMPRESS are online outlets only or hybrid in terms of online and print. The Standards code, therefore, addresses ethical and standards issues strongly in the context of news websites and social media.
IMPRESS says under the headings of scope and remit: ‘The scope of journalism is broad and includes publishing content on the publisher’s website and official social media accounts.’
At page 6 the Code document highlights the legal and ethical risks of ‘sourcing and publishing content from online or social media sources and UGC [user generated content].’
In its guidance for Clause 1 (Accuracy), at page 17 IMPRESS advises:
‘(a) be aware of the use of artificial intelligence (AI) and other technology to create and circulate false content (for example, deepfakes), and exercise human editorial oversight to reduce the risk of publishing such content;
(b) be aware of the use of AI by news distributors to generate, curate, rank and circulate news;
(c) exercise editorial oversight to ensure the accuracy of any content produced by an AI system;’
And at ‘(e) clearly label and provide hyperlinks where possible to corroborate sources that verify the content (see Clause 10: Transparency).’
IMPRESS appears to be the first UK journalism regulator to engage with and advise directly in its Code on standards in respect of artificial intelligence.
At page 22 in the guidance on posting accuracy corrections online the advice is very specific to web-based technology and platform architecture:
‘For online corrections, a publisher should consider where the story first appeared, the amount of time it was available online, and how many people had viewed the article. A story may sit as the lead on a website for many hours before moving to a less prominent position. The correction may be pinned or displayed on the news publisher’s homepage for a reasonable period to allow readers to see it. Similarly, if the error was widely shared in a post or tweet, it may be appropriate to promote the correction to reach the same audience or pin it to the social media account for a reasonable period.’
At page 24 the code guidance said proper attribution and plagiarism avoidance ‘extends to content taken or submitted from social media.’
At page 35 the IMPRESS code addresses Clause 4 ‘encouraging hatred or abuse of a person or group based on their characteristics’ and advises: ‘This may include subjecting them to abuse on social media, excluding them from online communities…’ It can be presumed that IMPRESS is conscious here of the harmful ‘pile-on’ phenomenon where, for example, a Twitter account holder with a very high number of followers triggers an overwhelming wave of abuse against an individual or group.
At page 40, the IMPRESS guidance on Clause 5 (Harassment) states:
‘…journalists should use their professional emails and social media accounts, which clearly identify:
(a) themselves as a journalist; and
(b) how they can be contacted; and
(c) if applicable, the publisher they work for.’
IMPRESS also advises on the same page that ‘Journalists should normally keep records of all communications relating to their work for a minimum of 12 months; this includes timestamps, logs, written, email and messaging communications.’
Professor Tim Crook, author of UK Media Law Pocketbook 2nd Edition, states in the printed text that the retention of records should stretch back at least six years, perhaps even as long as a career lasts. See the Addition to Chapter Two, Guide to Court Reporting: ‘Keeping reporting files and the importance of accurate and professional shorthand as a journalist/reporting skill when covering the courts,’ and the analysis of the Tony Palmer High Court case ruled on at the end of 2022.
At pages 50-1 in the guidance for Clause 7 Privacy IMPRESS quite rightly reminds journalists that:
‘if a person posts images of themselves on social media with privacy settings in place, they would likely have a reasonable expectation that this would not be posted elsewhere. Therefore, this clause could be breached if a publisher takes and publishes images of a person from a private social media account without the individual’s consent. However, it is important for publishers to understand that although a reasonable expectation of privacy may be weaker where no privacy settings are in place, the absence of privacy settings will not necessarily remove any expectation of privacy from the individual concerned. Posting an image on an account without privacy settings does not mean that the individual is consenting to the publication of the images by journalists or publishers, which may reach a wider or entirely different audience than those usually viewing the content on the individual’s account.’
The IMPRESS code on protecting reasonable expectation of privacy not unexpectedly at page 54 defines unacceptable deceptive behaviour as ‘phone, computer or social media account hacking’ and ‘Additionally, publishers must refrain from using deceptive methods to obtain information. This includes using fake social media accounts to access individuals or groups online.’
The code of standards does offer the exception of ‘using secret devices and deceptive methods where it is in the public interest to do so’, though of course when such methods are deployed and challenged, criminal offences could be committed for which no public interest defence is available in law.
Still on privacy at page 55 IMPRESS emphasises: ‘As explained in the guidance to Clause 7.1, journalists should not knowingly publish material that has been acquired by breaching a person’s social media privacy settings. This means that where a journalist obtains material from social media, and it is clearly sourced, they should take reasonable steps to gain consent before using it.’
At page 56, the IMPRESS guidance goes into some detail about acceptable gathering of journalistic information from social media accounts, for example on Facebook:
‘It will not always be evident whether a person has intended to restrict access to information on social media, as privacy settings will vary depending on the platform and the individual’s literacy on social media. In some cases, however, it will be evident that a person used restricted privacy settings to limit the audience from viewing their material or contacting them. For example, Facebook allows users to select who can view their posts and ‘look them up’ using their email address or phone number. A person may select ‘friends’ from the list of options provided, which also includes ‘only me’, ‘friends of friends’ and ‘everyone’. In that case, it may be a breach of this clause for a journalist to extract and use material posted without their consent.’
IMPRESS is repeatedly emphatic on the standard of privacy consideration it expects of journalists in respect of sourcing from social media:
Page 57: ‘..journalists should respect the privacy settings of the deceased’s social media accounts.’
Page 58: ‘Contacting people to gain first-hand details can also be problematic for journalists, particularly when the story relates to a traumatic event or death. When gathering information from social media sources, journalists should follow the guidance provided..’
For the guidance on Clause 9 in respect of Suicide and Self Harm, IMPRESS advises at page 66:
‘Publishers should signpost sources of support such as helplines when reporting on suicide. This could be in the article by-line, footer or a pinned comment on social media.’
And- [publishers] ‘should be cautious when re-publishing content from social media, such as comments on Facebook tribute walls, as such messages can inadvertently glamorise suicide, particularly for vulnerable young people.’
The updated IMPRESS Standards Code and Guidance released in February 2023 is a welcome and progressive contribution to guiding professional journalists on ethics when working online and with social media. Even if you are a journalist regulated by IPSO, the IMPRESS resource is informative and enlightening.
It should also be appreciated that the courts are obliged by the Human Rights Act under Section 12 to take into account journalism codes of ethics when ruling on the balancing of freedom of expression and privacy rights.
The IMPRESS code is very much part of the journalism standards topography that could be consulted and cited in legal argument; even where an individual journalist or publisher is not specifically subject to IMPRESS regulation.
March 2023 High Court ruling in an alleged ‘Twibel’ case: a libel claim arising from a tweet.
Versi v Husain (aka Ed Husain) (Rev1) [2023] EWHC 482 (KB) (03 March 2023)
See: https://bailii.org/cgi-bin/format.cgi?doc=/ew/cases/EWHC/KB/2023/482.html
Between:
MIQDAAD VERSI Claimant
- and -
MOHAMED HUSAIN (AKA ED HUSAIN) Defendant
Ruling by His Honour Judge Lewis: the outcome of the trial of preliminary issues concerning the meaning of a Tweet, whether it is fact or opinion, and whether the Tweet is defamatory of the claimant.
Paragraphs 1 to 7 set out what the case was about:
1 The claimant is the former director of media monitoring at the Muslim Council of Britain and describes himself as a campaigner in his own right against Islamophobia, particularly with regards to the representation of Muslims.
2 The defendant is an author, academic and an adviser to western governments on Islamist extremism, terrorism and national security.
3 The claimant has sued the defendant for libel in respect of a tweet posted by the defendant on 21 November 2020 (“the Tweet”).
4 The claimant issued proceedings on 17 November 2021, a few days before the expiry of the limitation period. He seeks damages of at least £25,000 and an injunction preventing republication of the words complained of, or similar words defamatory of the claimant.
5 On 28 April 2022, Nicklin J directed that there be a trial of the following preliminary issues pursuant to CPR 3.1(2)(i) and (j) and CPR PD 53B para 6: (i) the natural and ordinary meaning of the statement complained of; (ii) whether the statement complained of is (or includes) a statement of fact or opinion; and (iii) whether the statement is defamatory of the claimant at common law.
The Tweet
6 The Tweet was a “quote tweet” in which the defendant republished an earlier tweet of the claimant, with his own comment added.
7 A copy of the Tweet as it would have appeared to readers is set out in the schedule to this judgment. The text was as follows:
“Pipe down, you
pro-Hamas
pro-Iran
pro-gender discrimination
pro-blasphemy laws
pro-sectarian
anti-Western
‘Representative’ of an Islamist outfit.
[Embedded tweet in box] Miqdaad Versi – 1h
Why does Fraser Nelson – a man who as editor
is accountable for so much anti-Muslim hate
propagated in the Spectator – think it is
appropriate to explain Islamophobia to a Muslim woman?…
Show this thread”
The court’s decision is set out in paragraphs 48, 49 to 53 and 57 to 61
Paragraph 48 on meaning.
I am satisfied that the natural and ordinary meaning of the Tweet is as follows:
a. The claimant has expressed views that are supportive of the repressive regime in Iran, gender discrimination, blasphemy laws and sectarianism and which are anti-Western.
b. The claimant has expressed views that are supportive of Hamas, a militant Islamist group with known links to violence.
c. The claimant holds extremist, Islamist views. His endorsement of such views is so objectionable that he has no place participating in this public debate.
Paragraphs 49 to 53 on Fact or opinion?
49 The relevant law was summarised by Nicklin J in Koutsogiannis at [16].
50 I agree with both parties that the Tweet contains statements of fact, namely that the claimant has expressed views that are supportive of Hamas, Iran, gender discrimination, blasphemy laws and sectarianism and that he has been the representative of an organisation.
51 Whilst the term “anti-Western” could be taken in some contexts as a value judgement on the claimant’s views, in this case it was included within a list of factual matters, and I agree with the parties that it would be understood to be a statement of fact.
52 Stating or implying that someone holds extremist views may be a statement of fact, or of opinion. It will depend on context. It is important to remember that the Tweet was sent as part of a debate on Twitter on the politics of the Middle East in which participants were expressing their opinions on the views of others. The Tweet would have been understood by the ordinary reader as being the author’s evaluation of the claimant’s public statements. It was a comment, or expression of opinion, on the views expressed by the claimant.
53 I am satisfied, therefore, that the statement complained of comprised factual statements about the claimant, and an opinion in respect of them. If looked at in terms of the natural and ordinary meaning, limbs (a) and (b) are statements of fact, whereas (c) is an expression of opinion.
Paragraphs 57 to 61 on whether the Tweet was defamatory.
‘57 We live in a modern, diverse society which recognises the importance of freedom of thought, and of expression. Whilst there is a broad consensus within society on matters such as the rule of law, on many issues of public policy there is not. Our democratic process relies on robust debate and discussion and allowing the free expression of views. Not all views will be mainstream, and at every election there are candidates who stand on platforms that reflect the range of views in society, including from both ends of the political spectrum. Ordinarily, right-thinking members of society generally would not think less of someone for simply expressing their views on a matter, or disagreeing with another.
58 A statement about someone’s views is only defamatory if it attributes views that would lower a person in the estimation of “right-thinking people generally”, and a statement is not defamatory if it would only tend to have an adverse effect on the attitudes to the claimant of a certain section of society, see Monroe at [50]. In Monroe, Warby J explained that the judge’s task is to determine whether the behaviour or views that the offending statement attributes to a claimant are contrary to common, shared values of our society [51].
59 The defendant says that the meaning is not defamatory at common law because the defendant is advancing a criticism as to the effect of the claimant’s views. The same point was raised in Mughal v Telegraph Media Group Limited [2014] EWHC 1371 (QB). Tugendhat J considered that the claimant’s views were not violent views but were ones which tended nevertheless to have dangerous consequences. That was not defamatory of the claimant since the criticism was as to the effect of his views, and not of his character. This is not the position here. The criticism being made is of the claimant’s views, not the effect of those views. Furthermore, it was a criticism of the claimant having expressed those views.
60 In this case I am satisfied that the natural and ordinary meaning conveyed by the Tweet was defamatory by the standards of the common law.
61 Whilst stating that a person holds some of the views identified in the Tweet would not in itself be defamatory, the Tweet needs to be looked at in its entirety. Right thinking members of society generally would deplore those who express views in support of Hamas, as a militant Islamist group with known links to violence. It is also contrary to the common or shared values of our society to express extremist views that are so objectionable as to undermine the legitimacy of the claimant’s own participation in public debate. Attributing such views to the claimant would lower a person in the estimation of “right-thinking people generally”. The imputation is one that would tend to have a substantially adverse effect on the way that people would treat the claimant, and their attitude towards him.’
Artificial Intelligence and Media Law: the implications of ChatGPT and other systems that robotically replicate human communication. 30th April 2023
Artificial intelligence has been used in journalism and many other forms of professional work for many years. The spelling and grammar checking software in word processing programmes is a form of AI. So are data processing online platforms such as Google and Bing.
Data journalism software which ‘scrapes’ data online to create visual representations of data and information is also AI, as are the programmes which vocalise text, some of which provide sound presentation that can, rather eerily, appear as though it has been performed by a professional broadcast journalist or actor. AI programmes can absorb prior recordings of an individual’s voice and be used by a fraudster to imitate convincingly the voice of that individual in sound-only communications.
Siri provided by Apple and Alexa provided by Amazon are examples of the rapidly developing forms of AI which are coming close to providing the human-like robots that used to be the stuff of science fiction.
ChatGPT and programmes like it raise a range of legal and ethical issues in their engagement with journalism and digital communication. As already reported on this companion website page, IMPRESS is the first UK journalism regulator to recognise professional ethical obligations in the use of AI.
In April 2023, the first reported dismissal of a journalism editor for using ChatGPT to construct a fake interview with seven-times F1 world champion Michael Schumacher emerged from Germany, where Die Aktuelle had published the mock interview. Mr Schumacher’s family said they were considering taking legal action against the magazine, which is owned by the Funke media group.
The managing director of the group Bianca Pohlman apologised and said: ‘This tasteless and misleading article should never have appeared. It in no way meets the standards of journalism that we and our readers expect from a publisher like Funke. As a result of the publication of this article, immediate personnel consequences will be drawn. Die Aktuelle editor-in-chief Anne Hoffmann, who has held journalistic responsibility for the paper since 2009, will be relieved of her duties as of today.’
See: Observer and Reuters ‘Magazine editor sacked over AI-generated Schumacher interview’ at: https://www.theguardian.com/sport/2023/apr/22/michael-schumacher-formula-one-interview-die-aktuelle-editor-sacked and Mail Online: ‘The editor of German magazine Die Aktuelle has reportedly been fired,’ at: https://www.dailymail.co.uk/sport/formulaone/article-12002195/Editor-Die-Aktuelle-fired-producing-fake-interview-Michael-Schumacher.html
AI and copyright/intellectual property law are already being addressed, largely because sophisticated forms of AI such as ChatGPT depend on sourcing information. If the information carries IP rights, then image rendering and/or textual production using source materials beyond fair dealing/fair use discretion could lead to litigation.
For example, Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content. (See: https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit) Sui generis and copyright database laws protect the intellectual property of subscription protected and open source database providers.
These protections remain active and fully applicable in the UK. Copyright database and sui generis database laws are summarised by the government at https://www.gov.uk/guidance/sui-generis-database-rights#sui-generis-database-rights-in-the-uk
In addition, contract law, where users have to agree terms and conditions, provides further protection.
Robotic systems of human-style production, like animals, do not have legal personality in law. The legal responsibility and liability in litigation, as well as the actus reus and mens rea in criminal law, lie with the individuals, government bodies or private corporate bodies who or which use the AI for communication and publication.
Consequently, where AI generates inaccuracy, libel, contempt of court, breach of privacy, and breach of professional ethics codes, the legal trail in prosecution and litigation will be to the person or persons who deployed and used it.
There could be an interesting test case in the courts where an individual created a publication and an outside, intervening form of AI changed the content without permission, creating a risk of media law infraction. Such circumstances would need evidential forensic analysis and explanation. The issues of consent, permission and human agency in the operation and impact of AI programmes/systems would have to be legally explored.
Test cases are emerging. Allegations are being pursued that ChatGPT is generating defamatory content online and fabricating articles that were never written. A libel action is being prepared in Australia. Arab News and Reuters reported: ‘Australian mayor readies world’s first defamation lawsuit over ChatGPT content.’ See: https://www.arabnews.com/node/2281861/media Chris Moran has written for the Guardian that: ‘ChatGPT is making up fake Guardian articles. Here’s how we’re responding.’ See: https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article. At the same time the Mail Online has been reporting ‘ChatGPT falsely accuses a law professor of a sex attack against students during a trip to Alaska that never happened – in shocking case that raises concerns about AI defaming people.’ See: https://www.dailymail.co.uk/sciencetech/article-11948855/ChatGPT-falsely-accuses-law-professor-SEX-ATTACK-against-students.html
One of the gravest injustices caused by over-reliance on AI in recent times is undoubtedly the Post Office ‘Horizon’ computer scandal. Between 2000 and 2014, the Post Office prosecuted 736 sub-postmasters and sub-postmistresses – an average of one a week – based on information from a recently installed computer system called Horizon.
A court would later rule that the system contained ‘bugs, errors and defects’, and that there was a ‘material risk’ that shortfalls in branch accounts were caused by the system. The consequences were appalling and tragic. Hundreds of people had been wrongly prosecuted and convicted of fraud and theft offences they had not committed; the shortfalls were the fault of the Horizon software itself.
The use of digital algorithms that apply a template decision-making process to circumstances far more complex than the task they were designed for can result in discrimination and injustice. Pierluigi Bizzini explains such a case, which ‘blew up Italy’s school system.’ The algorithm was supposed to save time by automatically allocating teachers on short-term contracts to schools. Failures in the code and in the design severely disrupted teachers’ lives. See: https://algorithmwatch.org/en/algorithm-school-system-italy/
Mainstream news publishers have frequently complained that Google and other search engines and digital platforms can disadvantage their online presence and referral rates. The underlying issue is that digital information technology and artificial intelligence are constructed from code that is quite different from the written text appearing before you on screen.
By way of example, if you are using a MacBook, control-click (or right-click) on a web page in your browser and choose ‘Inspect’: the hidden HTML code behind the page becomes visible. Without computer code education and training, how are you to know what it means and how it works, along with all the icons and layouts used?
It is the hidden language behind or ‘inside the screen’ as it were. Consequently, the very concept of ‘media literacy’ has changed into something which is much more complicated and technical in the 21st century. The maxim ‘knowledge is power’ has an enhanced significance in this context.
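The point about hidden code can be made concrete. The following sketch (not drawn from the book; the sample HTML is invented for illustration) uses Python’s standard-library `html.parser` to surface the kinds of metadata discussed above – meta tags, image file names and alternative text – that a reader never sees in the rendered page:

```python
# A minimal sketch showing how metadata that never appears on screen --
# meta tags, image file names, alternative text -- sits inside a page's
# HTML and can be read programmatically. The HTML below is invented.
from html.parser import HTMLParser

SAMPLE_HTML = """
<html><head>
  <meta name="keywords" content="libel, privacy, social media">
</head><body>
  <img src="court-sketch-2023.jpg" alt="Defendant leaving the High Court">
  <p>Visible story text.</p>
</body></html>
"""

class HiddenDataParser(HTMLParser):
    """Collects attributes a reader never sees in the rendered page."""
    def __init__(self):
        super().__init__()
        self.hidden = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "content" in attrs:
            self.hidden.append(("meta keywords", attrs["content"]))
        if tag == "img":
            self.hidden.append(("image file name", attrs.get("src", "")))
            self.hidden.append(("alternative text", attrs.get("alt", "")))

parser = HiddenDataParser()
parser.feed(SAMPLE_HTML)
for label, value in parser.hidden:
    print(f"{label}: {value}")
```

Tags, file names and alternative text of this kind can carry defamatory or privacy-infringing meaning even though they are invisible to the ordinary reader, which is why de-linking and tag removal feature in right-to-be-forgotten remedies.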
Should AI generate accurate and reliable journalistic communications which do not breach primary and secondary media law, whether that AI authorship should be made transparent to the audience is a matter of ethics now being debated.
The issue of plagiarism in all forms of education is certainly being debated and investigated. Should universities and schools abandon digital online assessment and submission and revert wholly to the old-fashioned method of unseen invigilated examination restricted to pen and paper and quarantined from all mechanical/machine aids? Universities use artificial intelligence to check for plagiarism. An example is the Turnitin programme.
Plagiarism in journalistic professional work is already an issue for commissioning editors and their receipt and payment for freelance work. Can they be sure features and articles they are buying are original and authentic to the contributors? Does it actually matter if AI such as ChatGPT is a significant part of, or most of the source of the work?
The quality of voice-generation programmes is such that a single journalist working in a radio station could use one to provide an alternative reporter’s voice for a report of a court case in a news bulletin. But how should the report be cued, and what would be the impact on listeners’ trust if it was not possible to tell the difference between an AI report and a report voiced by an actual radio journalist? More ethical issues arise if high-quality digital production techniques are deployed to pretend that a report was originated on location.
AI image-enhancement software can transform an image into something which is not seen by the naked eye. This is certainly true of images shown on online news sites and television news of the Aurora Borealis or Northern Lights. A news story might say why go to Reykjavik when you can see it in Ramsbottom or Biggleswade? But isn’t there an obligation to explain that the photograph has been filtered and enhanced?
The potential for AI digital photography to create and mask analogue authenticity was recently tested by the German artist and photographer Boris Eldagsen, who admitted that his prize-winning entry to the Sony World Photography Awards was AI-generated. He refused the prestigious award after admitting to being a “cheeky monkey” in order to provoke debate. See: https://www.theguardian.com/technology/2023/apr/17/photographer-admits-prize-winning-image-was-ai-generated and ‘‘AI isn’t a threat’ – Boris Eldagsen, whose fake photo duped the Sony judges, hits back’ at: https://www.theguardian.com/artanddesign/2023/apr/18/ai-threat-boris-eldagsen-fake-photo-duped-sony-judges-hits-back
We are now very much aware of social media programmes which can improve physical appearances, remove wrinkles and make people look younger. The notorious online deception practice of catfishing is often perpetrated through voice distortion and transformation and visual masking. The immoral and unlawful utilisation of such digital intelligence devices raises questions about integrity, trust, honesty and notions of originality and authenticity.
The use of such technology does engage criminal and civil legal liability and while the prospect of such developments may cause apprehension about the extent to which honest people could be tricked and deceived, digital technology does leave a fingerprint evidentially and should in theory make investigation, prosecution and conviction easier.
General ChatGPT systems reliant on global data mining cannot be trusted to overcome and apply the varying IP laws across national legal jurisdictional boundaries where copyright duration varies widely, and information can be legally private and covered by reporting restrictions in the UK and European countries but not so in the USA. However, AI can be specifically tailored in journalistic production by the selection of specific sources and pre-moderation and checking for accuracy.
And, of course, there is nothing to prevent the development of an AI Comparative Media Law Bot that substantially replaces the function and employment of media lawyers.
Angus McBride, News UK’s General Counsel, has argued in The Times ‘News conjured by rogue algorithms must be avoided.’ This is because the AI chatbot depends on scraping content from online sources to generate and inform its intelligence.
Mr McBride thinks the new regulator, the Digital Markets Unit, should urgently grapple with the issue of AI-written news. (See: https://www.thetimes.co.uk/article/news-conjured-by-rogue-algorithms-must-be-avoided-hp7fhlptn)
Donna Lu’s informative article in the Guardian sets out ‘Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can – and can’t – do.’ See: https://www.theguardian.com/technology/2023/apr/01/misinformation-mistakes-and-the-pope-in-a-puffer-what-rapidly-evolving-ai-can-and-cant-do
In August 2023 the BBC news and current affairs programme Panorama devoted an edition to the issue of AI and its impact on human society. See: ‘Beyond Human: Artificial Intelligence and Us.’
“Machines are getting smarter. Much smarter. Now they are becoming so powerful, they could pose an existential threat to the human race. That’s the warning from some of the greatest minds behind the development of artificial intelligence.
For Panorama, reporter Lara Lewington speaks to some of the so-called ‘godfathers’ of AI about their hopes and fears, and she meets researchers developing technology that allows computers to read our emotions and even our minds.” See: https://www.bbc.co.uk/iplayer/episode/m001ph7q/panorama-beyond-human-artificial-intelligence-and-us
The UK government published a white paper, ‘AI regulation: a pro-innovation approach’, on 29th March 2023: ‘This white paper details our plans for implementing a pro-innovation approach to AI regulation. We’re seeking views through a supporting consultation.’
The Information Commissioner’s Office provided the following response on 11th April 2023. See: https://ico.org.uk/media/about-the-ico/consultation-responses/4024792/ico-response-ai-white-paper-20230304.pdf
London School of Economics: ‘The JournalismAI Report – New powers, new responsibilities. A global survey of journalism and artificial intelligence.’ See: https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/
Heidi.news: ‘La rédaction de Heidi.news prend position sur l’usage des intelligences artificielles’ (‘The Heidi.news editorial team takes a position on the use of artificial intelligence’). See: https://www.heidi.news/cyber/la-redaction-de-heidi-news-prend-position-sur-l-usage-des-intelligences-artificielles
Villamedia: ‘Persbureau ANP stelt “leidraad met vangrails” op voor inzet van kunstmatige intelligentie’ (‘Press agency ANP draws up “guidelines with guardrails” for the use of artificial intelligence’). See: https://www.villamedia.nl/artikel/persbureau-anp-stelt-leidraad-met-vangrails-op-voor-inzet-van-kunstmatige-intelligentie
DPA: ‘Offen, verantwortungsvoll und transparent – Die Guidelines der dpa für Künstliche Intelligenz’ (‘Open, responsible and transparent – the dpa’s guidelines for artificial intelligence’). See: https://innovation.dpa.com/2023/04/03/kuenstliche-intelligenz-fuenf-guidelines-der-dpa/
The Google translate artificial intelligence facility online has been used to translate five recommended DPA guidelines into English:
‘1. The dpa uses AI for various purposes and is open to the increased use of AI. AI will help to do our work better and faster – always in the interest of our customers and our products.
2. The dpa only uses AI under human supervision. The final decision about the use of AI-based products is made by a human. We respect human autonomy and the primacy of human choices.
3. dpa only uses legitimate AI that complies with applicable law and statutory provisions and that meets our ethical principles, such as human autonomy, fairness and democratic values.
4. dpa uses AI that is technically robust and secure to minimize the risk of errors and misuse. Where content is generated exclusively by AI, we make this transparent and explainable. A person is always responsible for all content generated with AI.
5. The dpa encourages all employees to be open and curious about the possibilities of AI, to test tools and to make suggestions for use in our workflows. Transparency, openness and documentation are crucial.’
Prior to the free availability of such AI from Google, this kind of translation work would have been commissioned from a professional linguist, most likely educated to degree and postgraduate level in German and modern languages. The technology here clearly replaces human agency, reduces employment opportunities and effectively standardises a method and style of interpretative translation.
Bayerischer Rundfunk: ‘Ethics of Artificial Intelligence’- See: https://www.br.de/extra/ai-automation-lab-english/ai-ethics100.html
Here are some recent articles exploring and debating Artificial Intelligence media law and ethics issues:
Guardian and AFP report: ‘AI generated news presenter debuts in Kuwait media. Kuwait News introduced Fedha, promising that it could read online news in the future.’ See: https://www.theguardian.com/world/2023/apr/11/ai-generated-news-presenter-debuts-in-kuwait-media
Guardian reports: “‘I didn’t give permission’: Do AI’s backers care about data law breaches? Regulators around world are cracking down on content being hoovered up by ChatGPT, Stable Diffusion and others.” See: https://www.theguardian.com/technology/2023/apr/10/i-didnt-give-permission-do-ais-backers-care-about-data-law-breaches
Guardian reports: “‘I’m terrified’: what does AI Tom Brady mean for the future of media? The hosts of the Dudesy podcast were shocked when their robot companion created an hour-long standup special.” See: https://www.theguardian.com/technology/2023/apr/10/tom-brady-standup-ai-dudesy
New York Times reports (behind paywall) ‘Can We No Longer Believe Anything We See? Human eyes — and even technology — often struggle to identify images created by artificial intelligence.’ See: https://www.nytimes.com/2023/04/08/business/media/ai-generated-images.html
Marco Giannangeli writes for Sunday Express: ‘The terrifyingly real risk of AI with China now leading the robot march.’ See: https://www.express.co.uk/news/science/1756313/artificial-intelligence-china-threat-robot-data
Press Gazette: ‘Journalists: ChatGPT is coming for your jobs (but not in the way you might think)’ 9th March 2023. See: https://pressgazette.co.uk/media_law/journalists-chatgpt-jobs-ai-copyright/
Press Gazette- ‘ChatGPT, AI and journalism: Legal and ethical pitfalls’ 2nd March 2023, See: https://pressgazette.co.uk/comment-analysis/ai-journalism-legal-ethical-considerations/
The London School of Economics hosts JournalismAI. See: https://www.lse.ac.uk/media-and-communications/polis/JournalismAI
‘JournalismAI is a global initiative that empowers news organisations to use artificial intelligence responsibly. We support innovation and capacity-building in news organisations to make the potential of AI more accessible and to counter inequalities in the global news media around AI. JournalismAI is a project of Polis – the LSE’s journalism think-tank – and is supported by the Google News Initiative.’
Reuters Institute, Oxford University. ‘UK media coverage of artificial intelligence dominated by industry, and industry sources.’ See: https://reutersinstitute.politics.ox.ac.uk/news/uk-media-coverage-artificial-intelligence-dominated-industry-and-industry-sources & https://reutersinstitute.politics.ox.ac.uk/our-research/industry-led-debate-how-uk-media-cover-artificial-intelligence & https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2018-12/Brennen_UK_Media_Coverage_of_AI_FINAL.pdf
For a perspective on what Artificial Intelligence represents in the long-term, Bill Gates believes there is significance in appreciating that ‘The Age of AI has begun.’ He argues: ‘Artificial intelligence is as revolutionary as mobile phones and the Internet.’ And he concludes: ‘Finally, we should keep in mind that we’re only at the beginning of what AI can accomplish. Whatever limitations it has today will be gone before we know it.’ See: https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
John Naughton writes for Observer: ‘You wait ages for an AI chatbot to come along, then a whole bunch turn up. Why?’ See: https://www.theguardian.com/commentisfree/2023/mar/25/you-wait-ages-for-an-ai-chatbot-to-come-along-then-a-whole-bunch-turn-up-chatgpt
Guardian: “Elon Musk joins call for pause in creation of giant AI ‘digital minds.’ More than 1,000 artificial intelligence experts urge delay until world can be confident ‘effects will be positive and risks manageable.’” See: https://www.theguardian.com/technology/2023/mar/29/elon-musk-joins-call-for-pause-in-creation-of-giant-ai-digital-minds
The Future of Life Institute open letter, ‘Pause Giant AI Experiments’: ‘We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.’ See: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Alex Hern writes for Guardian: ‘My week with ChatGPT: can it make me a healthier, happier, more productive person?’ See: https://www.theguardian.com/technology/2023/apr/06/my-week-with-chatgpt-can-it-make-me-a-healthier-happier-more-productive-person
Press Gazette reports: ‘The ethics of using generative AI to create journalism: What we know so far. The use of generative AI tools can impact trust, accuracy, accountability and bias in newsrooms.’ See: https://pressgazette.co.uk/publishers/digital-journalism/ai-news-journalism-ethics/
BBC News 2nd May 2023 “AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google.” See: https://www.bbc.co.uk/news/world-us-canada-65452940
BBC News 16th March 2023 ‘AI: “How ‘freaked out’ should we be?” See: https://www.bbc.co.uk/news/world-us-canada-64967627
BBC Online feature By Richard Gray 19th November 2018. ‘The A-Z of how artificial intelligence is changing the world.’ See: https://www.bbc.com/future/article/20181115-a-guide-to-how-artificial-intelligence-is-changing-the-world
The Information Commissioner fined the social media platform TikTok £12.7 million on 4th April 2023 for misusing children’s data.
The ICO estimated that more than one million UK children under 13 were on TikTok in 2020, contrary to its terms of service.
Personal data belonging to children under 13 had been used without parental consent, and TikTok “did not do enough” to check who was using its platform or to take sufficient action to remove the underage children who were. See: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
The initial fine had been set at £27 million. See: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/09/ico-could-impose-multi-million-pound-fine-on-tiktok-for-failing-to-protect-children-s-privacy/ Taking into consideration the representations from TikTok, the regulator decided not to pursue the provisional finding related to the unlawful use of special category data. Special category data includes: ethnic and racial origin, political opinions, religious beliefs, sexual orientation, trade union membership, and genetic, biometric or health data.
UK data protection law says that organisations that use personal data when offering information society services to children under 13 must have consent from their parents or carers. Companies who breach the UK GDPR and/or the Data Protection Act can be fined up to £17.5 million or 4% of the company’s annual global turnover, whichever is higher.
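The fine ceiling described above can be expressed as a one-line calculation. The sketch below uses purely illustrative turnover figures; only the ‘£17.5 million or 4% of annual global turnover, whichever is higher’ rule comes from the text:

```python
# Illustration of the UK GDPR maximum fine rule stated above:
# the greater of £17.5 million or 4% of annual global turnover.
# Turnover figures below are hypothetical examples, not real companies.
def maximum_fine(annual_global_turnover: float) -> float:
    """Return the statutory maximum fine in pounds."""
    return float(max(17_500_000, 0.04 * annual_global_turnover))

# A smaller company (£100m turnover): the fixed £17.5m ceiling applies,
# because 4% of turnover is only £4m.
print(maximum_fine(100_000_000))    # 17500000.0

# A large platform (£2bn turnover): 4% = £80m exceeds £17.5m.
print(maximum_fine(2_000_000_000))  # 80000000.0
```

The ‘whichever is higher’ structure means the percentage limb only bites for companies whose global turnover exceeds £437.5 million, below which the fixed sum is the operative cap.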
Information Commissioner, John Edwards said: ‘I’ve been clear that our work to better protect children online involves working with organisations but will also involve enforcement action where necessary. In addition to this, we are currently looking into how over 50 different online services are conforming with the Children’s code and have six ongoing investigations looking into companies providing digital services who haven’t, in our initial view, taken their responsibilities around child safety seriously enough.’ See ICO’s Children’s code at: https://ico.org.uk/childrenscode
This is just one of several setbacks in state regulation of TikTok across the global sphere. See Arab News ‘TikTok hit with UK fine, Australia government ban’ at: https://www.arabnews.com/node/2281316/media Italy’s competition watchdog had opened an investigation into TikTok for failing to enforce its own rules on removing “dangerous content” related to suicide and self-harm.
Australia joined a list of Western nations banning the Chinese-owned app from government devices. The United States has been urging TikTok to split from its Chinese parent company, Bytedance. See the Guardian’s analysis by Kevin Rawlinson: “How TikTok’s algorithm ‘exploits the vulnerability’ of children.” See: https://www.theguardian.com/technology/2023/apr/04/how-tiktoks-algorithm-exploits-the-vulnerability-of-children
ICO fines TikTok £12.7 million for misusing children’s data
https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
ICO could impose multi-million pound fine on TikTok for failing to protect children’s privacy
https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/09/ico-could-impose-multi-million-pound-fine-on-tiktok-for-failing-to-protect-children-s-privacy/
ICO Children’s Code for Digital online services
https://ico.org.uk/for-organisations/childrens-code-hub/
Arab News: ‘TikTok hit with UK fine, Australia government ban’
https://www.arabnews.com/node/2281316/media
Guardian’s analysis by Kevin Rawlinson: “How TikTok’s algorithm ‘exploits the vulnerability’ of children.”
https://www.theguardian.com/technology/2023/apr/04/how-tiktoks-algorithm-exploits-the-vulnerability-of-children
Libel ruling in a social media case where the judge found for a defendant who had alleged sexual assault in blog and Facebook postings.
King’s Bench Division. William Hay v Nina Cresswell. Ruling by Mrs Justice Heather Williams 26th April 2023 in favour of the defendant.
Hay v Cresswell [2023] EWHC 882 (KB) (26 April 2023) See: https://www.bailii.org/cgi-bin/format.cgi?doc=/ew/cases/EWHC/KB/2023/882.html
The judge set out a summary of the case in paragraphs 1 to 9:
‘1.Mr Hay brings a claim for libel against Ms Cresswell in relation to her June and July 2020 publication of allegations that he had sexually assaulted her on the night of 27 – 28 May 2010 after the two had met in a nightclub in Sunderland.
2.Whilst there is some dispute about the precise meaning of the defendant’s publications, it is accepted that she alleged a violent sexual assault on the part of the claimant and that these words bore a defamatory meaning. It is also admitted that the claimant sustained serious harm. However, Ms Cresswell relies upon defences of truth and/or that the publications were on a matter of public interest. To a more limited extent she also relies upon a defence of qualified privilege.
3.The claimant is a tattoo artist. He says that the publications caused him great embarrassment, distress and damage to his reputation. He seeks general damages and also injunctive relief. A claim for aggravated damages is not pursued and no claim is made for financial loss.
4.The publications that form part of the claim are as follows:
i) On 4 June 2020 the defendant published a blog on the telegra.ph website (“the telegra.ph publication”);
ii) On 29 June 2020 the defendant contacted the claimant’s girlfriend and business partner, Emma Sweeney, by way of a Facebook message, attaching the telegra.ph publication (“the FB message publication”);
iii) On 3 July 2020 the defendant emailed Ms Sweeney (“the email publication”)
iv) On 22 July 2020 the defendant published two posts on Facebook (“the FB posts publications”);
v) On 22 July 2020 the defendant published a post on Instagram and shared the post to an Instagram story (“the Instagram publications”).
5.The Amended Particulars of Claim also relied upon the defendant’s Twitter post of 22 July 2020. However, this post did not name the claimant and the pleading did not rely upon extraneous material from which it was said that the claimant would have been identified as its subject. During the course of the trial, Mr Coulter indicated that he did not pursue the claim in relation to this post.
6.The defendant says that her primary intention in publishing these materials was to alert women who could otherwise become victims of sexual assault at the hands of the claimant, in particular in the context of his work as a tattooist. In summary, she says that in May 2010, when she was a 20 year old student, she met the claimant in ‘Passion’ nightclub, via a mutual friend, Richard Beston, and that he seriously sexually assaulted her as he was walking her home.
7.Further or alternatively, the defendant relies upon the defence in section 4 of the Defamation Act 2013 (“the 2013 Act”) that the publications complained of were or formed part of statements on a matter of public interest and she reasonably believed that publishing them was in the public interest, given that she reasonably believed the claimant had assaulted her and given the prevalence of sexual abuse within the tattoo industry and the need to protect women from this.
8.In relation to the FB message publication and the email publication only, the defendant also avers that the publications are protected by qualified privilege as she had a duty to communicate these matters to Ms Sweeney as the claimant’s employer or business partner, and Ms Sweeney had a duty to receive them, given the claimant would routinely come into intimate contact with unaccompanied female clients in the course of his tattooing work.
9.The claimant does not admit that the defendant was sexually assaulted on her way home from the nightclub and he maintains that if such an assault occurred, he was not the perpetrator and the defendant’s allegation in this regard is a deliberate fabrication on her part. Accordingly, he says that her truth defence and her public interest defence must fail. Furthermore, that the publications to Ms Sweeney were not on occasions of qualified privilege and, in any event the defence fails as the defendant acted maliciously in publishing knowingly false allegations.’
The judge concluded at paragraphs 215 to 217:
‘Overall conclusion and outcome
For the reasons that I have identified above I conclude that:
215) The natural and ordinary meaning of the defendant’s publications in relation to the claimant is that the claimant had violently sexually assaulted her;
216) This imputation was substantially true; the defendant has proved that the claimant sexually assaulted her in that manner in the early hours of 28 May 2010 in the circumstances that I have described. Accordingly, the statutory defence of truth provided for by section 2(1) of the Defamation Act 2013 is established;
217) Additionally, the defendant has established the defence in section 4 of the 2013 Act as she has shown that: the statements complained of were on a matter of public interest; that she believed this to be the case at the time of publishing them; and that her belief was reasonable in all the circumstances that I have discussed.
In light of these conclusions the claim fails and it is unnecessary for me to determine the further defence of qualified privilege. The question of remedy does not arise.
The parties will have the opportunity to address consequential matters by way of written submissions.’
Much attention has been given to the successful defence provided by Nina Cresswell under the public interest provision of Section 4.
The case has been reported journalistically by the Times (behind a paywall) ‘MeToo accuser wins landmark libel ruling after sex assault claim’ at: https://www.thetimes.co.uk/article/metoo-accuser-wins-landmark-libel-ruling-after-sex-assault-claim-pr06kc27d and the Guardian ‘Sexual assault victim who named her attacker in blog defeats his libel action’ at: https://www.theguardian.com/law/2023/apr/26/nina-cresswell-named-sexual-assault-attacker-blog-defeats-billy-hay-libel-action and ‘My healing can start, says sexual assault victim after libel win’ at: https://www.theguardian.com/society/2023/apr/29/nina-cresswell-sexual-assault-libel-win
The Guardian reported her solicitor Tamsin Allen: ‘Allen said it was the first case of an abuser suing their victim for libel in which a public interest defence under the Defamation Act had succeeded. This applies when the statement complained of was on a matter of public interest and the defendant reasonably believed that publishing it was in the public interest, even if it turns out to be untrue.’
Legal analysis has been provided by The Good Law Project which supported Nina Cresswell. See: ‘WIN: Nina Cresswell wins libel case against William Hay who sexually assaulted her and then tried to silence her in court’ at: https://goodlawproject.org/update/win-nina-cresswell/, her counsel Jonathan Price at Doughty Street Chambers ‘“Tattoo MeToo” libel claim won by sexual assault survivor’ at: https://www.doughtystreet.co.uk/news/tattoo-metoo-libel-claim-won-sexual-assault-survivor and her solicitors Bindmans ‘Bindmans client and sexual assault survivor wins libel action’ at: https://www.bindmans.com/knowledge-hub/news/bindmans-client-wins-libel-action/
To what extent does this case provide a useful precedent for journalists and news publishers wishing to engage the public interest defence in terms of ‘reasonable belief’ on the part of the defendant?
The issue has been analysed in detail by Hold The Front Page media lawyer Sam Brookman. See: ‘Law Column: An interesting victory for truth and public interest defences’ at https://www.holdthefrontpage.co.uk/2023/news/law-column-an-interesting-victory-for-truth-and-public-interest-defences/
Sam Brookman concludes with a note of caution:
“In terms of the public interest defence, whilst Mrs Justice Williams makes some assertions which might be helpful to publishers, it must be remembered that it is the publisher’s reasonable belief that publication is in the public interest which is key.
In this case Ms Cresswell was the publisher, but if the publisher is a media organisation it is the Editor’s and journalist’s belief which will be key, and in those cases there will likely still be the expectation that the subject of the accusation is approached for comment, or that any relevant balance is added to an article.”