Live tracker of major mishaps
AI is being widely used in journalism and can lead to reputation-killing scandals and mistakes if not monitored closely. Here Press Gazette rounds up some of the main examples of where AI has gone wrong.

The Mississippi Free Press has become the latest news outlet to admit publishing an AI-written column under a fake byline.

The non-profit outlet said the journalist did not seem suspicious until they submitted an invoice that did not match their name.

This was the same way that purported freelance journalist Margaux Blanchard was caught out by Wired last year.

There have been numerous similar cases of AI work mistakenly published by major news outlets in the past year as the technology grows more sophisticated. Sometimes the ‘journalists’ are caught out pre-publication.

There have also been examples of real writers getting caught out using AI in unsanctioned ways. Australian news website Crikey took down a series of articles because a writer had used ChatGPT in the editing process against its strict AI policy.

On this page, Press Gazette keeps track of such incidents to help publishers learn from these mistakes. Last updated: April 2026.

If you spot anything that we have missed, please email charlotte.tobitt@pressgazette.co.uk.


Mississippi Free Press AI fake author (April 2026)

The Mississippi Free Press has announced it discovered an opinion column published on 7 April was written using AI and “the ‘author’ was not who they claimed to be”.

The column was headlined: “The gig economy is affecting our communities.”

The page remains live but the text of the article has been removed and replaced with an editor’s note stating: “This column did not meet MFP’s standards and has been removed.”

Kevin Edwards, editor of the Voices opinion section, told readers the author has been “purged from our system” and admitted “the mistake was mine”.

He explained: “The AI column submitted by this author didn’t seem out of the ordinary. In fact, it wasn’t until they submitted an invoice that didn’t match their name that I grew suspicious. Not of the column itself, but of the author. Was this person who they claimed to be?

“It turns out that, no, they weren’t. I looked back at our email correspondence and checked out the various social media links they had provided in their email signature. All were dead or nonexistent.

“I searched their name with a company listed on their résumé and found an editor who had already gone through the same song-and-dance with the writer, though he figured out the ruse before he published a fake article. On closer inspection, it turns out that the headshot the writer sent us for his bio picture was also generated with AI.”

Edwards added that other columns he had recently been sent from new authors also came “from fake authors with other names that all appeared to come from outside the country. Thankfully, we didn’t publish any of those.”

He said he has pulled three columns planned for future publication “because I noticed similar signs”.

Edwards continued: “It’s unfortunate that I have to treat new writers with this level of suspicion, but that is the world we live in and the adjustment I will make. It’s easy to suggest just throwing a column in an AI detector, but AI detectors aren’t very reliable.”

The MFP, he said, is now working on a formal AI policy that will be made public and is organising AI training for staff so they can better spot it when it has been used.

New York Times AI book review plagiarism (March 2026)

The New York Times has ended its relationship with a freelance journalist who admitted to using AI to help write a book review.

A reader got in touch with the NYT to suggest a January review of “Watching Over Her” by Jean-Baptiste Andrea included “language and details similar” to an earlier review of the same book in The Guardian.

An editor’s note added to the top of the NYT review now states: “We spoke to the author of this piece, a freelancer reviewer, who told us he used an AI tool that incorporated material from the Guardian review into his draft, which he failed to identify and remove.

“His reliance on AI and his use of unattributed work by another writer are a clear violation of The Times’s standards. The reviewer said he had not used AI in his previous reviews for The Times, and we have found no issues in those pieces.”

Alex Preston, the journalist involved, told The Guardian: “I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in.

“I am hugely embarrassed by what happened and truly sorry. I took responsibility immediately and apologised to the New York Times, and I also want to apologise to [Guardian review writer] Christobel Kent and to the Guardian.”

Crikey AI policy breach (March 2026)

Australian news website Crikey has taken down an article after discovering AI had been used against its editorial guidelines.

The article about “using ethical influence to create change” was bylined to Jo Tarnawsky, formerly chief of staff to Australia’s deputy prime minister.

Three earlier articles in the same series, all published in February and March by the same author, were also removed.

Concerns were first raised by AI professor Toby Walsh, who said on LinkedIn: “It would have been ethical to have declared that AI wrote much of this article. Shame on you, @crikey_news.”

Crikey editor-in-chief Sophie Black said the next day: “Yesterday, we published an article by a contributor who later confirmed they used AI in some aspects of its production.

“This goes against our editorial policies.”

Black reported that Tarnawsky said she had used ChatGPT in the production of the article, not to write it but to “sense check… proofread it, spell check, ask for alternative subheadings and in some cases, ask for better phrasings”.

Black added: “Our editorial guidelines prohibit the use of AI, which is why we’ve taken the stories down.”

Black said Crikey did not send Tarnawsky a link to its editorial guidelines before the article was submitted, which it should have done.

“We need to be clearer with new contributors about these expectations, including where AI is used in a limited capacity. We need to have better fallback measures so that we can catch issues like this before the story is published.”

An article by Tarnawsky about the security partnership between the UK, US and Australia for The Saturday Paper published in February now has a note at the bottom that states: “The author of this article made limited use of ChatGPT for research and as a thesaurus. Schwartz Media does not allow usage of AI to produce its journalism.”

Gaming websites replace human staff with AI writers (February 2026)

A network of prominent gaming sites has fired multiple human staff in recent days and misleadingly replaced them with AI writers, complete with fake pics and biogs.

UK-based The Escapist, Videogamer and Esports Insider were taken over by SEO agency Clickout Media in recent months, with up to 20 staff believed to have been fired.

The sites then began to be loaded with AI-written stories about casinos. Read the full Press Gazette story here.

The Guardian and others remove suspicious ‘AI’ articles (November 2025)

Articles by a freelance journalist called Victoria Goldiee were taken down by four publications after an investigation by The Local Toronto.

The Local’s executive editor Nicholas Hune-Brown looked into Goldiee after he commissioned a pitch from her about “membership medicine” and then became suspicious about whether she was actually in Toronto, due to the geographical spread of publications she said she had written for, and the fact she claimed to have done several interviews for the piece already.

He could not find any trace of Goldiee among the Canadian publications she claimed to have written for and a doctor she claimed to have interviewed said they had not spoken.

Hune-Brown then realised Goldiee’s email response to his questions and her original pitch had “rote phrasing… all the hallmarks of an AI-generated piece of writing”. He contacted people quoted in previous pieces she had published who denied having spoken to Goldiee. He later spoke to her on the phone and “suspected [she] was lying to me with each and every response”.

Hune-Brown said he believes the author is from, or still lives in, Nigeria, suggesting a possible economic motive for the deception.

An article with Goldiee’s byline on The Guardian published a month before The Local’s investigation has been “removed for editorial standards reasons”. The piece was a first-person essay about how music is shared around the UK, which said: “The future of our music is not written by algorithm.”

Hune-Brown said: “It was a good ChatGPT piece. It was impressive, and I could see why anyone would be fooled by it. I could see why they would enjoy it. But it has no value to me if it’s not created by a person.”

A 2024 article about climate change memes by Goldiee was also taken down from non-profit outlet Outrider, which stated: “Upon review, this article did not meet Outrider’s editorial standards and it has been removed. We regret the error.” A professor quoted in the story told Hune-Brown she had “not spoken with any reporter about that piece of research”.

An article was also removed by architecture title Dwell, which said: “An investigation concluded that the article, ‘How to Turn Your Home’s Neglected Corners Into Design Gold,’ did not meet Dwell’s editorial standards, and as such, we’ve retracted it. Our apologies to our readers and the sources previously cited within.”

The Journal of the Law Society of Scotland had published an article by Goldiee as part of a series about the future of law on Scotland’s high streets in September.

The article was removed on 31 October and editor Joshua King said the article contained quotes that were “disputed and otherwise problematic”.

“On the balance of the evidence available, it is now my belief these quotations were falsely attributed to the interviewees and are likely to be fabricated. This is in breach of our editorial guidelines and the author’s contractual obligations.

“As editor and on behalf of the Journal, I wholeheartedly apologise for what has happened. I hold myself, the Journal and all our contributors to the highest editorial standards and on this occasion we have fallen well below those standards.

“This is professionally embarrassing and this apology is an article I am disappointed I have to publish when we should be discussing and celebrating all that is happening in this great profession.”

King said he had also contacted those who were quoted in the piece directly to “offer my sincere apologies and to confirm that we are urgently reviewing our editorial processes to ensure this does not happen again”.

Business Insider removes 38 essays over fake author concerns (September 2025)

Business Insider removed 38 essays in total after Press Gazette reporting on Margaux Blanchard prompted a wider investigation of its output (see below).

Editor-in-chief Jamie Heller told staff in a memo: “We recently learned that a freelance contributor misrepresented their identity in two first-person essays written for Business Insider. As soon as this came to light, we took down the essays and began an investigation.

“As part of this process, we’ve removed additional first-person essays from the site due to concerns about the authors’ identity or veracity. No news articles or videos were found to have this issue.

“We’ve bolstered our verification protocols to help prevent anything like this from happening again. We care deeply about the integrity of our work, and we will always do what it takes to make things right.”

The Washington Post reported that the author pages of 19 individuals had been removed and replaced with editor’s notes.

Wired, Business Insider and others pull articles by author ‘Margaux Blanchard’ (August 2025)

Wired and Business Insider were among several UK and US online publications that removed articles written by freelance journalist ‘Margaux Blanchard’ after concerns they were likely AI-generated works of fiction.

Press Gazette revealed Blanchard’s pattern of behaviour after being alerted to the author by Dispatch editor Jacob Furedi who had received a suspicious freelance pitch.

Most of the published stories bylined to Blanchard contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.

Wired took down its story soon after publication in May 2025 after receiving an unusual request for payment from Blanchard.

It later said that a “closer look at the details of the story… made it clear to us that the story had been an AI fabrication.”

It added that the story “did not go through a proper fact-check process or get a top edit from a more senior editor. First-time contributors to Wired should generally get both, and editors should always have full confidence that writers are who they say they are.”

After Press Gazette began looking at Blanchard’s published articles and spotted elements that did not appear to exist, articles by Blanchard were removed by titles including Business Insider, SFGate and art and culture title Cone Magazine.

The Grind delays issue due to AI ‘freelance journalist’ scammers (June 2025)

Toronto politics and culture print magazine The Grind delayed its food-themed issue in June after greenlighting several article pitches they later realised were “AI slop”.

Editors Fernando Arce and Saima Desai first became suspicious after realising that one writer, who had written about two immigrant-run Toronto restaurants with direct quotes and descriptions of the interiors, was based outside Canada.

When challenged, the writer admitted that the “characters and places in my article are fictional composites… based on real themes” and the article was scrapped.

The editors checked the other commissioned articles and identified seven they said they “strongly suspect were written by AI.

“They all had a similar feel: too neat, too vague. We learned to read the signs: U.S. instead of Canadian spelling, double-barreled article headlines that didn’t quite match the drafts, the same author writing an eloquent pitch and then awkward follow-up emails, and drafts riddled with em-dashes.”

They challenged the writers on their sources and some provided fake phone numbers and addresses for people they had supposedly interviewed, as well as broken website links.

The editors said they have strengthened their processes for catching AI “garbage” including for vetting new writers, fact-checking drafts early on and using AI detection software.

Chicago Sun-Times and Philadelphia Inquirer publish summer reading list with books that don’t exist (May 2025)

A summer reading list published in both the Chicago Sun-Times and Philadelphia Inquirer contained books that do not exist.

The list was produced by syndicated content partner King Features, owned by Hearst, and used by a “handful” of US titles. A freelance journalist used an AI agent to create it.

The Chicago Sun-Times said: “It was inserted into our paper without review from our editorial team, and we presented the section without any acknowledgement that it was from a third-party organisation.”

The newspaper removed the section from its e-paper edition and updated its policies so third-party licensed editorial content must comply with its editorial standards and is explicitly identified.

The Philadelphia Inquirer’s editor Gabriel Escobar said using AI to produce content was a “violation of our own internal policies and a serious breach” and that they were “looking at ways to improve the vetting of content in these supplements going forward”.

King Features told the Sun-Times it has “a strict policy with our staff, cartoonists, columnists, and freelance writers against the use of AI to create content. The Heat Index summer supplement was created by a freelance content creator who used AI in its story development without disclosing the use of AI.

“We are terminating our relationship with this individual. We regret this incident and are working with the handful of publishing partners who acquired this supplement.”

The journalist who created the piece, Marco Buscaglia, confirmed to The New York Times that it was partially created by AI.

He said: “It was just a really bad error on my part and I feel bad that it has affected The Sun-Times and King Features, and that they are taking the shrapnel for it.”

Fake and dubious ‘experts’ quoted across UK and US media

Dozens of stories were removed or amended by leading publishers after Press Gazette revealed a trend of fake and dubious experts being widely quoted in UK and US media.

Companies selling CBD oil, sex toys, vapes and essay writing services are among those apparently seeking to game the Google algorithm to achieve higher rankings in search by using AI to associate themselves with ‘expert’ voices and gain links from bona-fide news outlets.

Many of the fake experts use journalist response services to answer journalist queries, with the speed with which they provide quotes suggesting they are using AI.

David Higgerson, chief content officer at Reach, which is the UK’s largest commercial publisher and was among those to have published experts who do not appear to be real, later said: “It is deeply upsetting and concerning when our journalists – or any journalists across the industry – are misled by people creating fake experts. It is clear that this is becoming a bigger issue, with more sophisticated efforts to mislead being deployed.

“Our readers deserve better and we will continue to tighten our controls around this and work with our newsrooms on training and protocols.

“At the same time the industry will need to work together to develop new ways to manage these growing threats.”

Sports Illustrated publishes AI-generated writers (November 2023)

Sports Illustrated was accused of publishing AI-written articles after several authors were discovered to have AI-generated headshots and fake names.

One product review author’s photo was found for sale on a website selling AI-generated headshots. His author bio described him as someone who “spent much of his life outdoors, and is excited to guide you through his never-ending list of the best products to keep you from falling to the perils of nature”.

Sports Illustrated denied that the content itself was generated by AI but Futurism, which first exposed the story, said it had spoken to two sources who said it was.

A spokesperson for Sports Illustrated publisher Arena Group told Futurism a third-party company that produced e-commerce content was to blame and that the articles had been removed.

“Today, an article was published alleging that Sports Illustrated published AI-generated articles. According to our initial investigation, this is not accurate.

“The articles in question were product reviews and were licensed content from an external, third-party company, AdVon Commerce. A number of AdVon’s e-commerce articles ran on certain Arena websites. We continually monitor our partners and were in the midst of a review when these allegations were raised.

“AdVon has assured us that all of the articles in question were written and edited by humans. According to AdVon, their writers, editors, and researchers create and curate content and follow a policy that involves using both counter-plagiarism and counter-AI software on all content.

“However, we have learned that AdVon had writers use a pen or pseudo name in certain articles to protect author privacy — actions we don’t condone — and we are removing the content while our internal investigation continues and have since ended the partnership.”

The Sports Illustrated Union said staff were “horrified” by the reporting, adding: “If true, these practices violate everything we believe in about journalism.”

CNET removes tranche of AI-generated articles (January 2023)

Tech news website CNET removed articles written by AI after it was revealed the site had been quietly publishing AI-generated content for months and errors were identified in some of it.

CNET then issued corrections on 41 out of 77 stories written with its AI tool.

Editor-in-chief Connie Guglielmo explained: “In November, one of our editorial teams, CNET Money, launched a test using an internally designed AI engine – not ChatGPT – to help editors create a set of basic explainers around financial services topics.

“We started small and published 77 short stories using the tool, about 1% of the total content published on our site during the same period. Editors generated the outlines for the stories first, then expanded, added to and edited the AI drafts before publishing. After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit.”

Guglielmo said a “small number” of the articles needed “substantial correction” and “several” others had “minor issues such as incomplete company names, transposed numbers or language that our senior editors viewed as vague”.

“We’ve paused and will restart using the AI tool when we feel confident the tool and our editorial processes will prevent both human and AI errors.”

The byline for articles compiled by the AI engine was altered to CNET Money with an AI disclosure more clearly displayed.
