
AI workslop is taking over our offices – here’s how to spot who is using it

Over the past year or so, Gina* has noticed a new and frustrating burden seeping into her working day. As a communications specialist, helping colleagues to write statements and emails, articles and LinkedIn posts has always been a big part of her job. But now the drafts that she receives from her co-workers all seem to possess the same “weird, inhuman tone that just jars when you read it”.

That’s because instead of just having a go at setting out their thoughts themselves, as they would have done in a pre-AI era, they’ve fed a few ideas into a chatbot, then sent the results over to Gina – often without checking them over themselves. She now has to spend hours amending – and humanising – this AI-generated work, “and it’s way harder than editing a person’s writing”. She has had to start “sending email and article drafts back to people and asking them to ‘de-AI’ it because it sounds so weird. And I’ve had people get mad at me for doing that”.

Gina is drowning in AI “workslop”. This material seems just about plausible and passable on the surface, but probe a little further and you soon realise that it’s either utterly lacking in substance, or littered with errors (perhaps both). It’s polished but empty, slick but soulless. “The problem is that there’s not a lot of awareness of how AI slop reads, how obvious it is and how bad it makes both people and companies look,” she says. “I’m not against AI at work at all, but I think people need to do more to educate themselves about using it more intelligently and not relying on it for absolutely everything.”

She is certainly not the only one fighting a tide of slop. Last year, a survey of US workers reported that 40 per cent had received workslop from their colleagues in the past month. Another UK-based report estimated that employees are spending almost as much time verifying, checking or redoing work produced by AI as they do using it throughout the week. The same report found that 32 per cent of employees are suffering from AI burnout, the mental fatigue that results from constantly checking AI output.

The irony, of course, is that we’ve been told again and again that AI is going to revolutionise the working world for the better. And yet for some, these advances only seem to be making work more difficult, more arduous. In a recent survey from the software company WalkMe, 66 per cent of respondents said that AI doesn’t actually help them to work faster or give them more free time. Towards the end of last year, the Harvard Business Review went so far as to declare that workslop is “destroying productivity”.

Some workers report feeling burned out by the burden of checking and correcting AI-generated work (Getty/iStock)

When employees are “under constant pressure”, exhausted by “juggling multiple demands”, AI can seem to “offer a shortcut that feels ‘good enough’”, says Fred Funck, director of executive coaching programmes at the Centre for Creative Leadership. If expectations are sky high and deadlines are tight, it’s easy to see why a worn out worker might turn to ChatGPT or Copilot in their hour of need. For many, Funck adds, “AI feels like a kind of digital magic wand. It removes friction, reduces cognitive load, and delivers instant results. In a world that increasingly values convenience, this is incredibly seductive”.

But while using AI might seem efficient in theory, often the slack still needs to be picked up elsewhere – by other colleagues. “The cost of poor AI output does not always sit with the person who generated it,” says Nick Renner, skills forecasting partner at training provider BPP. “A manager may quietly rewrite something that reads oddly to avoid delays, or a colleague may step in to fix a presentation before it reaches a client.”

This “corrective work”, he adds, tends to happen “informally as part of everyday work” – and so it can be hard to quantify its impact. “When an hour is saved producing a first draft, but 40 minutes is then spent verifying, correcting or rewriting it,” Renner says, we are simply shifting the “bottleneck” in the workflow from production (the time spent writing) to review (the time spent editing). So AI hasn’t necessarily reduced the workload, it has just moved it around a little. This, he notes, “raises a fundamental question about efficiency: Is time genuinely being saved, or is the constraint just moving downstream?”

Rosie Wilkins, brand strategist and founder of Brand by Design, is familiar with these frustrations – in fact, she’s often the one taking on this “corrective work”. She works with small businesses to help them craft a more authentic presence online, but recently, she says, “I feel like I’ve massively hit this wall of AI”, with some clients sending her Google Docs filled with AI-generated copy for social media posts. “You didn’t write this, you definitely haven’t read it back, and it’s taking me longer and longer to make revisions and changes.”

The cost of poor AI output does not always sit with the person who generated it

Nick Renner

She often ends up slogging through “paragraphs of nothing” – and has become all too aware of the tell-tale signs that hint that a piece of writing is the work of ChatGPT or Claude rather than a living, breathing human. “I think the one that we all know is the em dash (—), which is frustrating because I have always used that a lot in my writing,” she says. “And the idea that everything is done ‘quietly’, that’s one that really irritates me – the word quietly is used a lot.” Indeed, while writing this piece, I received a good few emails from experts telling me about how workslop is “quietly” taking over our inboxes – oh, the irony.

AI copy, she adds, “seems to churn out short little sentences rather than flowing” (see all those long LinkedIn posts laid out like a modernist poem). But “as soon as people start noticing these things, they start saying to their AI, ‘don’t use this,’” she notes. “Then something else will crop up. So in a couple of months’ time, we’ll probably be pinpointing different things.”

Workslop is not just irritating; it can have real, tangible consequences. In April, the Wall Street law firm Sullivan & Cromwell had to apologise to a New York federal judge after filing court documents that included AI-generated “hallucinations”, such as incorrect citations and misquotations of the US bankruptcy code. And in 2025, the consulting firm Deloitte had to partially refund its A$440,000 fee after a report it produced for the Australian government was found to contain AI errors.

All of this raises one glaring question. Why are people still submitting workslop, when it is so obviously detectable, and there is such potential for reputation-damaging errors? “Some genuinely can’t see the problem,” says Renner. “They have stopped reading their own output critically, because the act of generating it [with AI] felt like work.” And, he adds, “if nothing happens when poor work is submitted, the behaviour persists”.

Is AI saving us time at work – or just shifting the burden to others? (Getty/iStock)

Wilkins, meanwhile, thinks we have almost reached a point where AI is so lauded that “it feels like the original thought that we had maybe isn’t good enough”, even though “there’s nothing wrong with the way your brain wanted to put that sentence together or the way you wanted to communicate that idea. Because AI is so prevalent now, we’re made to feel like [our work] could be better, so we need to run it through AI. But it was fine in the first place”.

In the longer term, there is a risk that, just as muscles weaken over time without use, we’ll see “a gradual erosion of core skills like thinking, writing and structuring ideas”, says Funck. And it’s not just those at the start of their careers who risk missing out.

Our professional judgement, Renner says, gets built up “through the act of struggling with the work, not just completing it”. If younger workers are “consistently shortcutting that struggle”, he explains, “those capabilities may never fully develop”. And for those with more experience, “the removal of learning-by-doing work creates a different risk”, that of “expertise atrophy. Skills that are not exercised will decay, even when they were once well established”.

It’s a depressing picture – although Renner does note that the human capability to recognise “when an AI output is good enough, when it is subtly wrong, and when it is confidently wrong in ways that will cause real problems” is likely to become an “increasingly valuable” skill.

So next time you’re tempted to placate that colleague who keeps “circling back” about that report by sending over some badly tossed word salad? Just know that said colleague is entirely aware of what you’ve done – and is probably seething with resentment. The least you can do is get rid of those em dashes.

*Name changed for anonymity
