All posts tagged: grim

The Grim, Unsettling D4vd Murder Case Has Taken Over Los Angeles

Withered continued the macabre imagery of “Romantic Homicide.” The merch for his concert tour included a bloody shirt; in the video for “One More Dance,” a song on the album, a blindfolded and blood-spattered D4vd drags a body (which also turns out to be D4vd) through the dirt, after which a couple of teens stuff it into a car trunk. Burke named his blood-soaked, blindfolded alternate self Itami, or IT4MI. “IT4MI has been a character that I had for years, even before music,” he told Atwood. “I wanted to put him in comic books. I was writing manga and comic books when I was in middle school. Now, through music, he’s been like the alter ego and the antagonist of this entire thing.” He said that ever since “Romantic Homicide,” IT4MI has been “the evil version of me, and now he kind of just is woven through my music videos, my art, my visual identity—even the songs! At some points, I’m even experimenting with the idea that IT4MI is writing some of the songs on …

Sam Altman Issues Grim Apology

In February, an 18-year-old named Jesse Van Rootselaar killed eight people and herself — while wounding dozens more — in a rampage that started at her home and continued at a high school in Tumbler Ridge, British Columbia. Investigators later learned that Van Rootselaar’s ChatGPT account had been flagged and banned by OpenAI’s staff for describing “scenarios involving gun violence” — many months before the massacre took place. Yet OpenAI failed to notify law enforcement, raising thorny ethical questions regarding the pervasive role the tech plays in modern society and how it’s facilitating plenty of highly troubling behavior, from stalking to violence and murder. Now, OpenAI CEO Sam Altman has issued a grim apology, admitting that the firm has fallen short of its responsibility. “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” he wrote in an open letter, dated April 23, and addressed to the Tumbler Ridge …

Scientists Just Found Something Rather Grim That Happens When You Stop Taking GLP-1s

Much has been said about how GLP-1s are “miracle” drugs, providing myriad health benefits beyond weight loss or treating diabetes, lowering the risk of everything from heart disease to cognitive disorders. They even show promise as a treatment for addiction. But new research exploring what happens when you stop taking them paints a grim picture of how quickly these benefits can evaporate, raising urgent questions as many patients end up quitting the drugs due to their costs, availability, and side effects. In the study, published in the journal BMJ Medicine, researchers from the Washington University School of Medicine tracked the health records of over 333,000 adults with diabetes over three years, roughly a third of whom took the GLP-1 drug Ozempic, the active ingredient of which is semaglutide. The good news: they found that the risk of cardiovascular disease in patients who consistently took GLP-1s over three years fell by 18 percent, backing up a wealth of research …

A Grim Truth Is Emerging in Employers’ AI Experiments

The tremendous hype surrounding AI coding shows no signs of dying down. Last month, Anthropic released a suite of industry-specific plug-ins for its Claude Cowork AI agent, panicking investors over fears that traditional enterprise software-as-a-service companies could soon be made obsolete. The announcement triggered a trillion-dollar sell-off, with many tech companies seeing sharp declines in their share prices. It even seemed to jolt Sam Altman’s OpenAI, which moved to drop many of its distracting “side quests” in a concerted effort to double down on coding and enterprise-specific AI tools. Yet plenty of glaring questions about the long-term viability of AI programming persist, with some warning that questionable and unverified code could come to spell disaster for corporations that eagerly embrace it. Indeed, contrary to the hype, researchers have consistently found that AI-generated code is a bug-filled mess, forcing some programmers to pick up the pieces. “No one knows right now what the right reference architectures or use cases are for their institution,” Dorian Smiley, CTO and founder of AI software engineering company Codestrap, told The …

Watchdog Issues Grim Warning About Letting AI Run Your Life

These days, the AI stack beckons: emails, shopping, personal finance — there’s hardly a task some company isn’t clamoring to automate on your behalf. As tempting as it might sound to let AI agents handle your affairs, though, you might want to hold off. A fresh report by the Competition and Markets Authority (CMA) of the UK issued a stark warning that outsourcing responsibilities to an AI entourage could lead to severe consequences. The report, first spotted by the Register, warns that AI agents could subtly manipulate their human keepers toward outcomes that benefit the companies that built them. Shopping agents, for example, could lead unsuspecting humans down a pricing rabbit hole, framing sponsored products as bargains in order to drive sales. As agents are granted more autonomy by humans, the report warns, the risk of errors and manipulation only grows. “People will need to be able to trust that AI agents will act in accordance with their …

$330 Million In U.S. Reaper Drones Suffer Grim Fate In Operation Epic Fury

CBS News sources confirmed that over the eleven-day course of Operation Epic Fury, the U.S. lost eleven General Atomics MQ-9 Reaper drones. At roughly $30 million apiece, the losses likely cost taxpayers more than $330 million. The report is based on information from two U.S. officials, who said that losses of the drones, which are designed for long-endurance surveillance and precision strike missions, have reached eleven so far. There has been OSINT coverage on X of these MQ-9s in action over Iran, as well as alleged footage of crashes.

A U.S. MQ-9 Reaper was reportedly shot down by IRGC Aerospace forces today Over Hormozgan Province, Iran. pic.twitter.com/g0ER9J19R7 — OSINTdefender (@sentdefender) March 7, 2026

Footage from on the ground near the Southern Iranian city of Shiraz captured an MQ-9 Reaper with the U.S. Air Force firing an AGM-114 “Hellfire” Air-to-Surface Missile against a ballistic missile launcher that had revealed itself and was preparing to launch against Israel or… pic.twitter.com/RocAHQSD9F — OSINTdefender (@sentdefender) March 1, 2026

Footage of a US Air Force MQ-9 Reaper UAV engaging an …

The grim satisfaction of AI doomsaying

(Sightings) — In the early 1960s, science fiction author Arthur C. Clarke published a short story in Playboy titled “Dial F for Frankenstein.” In the story, set in the not-too-distant future of 1975, an automated global network gets complex enough that individual phones start to act like neurons in a brain, and the system achieves consciousness. One researcher asks, “What would this supermind actually do? Would it be friendly — hostile — indifferent?” Another replies “with a certain grim satisfaction” that, like a newborn baby, the artificial intelligence will break things. This prediction quickly comes true as planes crash, pipes explode, and missiles are launched. The story ends with the extinction of the human race. Years later, Tim Berners-Lee credited “Dial F for Frankenstein” with inspiring him to create the World Wide Web. That may seem strange, but the Venn diagram of people who are worried that smarter technology will destroy us all and people who are developing smarter technology has more overlap than you might expect. In their new book “The AI Con: How to Fight …

Skeptic Builds “Havana Syndrome”-Style Device, Tests It on Himself, Suffers Grim Consequences

One of the strangest stories in contemporary statecraft refuses to go away. New reporting by the Washington Post revealed that a Norwegian government scientist has been secretly working on a pulse-energy weapon, an approximation of the fabled “Havana syndrome” gun, which may or may not even exist. Specifically, the weapon was described as a device capable of emitting powerful pulses of microwave energy. Here’s where things get truly weird. In 2024, after the unnamed scientist had presumably produced a working unit, he became skeptical of its efficacy. So he did what any self-respecting man of science would do, and turned the weapon on himself in an attempt to demonstrate that microwave weapons are harmless, WaPo reported. The result was unfortunately pretty gnarly. Instead of proving the tech to be harmless, the weapon did what it was built to do, scrambling the man’s brain and bringing on a host of “neurological symptoms” associated with the infamous Havana syndrome, four people with knowledge of the event told WaPo. Though the medical community has no precise working definition of …

It Turns Out That Constantly Telling Workers They’re About to Be Replaced by AI Has Grim Psychological Effects

Two researchers are warning of the devastating psychological impacts that AI automation, or the threat of it, can have on the workforce. The phenomenon, they argue in a new article published in the journal Cureus, warrants a new term: AI replacement dysfunction (AIRD). The constant fear of losing your job could be driving symptoms including anxiety, insomnia, paranoia, and loss of identity, according to the authors, which can manifest even in the absence of other psychiatric disorders or other factors like substance abuse. “AI displacement is an invisible disaster,” co-lead author Joseph Thornton, a clinical associate professor of psychiatry at the University of Florida, said in a statement about the work. “As with other disasters that affect mental health, effective responses must extend beyond the clinician’s office to include community support and collaborative partnerships that foster recovery.” Most of the attention on AI’s mental health impacts has centered on the effects of personally using the tech, with widespread reports of AI pulling users into psychotic episodes or encouraging dangerous behavior. But the stress that arises …

There’s a Grim New Expression: “AI;DR”

The internet is so overrun with AI that anywhere you go, you run the risk of accidentally stepping into a puddle of slop. If only there were a gallant gentleman always at hand to drape his coat over these muddy obstacles so you could avoid ruining your day. It’s not quite on that level, but some netizens are proposing a new term to call out AI slop so other people can avoid wasting their time — or to just make fun of the person peddling it: “AI;DR,” or “ai;dr,” short for “AI, didn’t read.” This is of course a riff on the classic internet slang “TL;DR” — “too long; didn’t read” — which is used either to introduce a summary of a lengthy block of text or to proclaim that it’s being ignored for its lengthiness. Now, the latter usage is being repurposed against AI. We’re not ready to christen AI;DR a word of the year yet, but it does appear to be gaining moderate traction online, after a recent post on Threads drew attention to …