Rogue Group Gains Access to Anthropic’s Dangerous New Mythos AI
Remember Claude Mythos, Anthropic’s new AI model, hyped as being so powerful that it was too dangerous to release to the public? Well, it’s already been broken into, according to new reporting from Bloomberg.

A small group of Discord users gained access to a preview version of Mythos, a source told the outlet, on the same day Anthropic announced it would release the model exclusively to a select ring of companies.

“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” a spokesperson for Anthropic told Bloomberg in a statement. The company added that it has so far found no evidence of unauthorized access to Mythos.

The group supposedly doesn’t have any nefarious intentions. It has been using Mythos regularly since gaining access, according to Bloomberg, though only for non-cybersecurity-related purposes. The source described the group as being interested in “playing around” with new models, rather …