All posts tagged: vulnerabilities

Hackers are abusing unpatched Windows security flaws to hack into organizations

Hackers have broken into at least one organization using Windows vulnerabilities published online by a disgruntled security researcher over the last two weeks, according to a cybersecurity firm. On Friday, cybersecurity company Huntress said in a series of posts on X that its researchers have seen hackers taking advantage of three Windows security flaws, dubbed BlueHammer, UnDefend, and RedSun.

It’s unclear who the target of this attack is, or who the hackers are. BlueHammer is the only one of the three vulnerabilities that Microsoft has patched so far; a fix was rolled out earlier this week.

The hackers appear to be exploiting the bugs using exploit code that the security researcher published online. Earlier this month, a researcher who goes by Chaotic Eclipse published on their blog what they said was code to exploit an unpatched vulnerability in Windows. The researcher alluded to a conflict with Microsoft as the motivation for publishing the code. “I was not bluffing Microsoft and I’m doing it again,” they wrote. “Huge thanks to …

In the Wake of Anthropic’s Mythos, OpenAI Has a New Cybersecurity Model—and Strategy

OpenAI on Tuesday announced the next phase of its cybersecurity strategy and a new model specifically designed for use by digital defenders, GPT-5.4-Cyber. The news comes in the wake of an announcement last week by competitor Anthropic that its new Claude Mythos Preview model is only being privately released for now—because, the company says, it could be exploited by hackers and bad actors. Anthropic also announced an industry coalition, including competitors like Google, focused on how advances in generative AI across the field will impact cybersecurity. OpenAI seemed to be seeking to differentiate its message on Tuesday by striking a less catastrophic tone and touting its existing guardrails and defenses while hinting at the need for more advanced protections in the long term. “We believe the class of safeguards in use today sufficiently reduce cyber risk enough to support broad deployment of current models,” the company wrote in a blog post. “We expect versions of these safeguards to be sufficient for upcoming more powerful models, while models explicitly trained and made more permissive for cybersecurity …

Anthropic’s Mythos Will Force a Cybersecurity Reckoning—Just Not the One You Think

Anthropic said this week that the debut of its new Claude Mythos Preview model marks a critical juncture in the evolution of cybersecurity, representing an unprecedented existential threat to existing software defense strategies. So, is it more AI hype, or a true turning point?

According to Anthropic, Mythos Preview crosses a capability threshold: it can discover vulnerabilities in virtually any operating system, browser, or other software product and autonomously develop working exploits. With this in mind, the company is releasing the new model to only a few dozen organizations for now (including Microsoft, Apple, Google, and the Linux Foundation) as part of a consortium dubbed Project Glasswing.

But after years of speculation about how generative AI could impact cybersecurity, the news this week ignited controversy about whether a reckoning has really arrived and what it might look like in practice. Some are extremely skeptical of Anthropic’s claims. They argue that existing AI agents can already help users find and exploit vulnerabilities far more easily and cheaply than ever before, and that this reality is …

Mythos autonomously exploited vulnerabilities that survived 27 years of human review. Security teams need a new detection playbook

A 27-year-old bug sat inside OpenBSD’s TCP stack while auditors reviewed the code, fuzzers ran against it, and the operating system earned its reputation as one of the most security-hardened platforms on earth. Two packets could crash any server running it. Finding that bug cost a single Anthropic discovery campaign approximately $20,000. The specific model run that surfaced the flaw cost under $50. Anthropic’s Claude Mythos Preview found it. Autonomously. No human guided the discovery after the initial prompt.

The capability jump is not incremental

On Firefox 147 exploit writing, Mythos succeeded 181 times versus 2 for Claude Opus 4.6. A 90x improvement in a single generation. SWE-bench Pro: 77.8% versus 53.4%. CyberGym vulnerability reproduction: 83.1% versus 66.6%. Mythos saturated Anthropic’s Cybench CTF at 100%, forcing the red team to shift to real-world zero-day discovery as the only meaningful evaluation left.

Then it surfaced thousands of zero-day vulnerabilities across every major operating system and every major browser, many one to two decades old. Anthropic engineers with no formal security training asked Mythos to find remote …

Anthropic’s AI to Help Apple Find iOS, macOS, and Safari Vulnerabilities

Anthropic on Tuesday announced Project Glasswing, a new initiative that will enable tech companies to use its new AI model Mythos Preview to find and fix security vulnerabilities or weaknesses across operating systems and web browsers. Mythos Preview has already found thousands of zero-day vulnerabilities, including some in every major operating system and web browser, according to Anthropic. “AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities,” said Anthropic. “Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.” “Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes,” added the company. Mythos Preview will not be available to the public. Instead, Anthropic said use of the model will be limited to selected partners, with the initial group beyond Anthropic itself including Apple, Amazon Web Services, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto …

Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything

Following leaked revelations at the end of March that Anthropic had developed a powerful new Claude model, the company formally announced Mythos Preview on Tuesday along with news of an industry consortium it has convened, known as Project Glasswing, to grapple with the cybersecurity implications of the new model and advancing capabilities more generally across the AI field. The group includes Microsoft, Apple, and Google as well as Amazon Web Services, the Linux Foundation, Cisco, Nvidia, Broadcom, and more than 40 other tech, cybersecurity, critical infrastructure, and financial organizations that will have private access to the model, which is not yet being generally released. The idea, in part, is simply to give the developers of the world’s foundational tech platforms time to turn Mythos Preview on their own systems so they can mitigate vulnerabilities and exploit chains that the model develops in simulated attacks. More broadly, Anthropic emphasizes that the purpose of convening the effort is to kickstart urgent exploration of how AI capabilities across the industry are on the precipice, the company says, of …

Meta Pauses Work With Mercor After Data Breach Puts AI Industry Secrets at Risk

Meta has paused all its work with the data contracting firm Mercor while it investigates a major security breach that impacted the startup, two sources confirmed to WIRED. The pause is indefinite, the sources said. Other major AI labs are also reevaluating their work with Mercor as they assess the scope of the incident, according to people familiar with the matter. Mercor is one of a few firms that OpenAI, Anthropic, and other AI labs rely on to generate training data for their models. The company hires massive networks of human contractors to generate bespoke, proprietary datasets for these labs, which are typically kept highly secret as they’re a core ingredient in the recipe to generate valuable AI models that power products like ChatGPT and Claude Code. AI labs are sensitive about this data because it can reveal to competitors—including other AI labs in the US and China—key details about the ways they train AI models. It’s unclear at this time whether the data exposed in Mercor’s breach would meaningfully help a competitor. While OpenAI …

The Best Dark Web Monitoring Services and Bundles

Data breaches have become a fact of our digital world. Verizon’s 2025 Data Breach Investigations Report recorded over 12,000 breaches in that year alone, nearly three dozen a day. And it gets worse: Troy Hunt, founder of Have I Been Pwned, says that data breaches are not being disclosed as openly as they once were. “Now more than ever, there is an abundant lack of disclosure from breached organizations.”

There is, however, one last line of defense: a dark web monitoring service. Here’s what these services are, how they work, and which ones we prefer. While you’re at it, consider looking into identity theft services, which can provide insurance against money lost online.

What Do Dark Web Monitoring Services Do?

Hunt says that, contrary to the name, dark web monitoring services aren’t necessarily focused on the dark web, the parts of the internet available only through specialized software such as the Tor browser. “Most of the time when data is available, it’s not the dark web, it’s the clear web,” says Hunt. While it’s true that …
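At their core, these services do something conceptually simple: collect leaked credential dumps from wherever they surface, index them by identity, and alert you when your email turns up. The sketch below illustrates that matching step under stated assumptions — the dump records, the index structure, and the function names are purely hypothetical illustrations, not any vendor’s actual implementation:

```python
# Minimal sketch of the core lookup a monitoring service performs:
# index leaked credential records by identity (email here), then
# answer "has this identity appeared in a breach, and where?"

from collections import defaultdict

def build_breach_index(dump_records):
    """Index leaked records by normalized (lowercased) email address."""
    index = defaultdict(list)
    for record in dump_records:
        email = record["email"].strip().lower()
        index[email].append(record["source"])
    return index

def check_exposure(index, email):
    """Return the list of breach sources an email appears in."""
    return index.get(email.strip().lower(), [])

# Hypothetical leaked records a scraper might have collected
leaked = [
    {"email": "alice@example.com", "source": "forum-dump-2024"},
    {"email": "ALICE@example.com", "source": "paste-site-2025"},
    {"email": "bob@example.com", "source": "forum-dump-2024"},
]

index = build_breach_index(leaked)
print(check_exposure(index, "Alice@Example.com"))  # ['forum-dump-2024', 'paste-site-2025']
print(check_exposure(index, "carol@example.com"))  # []
```

Real services add the hard parts this sketch omits: continuously scraping paste sites, forums, and Tor hidden services; deduplicating and verifying dumps; and matching on more than email (phone numbers, SSNs, card numbers).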

Women who are open to “sugar arrangements” tend to show deeper psychological vulnerabilities

A recent study published in the Archives of Sexual Behavior suggests that young women who are open to “sugar relationships” tend to experience deeper psychological vulnerabilities, such as difficulties with emotional coping and relationship skills. The research provides evidence that an acceptance of trading intimacy for material benefits is often linked to negative childhood experiences that shape how a person views themselves and others. Sugar relationships involve an arrangement where companionship or sexual intimacy is exchanged for resources like money or gifts. Public discussions about these arrangements tend to focus heavily on the financial or ethical aspects of the exchange. The authors of the new study wanted to look beyond the surface to understand the underlying emotional and cognitive patterns that make someone receptive to this type of dating. “Research on sugar relationships and other forms of sexual–economic exchange has grown rapidly in recent years. Many studies have reported that women involved in these relationships tend to show higher levels of emotional insecurity, relational difficulties, or vulnerabilities in personality functioning,” said study author Norbert Meskó, …

Anthropic’s Claude found 22 vulnerabilities in Firefox over two weeks

In a recent security partnership with Mozilla, Anthropic found 22 separate vulnerabilities in Firefox, 14 of them classified as “high-severity.” Most of the bugs have been fixed in Firefox 148 (the version released this February), although a few fixes will have to wait for the next release. Anthropic’s team used Claude Opus 4.6 over the span of two weeks, starting in the JavaScript engine and then expanding to other portions of the codebase. According to the post, the team focused on Firefox because “it’s both a complex codebase and one of the most well-tested and secure open-source projects in the world.”

Notably, Claude Opus was much better at finding vulnerabilities than writing software to exploit them. The team spent $4,000 in API credits trying to concoct proof-of-concept exploits, but succeeded in only two cases. Still, it’s a reminder of how powerful AI tools can be for open-source projects, even if they bring a flood of bad merge requests alongside the useful ones.