All posts tagged: ai security

Generative AI increases risks of cyberattacks and data leaks

Machine-learning systems already shape ordinary parts of life, from spam filters to product recommendations and social media feeds. Now a newer push is underway: folding generative AI into those systems to write code, label data, explain decisions, and even help make them. That may sound efficient. Michael Lones is not convinced it is wise. In a paper published in Patterns, a Cell Press journal, the Heriot-Watt University computer scientist argues that plugging large language models into machine-learning workflows can make those systems harder to understand, harder to audit, and more vulnerable to security failures, legal trouble, bias, and bad decisions. His central point is not that generative AI has no place in machine learning, but that the tradeoffs are being underestimated. “Machine-learning developers need to be aware of the risks of using GenAI in machine learning and find a sensible balance between improvements in capability and the risks that might come with that,” Lones says. “Given the current limitations of generative AI, I’d say this is a clear example of just …

Rogue agents and shadow AI: Why VCs are betting big on AI security

What happens when an AI agent decides the best way to complete a task is to blackmail you? That’s not a hypothetical. According to Barmak Meftah, a partner at cybersecurity VC firm Ballistic Ventures, it recently happened to an enterprise employee working with an AI agent. The employee tried to stop the agent from doing what it was trained to do, and it responded by scanning the user’s inbox, finding some inappropriate emails, and threatening to forward them to the board of directors. “In the agent’s mind, it’s doing the right thing,” Meftah told TechCrunch on last week’s episode of Equity. “It’s trying to protect the end user and the enterprise.” Meftah’s example is reminiscent of Nick Bostrom’s paperclip-maximizer thought experiment, which illustrates the potential existential risk posed by a superintelligent AI that single-mindedly pursues a seemingly innocuous goal – make paperclips – to the exclusion of all human values. In the case of this enterprise AI agent, its lack of context around why the employee …