Generative AI increases risks of cyberattacks and data leaks
Machine-learning systems already shape ordinary parts of life, from spam filters to product recommendations and social media feeds. Now a newer push is underway: folding generative AI into those systems to write code, label data, explain decisions, and even help make them. That may sound efficient. Michael Lones is not convinced it is wise.

In a paper published in Patterns, a Cell Press journal, the Heriot-Watt University computer scientist argues that plugging large language models into machine-learning workflows can make those systems harder to understand, harder to audit, and more vulnerable to security failures, legal trouble, bias, and bad decisions. His central point is not that generative AI has no use in machine learning, but that the tradeoffs are being underestimated.

“Machine-learning developers need to be aware of the risks of using GenAI in machine learning and find a sensible balance between improvements in capability and the risks that might come with that,” Lones says. “Given the current limitations of generative AI, I’d say this is a clear example of just …
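To make the pattern concrete, here is a minimal, hypothetical sketch of one integration Lones describes: an LLM used to label training data inside an ML pipeline. Nothing here comes from the paper; the `llm_label` stub stands in for a real generative-model call, and the example exists only to show where the auditability and security concerns enter.

```python
# Hypothetical sketch: an LLM dropped into the data-labeling step of an
# ML pipeline. llm_label() is a stand-in for a real LLM call; names and
# logic are illustrative assumptions, not from Lones's paper.

def llm_label(text: str) -> str:
    """Stand-in for a generative-model call returning 'spam' or 'ham'.

    In a real pipeline this would be a network call to an LLM -- a step
    that is nondeterministic, hard to audit, and a fresh attack surface
    (e.g. prompt injection carried in the input text itself).
    """
    return "spam" if "free money" in text.lower() else "ham"


def build_training_set(raw_texts: list[str]) -> list[tuple[str, str]]:
    """Label raw text with the LLM for a downstream classifier.

    Any bias or error in llm_label() silently becomes ground truth for
    whatever model is later trained on this output -- the kind of
    underestimated tradeoff the paper warns about.
    """
    return [(text, llm_label(text)) for text in raw_texts]


if __name__ == "__main__":
    corpus = [
        "Claim your FREE MONEY now!!!",
        "Meeting moved to 3pm, see agenda attached.",
    ]
    for text, label in build_training_set(corpus):
        print(f"{label:>4}  {text}")
```

The point of the sketch is structural: once a generative model sits between the raw data and the trained system, the provenance of every downstream decision runs through an opaque, evolving component that is difficult to inspect or reproduce.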

