Simple printed signs can hijack self-driving cars and robots
Automatic, robotic systems that operate in our physical environment, also known as embodied AI systems, are continually learning and adapting to their surroundings through sensor-based observations of their environment. Researchers from the University of California, Santa Cruz, and Johns Hopkins University have identified new vulnerabilities with embodied AI by investigating how these systems may misperform and or create unsafe situations due to being misled or intentionally misdirected by their operators via the environment. In a recent study, the researchers discovered that place-based texts, such as those on signs or posters placed in the environment to be read and acted upon by humans, can be misinterpreted by AI as authoritative commands that override the machine’s internal safety protocols. The authors found that in many cases this type of command text was enough to compel the machine to act in ways that were contrary to its original programming and design. Alvaro Cardenas, a computer science and engineering professor at UCSC, and Cihang Xie, an assistant professor of computer science and engineering, led this research. The findings represent …

