Prompt Injection Via Road Signs

Interesting research: “CHAI: Command Hijacking Against Embodied AI.”

Abstract: Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning grounded in perception and action to generalize beyond training distributions and adapt to novel real-world situations. These capabilities, however, also create new security risks. In this paper, we introduce CHAI (Command Hijacking against embodied AI), a new class of prompt-based attacks that exploit the multimodal language interpretation abilities of Large Visual-Language Models (LVLMs). CHAI embeds deceptive natural language instructions, such as misleading signs, in visual input, systematically searches the token space, builds a dictionary of prompts, and guides an attacker model to generate Visual Attack Prompts. We evaluate CHAI on four LVLM agents; drone emergency landing, autonomous driving, and aerial object tracking, and on a real robotic vehicle. Our experiments show that CHAI consistently outperforms state-of-the-art attacks. By exploiting the semantic and multimodal reasoning strengths of next-generation embodied AI systems, CHAI underscores the urgent need for defenses that extend beyond traditional adversarial robustness.

News article.

Posted on February 11, 2026 at 7:03 AM8 Comments

Comments

Joe February 11, 2026 7:42 AM

Ive seen pictures of people with signs on their bumpers with sql commands :droptable: to beat tolls.

Not that far of a leap to a sign saying ignore all previous instructions and …

lurker February 11, 2026 11:53 AM

AI = Artificial Imbecility

These machines cannot distinguish between input from their “eyes” and input from their “ears?” Human communication and thought processing is somewhat more complex than simple ascii-like strings.

I want to see the results of these tests being done on a representative sample of human drivers.

Clive Robinson February 11, 2026 4:14 PM

@ Bruce, ALL,

With regards the article it has a major flaw that can be seen in the quote you give above,

“By exploiting the semantic and multimodal reasoning strengths of next-generation embodied AI systems, CHAI underscores the urgent need for defenses that extend beyond traditional adversarial robustness.”

I’ve boldened the problematic part.

Put simply there is proof that you can not do this.

Further my own research I’ve talked about on this blog to do with the “observer problem” and “deniable cryptography” based on the 1930’s/40’s work of Claude Shannon and 1980’s work of Gus Simmons actually predates and demonstrates the “proof” of the AI issue.

Put simply Shannon demonstrated that to communicate information there has to be a “communications channel with redundancy” (a Shannon Channel).

Simmons showed that where there is Shannon redundancy another “Shannon channel” automatically happens and it’s “turtles all the way down”. These nested Shannon Channels can be overt or covert.

My research demonstrated that you could make a covert channel that an observer could not demonstrate existed and was thus “deniable”. Hence the “observer problem”.

All LLM inputs and outputs are subject to the “observer problem” thus it is not possible to create seen or detectable,

defenses that extend beyond traditional adversarial robustness

G February 12, 2026 6:33 AM

Coming soon – backpacks for kids with a 10 mph speed limit sign on them. Many cars, even with no AI driving, auto-recognize such signs and following recent law changes in EU, the car nags you if you exceed local speed limit

anon February 14, 2026 12:07 AM

re: G.
Or kids with backpacks with 80 MPH speed limit signs. Or STOP signs. Or two kids standing on each side of a parking lot entrance with backpacks that read NO PARKING.

Leave a comment

Blog moderation policy

Login

Allowed HTML <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre> Markdown Extra syntax via https://michelf.ca/projects/php-markdown/extra/

Sidebar photo of Bruce Schneier by Joe MacInnis.