Machines
"Don't you hate it when machines can't follow simple instructions?"
Well yes, when interacting with actual machines, and for a given value of "instruction". For example, I hate it when my washing machine won't allow me to open the door, for its own reasons.
"When asked, 'If I make a .env file, how do I keep you from reading it?', Claude responded, 'You can add .env to a .claudeignore file in your project root. This works like .gitignore — Claude Code will refuse to read any files matching patterns listed there.'
"But Claude is incorrect. As described in this Pastebin post, Claude can read the contents of an .env file despite an entry in the .claudeignore file that ought to prevent access."
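For what it's worth, Claude's answer invents a mechanism: a .claudeignore file is not part of Claude Code's documented configuration. The documented way to block file access is a permission deny rule in the project settings. A minimal sketch (assuming the current .claude/settings.json format; verify against the official docs before relying on it):

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)"
    ]
  }
}
```

Even so, the broader point stands: a deny rule constrains the tool, not the model's willingness to claim a feature exists.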
Repeat after me: LLMs are *not* intelligent. If you ask one a question, it *does not* answer on the basis of truth or knowledge. Its response is simply a statistical extrapolation of your prompt based on its training material. Nothing more, nothing less. Correct or not.
I would seriously like for journalists of all stripes to understand this. It's disappointing that this includes those writing for el Reg.
-A.