You got it backwards.
Or maybe upside down.
Re: "only to see his premise undermined the next day by hapless security robot tumbling into a fountain."
That is solid support for his premise: something went wrong with a robot and something bad happened. His premise is that AI will have control over ever more resources, and when things go bad they could go very, very bad. In this instance, 'do not fall into the fountain' went wrong and became 'fall into the fountain'. If it had been 'do not launch the nuclear missiles' that went wrong, well... Somebody is telling you not to put that kind of power into the hands of an AI system without appropriate safeguards. The only wrinkle is that he is saying you cannot effectively put the safeguards in place after the fact with AI; you have to anticipate unknown problems in advance and put safeguards up *before* things go wrong.
Here is a tip from an old programmer (moi):
The crucial thing about the unexpected is that you don't expect it. In the case of AI, an 'assert()' statement is not going to cut it as error handling (not that it ever does).
A corollary is Murphy's Law -- "Anything that can go wrong will go wrong".