Re: This raises a question over how we treat tech doing things instead of us.
I think that's too narrow a lens.
The problem is not so much that humans also make driving mistakes as that, arguably, we excuse those mistakes more readily for motoring than for other potential causes of injury or death. It took the best part of a century to establish reasonable safety standards for vehicles, and in many places you're still held to a lesser standard of responsibility if you kill someone with a vehicle through avoidable conduct than if you do so by other means. That historical laxity doesn't mix well with transferring whatever responsibility might remain onto the shoulders of software. You have to look at the social context in which the technology operates.
And there's a similar problem with AI. It's of course true that people have double standards and prejudices and make mistakes. Ultimately, though, they can be called upon to account for their actions and explain them. AI may make fewer mistakes and have fewer prejudices (though the evidence for that, so far, is not promising), but it can't show its working. That would be fine if there were an implicit assumption that you'd need robust review procedures run by qualified people - but if you need those people, the economic case for the technology is suddenly rather weaker, so there's an incentive to "believe" the technology is infallible.
I think it's wrong to imagine we can dispassionately reduce this to Benthamite principles: for better or worse, we are not utilitarian creatures, and we will only willingly adopt technology that can accommodate our natural contradictions. It's not a coincidence that some of the most enthusiastic proponents of both self-driving cars and AI have been accused of having sociopathic tendencies. The only way to remove the human element from technology is to remove the humans.