When a self-driving car crashes, one just has to wonder about those robots. Are they really all they're cracked up to be? Or might they be just as cracked as the rest of us?
Should you have, this morning, been unreasonably detained by aggressive machines, may I tell you that Google's famed, futuristic, liberating, and ultimately superhuman machine, the self-driving Prius, was involved in a fender bender.
What seems evident from shots of the scene is that Google's robot machine ran into the back of another Prius. You might think that it was on robotic autopilot and this was some sort of mating ritual.
You might also think that a Google representative rushing to the defense of our future controllers--by telling Business Insider that a human had been driving--might smack of the convenience of being chauffeur-driven.
Google claims that a real, live human being was driving the car. When it comes to robot machines, what does "driving" really mean? With all things that are supposedly autonomous, there is still a human being somewhere in the system. But where, exactly?
When it comes to unmanned military drones, for example--or even Rupert Murdoch's-- there is still the question of human action or inaction.
iRobot CEO Colin Angle once explained to CNET's Jonathan Skillings: "You know, AI is not to the point where a machine should be making a life-or-death decision, and I wouldn't even be able to tell you when I thought that might actually come to pass."
So here we have a robotic Prius hitting a nonrobotic Prius. The news came out only because a passer-by happened to take a photograph. Is it unreasonable to imagine that the human who was "driving" stepped in because the robot was, well, having a moment?
And, though Google might--in a left-brained manner--want us to believe this was human error, its deftly phrased spokes-quote didn't suggest there was any error at all. Please bathe in the words of the Google rep: "One of our goals is to prevent fender benders like this one, which occurred while a person was manually driving the car."
So the "person" was "manually driving the car." But no word on whether the "person" made a mistake. Or whether the car did.
The implication is, though, that humans make errors, machines don't. This, some might imagine, ought to have been the Google motto from the very beginning. Others might imagine that it always has been.
However, as a weak and worried human, might I appeal to Google to release all recorded evidence of this accident--every log, every piece of data? That way, we can all be entirely clear that a careless human brought needless embarrassment to one of the world's most progressive companies. Or not.
I have e-mailed Google with that request. Just because, well, I've always been on the side of the humans, and we've been blamed for far too much lately.
Updated 3:51 p.m. PST: Google would only give me a further one-line statement. A spokesman said: "The car was in manual mode at the time. We have confirmed it in our logs." The spokesman, however, has not been willing to explain what actually happened.
However, an NBC Bay Area report featured a woman called Tiffany Winkelman, who claimed that this was a five-car accident and that hers was the third car to be hit. It allegedly involved three Priuses and two Honda Accords, with the self-driving Prius being the one that caused the shunt.
Some might wonder even more now what really might have occurred here.