Danger, danger Will Robinson!

Brian Rogers

As I’ve always said, you can never trust a robot.

There are hundreds of well-researched documentaries, often rudely denounced as science fiction films, that prove beyond doubt that the robots will never be as reliable as the top echelon of living species.

When I say reliable creatures, I am referring here, of course, to Labradors.

A Labrador will do anything, willingly, with an enthusiastic tail-wag, particularly if you have a small morsel of food available.

Or even a rumour of edible material.

The rustling of a pie packet is usually enough to get at least 15 minutes’ worth of full-on attention from a Lab.

Reliability scale

Humans, however, don’t rate so highly on the reliability scale. But at least we are ahead of the robots, with one notable exception: Cherry 2000. Anyone unfamiliar with the movie, uh, I mean science documentary from 1987, should immediately search the Google machine and view it.

At the risk of being a spoiler, it shows that the robot was loyal to the end, and was eventually betrayed by a human.

And even though he flew off into the sunset with Melanie Griffith instead of Cherry, he will live to regret it.

No sooner would that plane have touched down on safe ground than she'd be in Sam's face about Cherry and his other previous robot encounters.

Now we learn this week that robot cars – so-called autonomous vehicles which are purported to be the way of the future with unsurpassed levels of safety – actually aren’t that smart after all.

A self-driving car in Arizona struck and killed Elaine Herzberg this week.

It’s believed to be the first fatal pedestrian strike by a self-driving vehicle.

Human drivers kill thousands

Experts have been quick to defend the emerging technology (but then they would – they’re keeping in mind their job prospects, after the robots take control). Dr Paul Ralph, a senior lecturer in computer science at the University of Auckland, says it is critical to “keep it in perspective”.

“People are using this incident to dismiss driverless cars as unsafe,” he says. “Human drivers have killed hundreds of thousands of people. A driverless car has killed one.”

Dr Ralph didn’t stop there. He has a few choice words about the company operating the car, saying: “Now that said, we’re talking about Uber, a company with a terrible reputation for unethical behaviour and technological corner-cutting.

“If Uber knew that their autonomous vehicles were running red lights and did not take reasonable steps to correct the mistake, the company should be held criminally responsible for this woman’s death.”

Government should fund

He continues: “The problem is that this research should be funded by governments, but national governments, including New Zealand’s, remain unwilling to invest in innovation at the scale demanded by the 21st century.”

Well said, Dr Ralph. It’s hard to imagine how the current NZ government is going to tip any cash into autonomous car research, while they’re cutting back on road spend and back-tracking on urgently-needed redevelopment, such as the abysmal Shite Highway 2 north of Tauranga.

Supervising driver

A Law Foundation author, Michael Cameron, reckons driverless vehicles will be safer than human-controlled vehicles.

“Some regulation is necessary, but any regulation that slows down the adoption of driverless technology will likely cost many more lives than it saves,” he says.

“The fact the vehicle wasn’t technically driverless and had a supervising driver constantly ready to take over is a detail likely to be lost in the response, but it is extremely important.”

Here at RR we think these people have been watching too much Space Family Robinson.

They’ve been suckered into getting friendly with the robots and forming emotional attachments to them, much the same as my wife lovingly interacts with her coffee machine. A former acquaintance of mine grew very romantically involved with her washing machine on spin cycle, but that’s a story for another day.

The point is, that’s the thing about humans. We don’t always need to make sense. Sometimes it’s better to ride on a bit of gut instinct than to blindly trust what appears to be right. Just ask Will Robinson.

Complacency rules

At least a human driver is totally responsible. There’s no grey area.

How is it ever going to work, with people semi-relying on automated cars, while supposedly still keeping a proper lookout, ready to intervene when the car stuffs up?

And if the robots hardly ever stuff up, as the experts claim, how long before that common human trait appears: complacency?

It’s like the guy who won a Darwin Award for putting his motorhome on cruise control while going down the back to make a coffee.

Complacency, for him, came surprisingly fast. So did the wall of the building that the RV smashed through at 60 mph.

The more reliable the robot cars seem, the quicker the humans supposedly in overall control will become distracted, complacent and no longer an effective back-stop.

We may as well have the Labrador drive the car. Most of them are better drivers than half the maniacs on NZ’s roads.

All we need is a GPS loaded with the soundtrack of a rustling pie packet or a fridge door opening, and you’ll have infallible attention focus. There’s just one small glitch with Labradrive yet to overcome: Nine out of ten trips terminate suddenly outside the butcher’s shop.


A guest post submitted to Whaleoil and edited by Whaleoil staff.

Guest Post content does not necessarily reflect the views of the site or its editor. Guest Post content is offered for discussion and for alternative points of view.