What we’re not told about Driverless Cars
Fully driverless cars aren’t publicly available quite yet. And they probably won’t be for a year or two at least. But when they are, and assuming you could afford it, would you buy one? How would you decide?
Although different people will come to different conclusions, most of us would decide in the same way - that is, by weighing up the pros and cons. On the pro side, for example, driverless cars (DCs) are greener, safer and more time-efficient, while the cons include the fact that they're expensive, vulnerable to hacking and likely to shrink the job market.
But there’s one issue that, while not exactly top secret, doesn’t get much air time, so it doesn’t tend to feature in most people’s reasoning. Perhaps it should, though - because it’s an issue that could change our very notion of morality. Or, at least, the way in which we attribute moral responsibility. What are we talking about? Read on.
Imagine this: it’s 2025, and you’re shooting down the motorway in your self-driving car. Suddenly, a large, heavy object falls off a truck directly in front of you. Now, your car can’t stop in time to avoid the object, so it has three options: (a) carry straight on and hit the object, (b) swerve left and hit a single motorcyclist, or (c) swerve right and hit a car carrying four people (a driverless Range Rover Evoque).
What should your car do? Should it prioritise your safety by swerving left and hitting the motorcyclist (she will almost certainly be killed)? Or should it minimise danger to others by not swerving - even if that means hitting the large object and killing you? Or should it ‘compromise’ by hitting the Range Rover Evoque, which has a high passenger-safety rating?
Of course, if the same situation occurred today, in a manually operated car, whichever way we reacted would be understood as just that - a reaction, a panic-led decision with no forethought or malice. But if a programmer were to instruct the car to make the same move under pre-defined conditions - well, that looks more like a premeditated act. And premeditation is one of the defining characteristics of murder.
Now, to be fair, this is a ‘thought experiment’: we’ve described a highly unlikely situation in order to illustrate a point. But that doesn’t change the principle of the story. Even with DCs, accidents will still happen, and - when they do - their outcomes will be determined in advance by programmers or policymakers. And, as we’ve seen, they will have some tricky decisions to make.
One could, of course, suggest the application of seemingly fair decision-making principles, such as ‘always minimise harm’. But even that quickly leads to morally murky decisions. For example, let’s say we have the same initial setup as before, but now there’s a motorcyclist wearing a helmet to your left and another one without a helmet to your right. Which one should your robot car crash into? If you say the one with a helmet, because she’s more likely to survive (thereby minimising harm), then aren’t you penalising the responsible rider? But if you choose to hit the rider without a helmet, because he’s acting irresponsibly, then you’ve stepped way beyond the intention of the initial decision principle (to minimise harm) and your car has started to act as an arbiter of justice!
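To see how quickly such a rule hardens once it’s written down, here is a minimal sketch in Python. It is entirely hypothetical - the names, survival figures and scoring are invented for illustration, and bear no relation to any real autonomous-driving system - but it shows how a naive ‘minimise harm’ rule, which never mentions helmets at all, still ends up singling out the helmeted rider.

```python
# Hypothetical sketch only: invented names and numbers, not any
# manufacturer's actual policy or a real driving system.

from dataclasses import dataclass


@dataclass
class Target:
    description: str
    occupants: int
    survival_probability: float  # crude estimate between 0.0 and 1.0


def expected_harm(target: Target) -> float:
    """Naive 'minimise harm' score: expected number of fatalities."""
    return target.occupants * (1.0 - target.survival_probability)


def choose_target(options: list[Target]) -> Target:
    """Pick the option with the lowest expected harm."""
    return min(options, key=expected_harm)


options = [
    Target("rider with helmet", occupants=1, survival_probability=0.6),
    Target("rider without helmet", occupants=1, survival_probability=0.2),
]

# The rule never mentions helmets, yet it systematically selects the
# helmeted rider, because her estimated survival odds are better.
print(choose_target(options).description)  # -> "rider with helmet"
```

The point of the sketch is not the numbers but the structure: once the choice is expressed as a scoring function, somebody has to decide what gets scored, and that decision is made long before the accident happens.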
The ethical problems underlying this issue are profound. Why? Because, in scenarios like the ones above, the decision over which object to crash into is made not by a human in ‘panic mode’, but by a pre-programmed algorithm which systematically favours, or discriminates against, certain road users according to pre-set criteria. And those criteria will be decided by some administrator somewhere. This has not only profound moral implications, but profound political ones, too.
Such issues raise some key ethical dilemmas about how driverless vehicles should be built. For example, if you had to choose between a car that would always save as many lives as possible in an accident and one that would save you at any cost, which would you buy? And what happens if the decision-making algorithms start factoring in details of the passengers in the ‘target’ cars (someone with a criminal record versus a vicar, for example)?
Could it be that a random decision is still better than a pre-determined one? And, if not, who should decide whom to ‘target’? Programmers? Companies? Governments? Reality, it’s true, may not play out exactly like the thought experiments above, but that’s not the point. The point is that unless we isolate and address these key ethical issues satisfactorily (in the eyes of wider society), we could end up with a sense of morality heavily skewed, and perhaps even defined, by our lust for advanced technology.
So, now what do you think about buying that Driverless Car?
Right now, of course, Desperateseller.co.uk can’t offer you a new DC, far less a used one. But we do offer lots of other used cars. Why not check out our many great offers!