The future of driving is something that we often wonder about. What exactly will cars look like in five, ten or fifty years' time? The rapid development and change that cars underwent in the 20th century have slowed somewhat since the turn of the millennium, at least on the outside; today it's all about crumple zones and more efficient technology. But the future? The answer is likely to be driverless cars.
The thought of just getting inside your car and telling it where to go, then sitting back and relaxing while you are delivered to your destination, is something many of us would love. Even if you enjoy driving, driverless cars have the potential to increase safety, decrease congestion and provide a more comfortable experience. Think about it: instead of having to focus on the road, you could catch up on some work, watch a movie or simply spend some quality time with the family.
But does autonomous automatically mean safer? Can we really trust a computer to look after our car, ourselves and our family?
The idea behind driverless cars is that the onboard computer navigates for you, using a variety of cameras, sensors and GPS systems to keep a safe distance from all other traffic on the road. There is a vision of a connected network in which cars communicate with each other to ensure they do not collide. The cars could even work together to eliminate congestion entirely, moving seamlessly from A to B and giving you an almost to-the-second arrival time.
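To make the "safe distance" idea concrete, here is a toy sketch of the kind of rule such a system might apply. This is an illustration only, not how any production vehicle works: real cars fuse camera, radar and GPS data, but a simple time-headway rule captures the basic logic.

```python
def safe_gap_m(speed_kmh: float, headway_s: float = 2.0) -> float:
    """Minimum following gap in metres, using a simple two-second time-headway rule."""
    return (speed_kmh / 3.6) * headway_s  # convert km/h to m/s, then distance covered in that time

def too_close(gap_m: float, speed_kmh: float) -> bool:
    """Should the car back off from the vehicle ahead?"""
    return gap_m < safe_gap_m(speed_kmh)

# At 70 km/h a two-second gap is roughly 39 metres,
# so a 25-metre gap would trigger the car to slow down.
print(too_close(25.0, 70.0))  # → True
```

A networked fleet would, in effect, be running checks like this continuously, for every car, at machine speed.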
But even without all cars being driverless, an autonomous car should still be able to transport you safely to your destination. The computer can react far quicker than the human brain, so it is able to spot potential dangers and hazards that we wouldn't even register until it was too late.
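To put rough numbers on that reaction-time advantage, consider total stopping distance: the distance covered while reacting plus the distance covered while braking. The reaction times and deceleration below are assumed round figures chosen for illustration, not measured data.

```python
def stopping_distance_m(speed_kmh: float, reaction_s: float,
                        decel_ms2: float = 7.0) -> float:
    """Reaction distance plus braking distance, in metres.

    decel_ms2 is an assumed hard-braking deceleration on dry tarmac.
    """
    v = speed_kmh / 3.6              # convert km/h to m/s
    reaction_dist = v * reaction_s   # distance travelled before braking starts
    braking_dist = v ** 2 / (2 * decel_ms2)
    return reaction_dist + braking_dist

# Assumed figures: ~1.5 s for an alert human, ~0.2 s sensor/processing latency.
human = stopping_distance_m(100, reaction_s=1.5)
computer = stopping_distance_m(100, reaction_s=0.2)
print(f"human: {human:.0f} m, computer: {computer:.0f} m")  # → human: 97 m, computer: 61 m
```

Even with identical brakes, the shorter reaction time alone saves tens of metres at motorway speeds, which can be the difference between a near miss and a collision.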
One of the concerns some drivers have about autonomous vehicles is how they assess danger. The ethical question has been in the news quite a bit too. Should the onboard computer prioritise the driver or a pedestrian? Should it make a judgement based on how old the potential casualty is?
This is often framed as the train track dilemma, a version of the classic trolley problem. A train is fast approaching a fork in the tracks, but there are obstacles on both routes: one will cause it to hit a child, the other two adults. You have a lever to choose which track the train takes. How do you decide?
The ethics behind these decisions are a big question when it comes to self-driving cars. Would you get into a car that would not necessarily prioritise your safety? If someone runs out into the road in front of you, should you be injured or even killed to save their life? The value of someone’s life cannot be calculated. So this dilemma is going to run and run, even after self-driving cars are commonplace.
Another big question is whether self-driving cars can work while there are other drivers on the road who are not in self-driving vehicles. If your car is assessing the road ahead, it will try to predict the various dangers and factors in front of it. But it cannot accurately assess the human factor. Can it tell if a driver is drunk? Or tired? Or distracted? Self-driving cars will also be assessing the performance of their own vehicle, and, almost by definition, self-driving cars will have the latest technology and engines. Older cars on the road will be less predictable and less responsive, so the evasive action a self-driving car takes may not be replicated by an older vehicle.
Another potential safety issue is how secure your self-driving car is. If it is driving autonomously, there's a possibility that someone with malicious intent could hack into the car's computer and take control. That might mean something relatively minor, like causing a traffic jam or changing your route, but it could also be used for something far more dangerous, like causing an accident.
Self-driving cars rely on live networks to operate. If the technology doesn't work, can you still drive a self-driving car?
Cars designed to be driven by network-enabled computers will operate differently from cars today. You can see that in current driver-assistance features, such as adaptive cruise control or automatic parking: these are somewhat awkward and still need a human behind the wheel, ready to take over if the system fails. The infrastructure to support self-driving cars will need to be developed much further before they are a viable alternative to traditional cars.
There are many challenges facing the implementation of self-driving cars, but that doesn't mean they cannot be overcome. Self-driving tests are already taking place on UK roads, and the UK Government has commissioned its own trials, yet we are still a long way from a fully autonomous national fleet. As the technology develops, though, driving will become safer. It's just a long road ahead until that happens.