Anthony Levandowski, an original developer of Google’s self-driving car, famously quipped, “The fact that you’re still driving is a bug, not a feature.” Many of us balk at the idea; we cannot imagine a vehicle, even one packed with cameras, radar and advanced software, being a better driver than a human being. Yet current prototypes of autonomous cars from leaders in the field, such as the Ford/Lyft partnership, GM’s Chevy Bolt and Renault-Nissan’s ProPILOT system, come so packed with sensors (radar, laser, cameras, ultrasonic and lidar) that on paper they can sense, see and register more hazards, more rapidly, than a human driver. Their brains are bigger, too. The leader in the field of autopilot computer systems is Nvidia’s Drive PX 2, a powerful platform, enhanced with significant machine-learning capabilities, that translates all the data from the sensors into a 3D model of the car’s environment.
Not only will autonomous cars have sensory and computational skills as good as, if not better than, a human’s; there is also a host of things they won’t have: distracting thoughts, the compulsion to text or call, rage at other drivers, a need to speed, boredom and fatigue, to name a few.
Legislators increasingly recognize that humans are not great drivers, and have begun to insist on certain technological enhancements as a way to reduce the risks, accidents and fatalities on the world’s highways. For example, in 2014 the US Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) announced that it would require all new vehicles under 4,500 kg to have rearview cameras by May 2018.
Cars today already include several semi-autonomous features such as self-braking systems and assisted parking. However, automobile manufacturers are still facing a sudden and sustained technological disruption with respect to self-driving technology, underpinned by advances in remote sensor and cloud technology (the Internet of Things). Much of this disruption is coming from tech (rather than car) companies such as Uber, Lyft and Google, which grasped the significance of self-driving cars arguably far sooner than the likes of Honda and BMW. The question remains: what does the future of self-driving cars look like, and when will it arrive?
McKinsey estimates that the first fully autonomous vehicle will be available in 2020, and other analysts claim that by 2030 as much as 60% of new car sales could be autonomous vehicles. The technological precursor is ADAS (Advanced Driver Assistance Systems); in that sense, our cars are already very smart.
ADAS, currently available either built in or as an optional add-on in some vehicles, is primarily a safety-enhancement system that alerts the driver to potential problems and avoids collisions by implementing safeguards and, when necessary, taking over control of the vehicle. Adaptive features may automate lighting, provide adaptive cruise control, automate braking, incorporate GPS and traffic warnings, connect to smartphones, alert the driver to other cars or dangers, keep the driver in the correct lane, reveal what is in blind spots and, of course, provide the ubiquitous parking assist.
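The “alert, then take over” logic described above can be sketched in a few lines. This is a hypothetical illustration of how an automatic-braking feature might escalate from warning the driver to braking; the time-to-collision thresholds and function names are assumptions for illustration, not taken from any production system.

```python
# Hypothetical sketch of ADAS forward-collision logic: estimate
# time-to-collision (TTC) from the gap to the car ahead and the closing
# speed, then escalate from "warn" to "brake". Thresholds are illustrative.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if the closing speed stays constant.

    Returns infinity when the gap is opening (no collision course).
    """
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def brake_decision(gap_m: float, closing_speed_mps: float) -> str:
    """Map TTC to an action: do nothing, alert the driver, or take over."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc < 1.5:   # imminent collision: system takes over and brakes
        return "brake"
    if ttc < 3.0:   # collision approaching: alert the driver
        return "warn"
    return "ok"

print(brake_decision(50.0, 10.0))  # 5 s of headway -> "ok"
print(brake_decision(20.0, 10.0))  # 2 s -> "warn"
print(brake_decision(10.0, 10.0))  # 1 s -> "brake"
```

A real system fuses radar, camera and ultrasonic readings before making such a decision, but the escalation pattern (monitor, alert, intervene) is the same one the paragraph describes.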
ADAS systems are seen as a necessary precursor to the legislative and quality-standard protocols that governments and manufacturers would need to adopt in an era of fully autonomous vehicles. With ADAS already one of the fastest-growing segments within automotive manufacturing, industry-wide quality standards such as ISO 26262 for functional safety and IEEE P2020 for image sensor quality, as well as communications protocols such as the Vehicle Information API, are already in place.
The next horizon for ADAS is the increased use of WiFi and 4G connectivity to collect vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) data. This would further enhance a car’s autonomous functions, feeding it site-specific data about adjacent cars, buildings and obstacles, as well as surface conditions and road infrastructure, so that the car itself can monitor and respond to its surrounding operating environment. Algorithms within the car’s own software can then increasingly govern functions such as braking, steering, routing and speed.
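To make the idea concrete, here is a minimal sketch of how broadcast hazard messages might feed a car’s speed decision. The message fields and the fusion rule are assumptions for illustration only; real V2V/V2I stacks use standardized protocols (e.g. DSRC or cellular V2X) with far richer message sets.

```python
# Illustrative sketch: a car slows to the lowest speed advisory reported
# by nearby vehicles or roadside infrastructure within its planning horizon.
# Field names and the fusion rule are hypothetical.

from dataclasses import dataclass

@dataclass
class HazardMsg:
    source: str         # "vehicle" (V2V) or "infrastructure" (V2I)
    distance_m: float   # distance ahead along the planned route
    advised_mps: float  # advised speed past the hazard, in m/s

def advised_speed(current_mps: float, msgs: list,
                  horizon_m: float = 500.0) -> float:
    """Return the speed to adopt: the lowest advisory within the horizon,
    never exceeding the current speed."""
    relevant = [m.advised_mps for m in msgs if m.distance_m <= horizon_m]
    return min([current_mps] + relevant)

msgs = [
    HazardMsg("infrastructure", 300.0, 8.0),  # e.g. icy bridge ahead
    HazardMsg("vehicle", 900.0, 5.0),         # beyond the horizon: ignored
]
print(advised_speed(25.0, msgs))  # slows from 25 m/s to 8 m/s
```

The point of the sketch is the data flow the paragraph describes: site-specific messages from other cars and roadside infrastructure arrive over the network and directly shape the car’s own speed and braking decisions.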
It is fair to say that what currently dampens the development of the autonomous car is not a lack of technological innovation, but growing concerns around privacy and information security. Autonomous cars would need to be interconnected via an internet-based cloud network, rendering them vulnerable to hackers. In 2015, Charlie Miller, a security researcher at Twitter, and Chris Valasek, director of Vehicle Security Research at IOActive, famously hacked a Jeep Cherokee via a security flaw in the car’s entertainment system. In this controlled experiment, the white-hat hackers were able to take over the car’s steering, brakes and transmission, not to mention the music on the radio. All of this was possible because Chrysler (and a host of other car manufacturers), in a bid to turn cars into smartphones, had already installed a range of connectivity and entertainment enhancements that give each car its own IP address.
The realities of increased urbanization, climate change, road safety and changing trends in the transportation market (people are looking for transportation services rather than simply cars) make self-driving cars inevitable. Revolutionary advances in software, connectivity and sensor design, together with the emergence of “smart” cities, hold the potential to transform our entire concept of mobility while curbing negative outcomes such as road accidents, congestion and carbon emissions. The ultimate goal is to improve our quality of life and safety. The race to be the first manufacturer to produce a legal, safe and secure autonomous vehicle is most certainly on.