Automated vehicles now under development will need more than cameras and radar to operate effectively and safely – they will need a brain. Just as humans rely on thought processes to coordinate and direct their arms, legs, eyes and ears, autonomous vehicles need to be able to think – not just indiscriminately respond to sensor stimuli.
The key to failsafe autonomous driving is the ability to incorporate artificial intelligence (AI)-based deep-learning systems capable of training the vehicle to become smarter with experience – and ultimately safer than human-driven vehicles.
Many cars on the road today have basic safety features or ADAS functionalities based on cameras, LiDAR and other sensors – or on conventional computer vision approaches. While these systems offer basic detection of obstacles and identification of lane markings, they fall short in handling dynamic traffic situations or unpredictable driving events. For vehicles to learn about the vast range of scenarios they may encounter – and to adapt quickly and accurately – developers are looking to incorporate AI approaches that train the vehicle on common driving situations by exposing it to millions of instances.
AI allows the vehicle to manage huge amounts of data in real time – received from its cameras and sensors – to avoid objects and plot a path for the driver. These AI-driven elements are already being brought into cars by consumers via their smartphones – for example, voice-based search engines and in-car navigation already depend on a level of AI – while some infotainment systems now integrate connected features from external servers that run AI in the background.
The self-learning vehicle
AI methodologies such as deep learning, neural networks and machine learning play a vital role through the entire computational pipeline of a self-driving vehicle, enabling it to perceive its surroundings, predict how its environment will evolve, and learn the appropriate responses to the situations it encounters.
The strength of AI lies in complex algorithms that drive a convolutional neural network (CNN) fed by data from all sensors on a vehicle and from the changing road environment. Neural networks label every item or object relevant to autonomous driving, and some very robust solutions are being developed.
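As a purely illustrative sketch of this labeling step – the class list, scores and function names here are hypothetical, not Visteon's system – a trained network's per-region outputs might be turned into labels like this, with a stub standing in for the CNN itself:

```python
# Hypothetical per-frame labeling sketch. A real system would run a trained
# CNN here; this stub simply maps precomputed score vectors to class labels.
CLASSES = ["pedestrian", "bicycle", "vehicle", "traffic_light", "lane_marking"]

def label_detections(raw_scores):
    """Assign the highest-scoring class label to each detected region.

    raw_scores: list of per-region score vectors, one float per class
    (stand-ins for CNN outputs). Returns (label, confidence) pairs.
    """
    labeled = []
    for scores in raw_scores:
        best = max(range(len(CLASSES)), key=lambda i: scores[i])
        labeled.append((CLASSES[best], scores[best]))
    return labeled

# Two mock regions: one scoring highest as a pedestrian, one as a vehicle.
frame_scores = [
    [0.91, 0.05, 0.02, 0.01, 0.01],
    [0.03, 0.02, 0.88, 0.04, 0.03],
]
print(label_detections(frame_scores))
```

In practice the scores would come from a network trained on millions of labeled road scenes, and the label set would be far richer – but the principle of attaching a class and a confidence to every relevant object is the same.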
The algorithms become smarter as the vehicle experiences more situations that require analysis. Thus, AI can instill a defensive-driving element into the autonomous vehicle, anticipating that the truck ahead is going to change lanes or that oncoming traffic will need to swerve into the driver’s lane to avoid an inattentive pedestrian.
AI’s mind-reading challenge: What is the vehicle really thinking?
Reinforcement learning and end-to-end learning are very promising approaches, but for AI in automotive, several challenges still remain.
First, autonomous driving must have unprecedented levels of accuracy – it must be failsafe. To put it bluntly, the car cannot run every thousandth traffic light, however statistically insignificant that may sound. And getting to 99 percent accuracy is far easier than getting to 99.9 percent; each additional decimal place is harder to earn than the last. This requires innovative neural network algorithms and the ability to generate an enormous volume of data for training.
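A quick back-of-envelope calculation makes the scale concrete. The event counts below are illustrative assumptions, not measured data – assume a vehicle encounters roughly 100 traffic lights a day:

```python
# Back-of-envelope: expected failures at a given per-event accuracy.
# The traffic-light count is an illustrative assumption.
def expected_failures(accuracy, events):
    """Expected number of failed events given a per-event accuracy."""
    return (1.0 - accuracy) * events

lights_per_year = 36500  # ~100 traffic lights per day
for acc in (0.99, 0.999, 0.9999):
    misses = expected_failures(acc, lights_per_year)
    print(f"accuracy {acc:.2%}: ~{misses:.0f} missed lights per year")
```

Even at 99.9 percent, that is dozens of missed lights a year per vehicle – which is why each additional "nine" of accuracy matters so much.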
Second, the vehicle needs to be able to handle every road situation that it encounters. This requires training it for well-known as well as unpredictable road situations. One of the most challenging issues that AI developers experience is that the vehicle does what it is supposed to do, but they don’t know why. For example, a car may be trained to recognize a construction site by the red signs posting speed limit changes. It may turn out, however, that the system doesn’t really associate the signs with construction but rather interprets anything red as a construction site.
Another big challenge is that the rate of improvement in technology is greatly outpacing the creation of regulations, standards and insurance procedures for autonomous driving. To resolve this, automakers and developers must team up to create guidelines for validating and verifying machine learning in autonomous cars, helping ensure that what the car appears to have learned truly is what developers meant for it to learn.
Making autonomous driving failsafe through sensor fusion
The cornerstone of Visteon’s AI-driven approach to improving learning and increasing safety is the fusion of data from cameras, LiDAR, radar and other sensors at the raw, or signal, level. This is a departure from other developers, who fuse data only after it has been processed to the feature or decision level.
By fusing information directly from the sensors, data can be more easily layered. The CNN might label camera data so the system can distinguish a bicycle from a pedestrian, then fuse this at a higher level with LiDAR and/or radar data. The result is a comprehensive 3-D environmental model that delivers a very high level of accuracy – the first of its kind in the industry.
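To give a feel for how camera labels and LiDAR geometry can be combined, here is a toy sketch – the pinhole camera parameters, points and labeled regions are all made up, and real raw-level fusion is far more involved:

```python
# Toy fusion sketch: project 3-D LiDAR points into the image plane and
# attach the camera-derived label of the region each point lands in.
# Camera intrinsics, points and boxes below are illustrative assumptions.

FOCAL = 800.0          # hypothetical focal length, in pixels
CX, CY = 640.0, 360.0  # hypothetical principal point (1280x720 image)

def project(point):
    """Pinhole projection of a 3-D point (x, y, z in metres, z forward)."""
    x, y, z = point
    return (FOCAL * x / z + CX, FOCAL * y / z + CY)

def fuse(points, labeled_boxes):
    """Attach a 2-D image label to each 3-D point.

    labeled_boxes: list of (label, (u_min, v_min, u_max, v_max)) from the
    camera pipeline. Points whose projection misses every box get "unknown".
    """
    fused = []
    for p in points:
        u, v = project(p)
        label = "unknown"
        for name, (u0, v0, u1, v1) in labeled_boxes:
            if u0 <= u <= u1 and v0 <= v <= v1:
                label = name
                break
        fused.append((p, label))
    return fused

boxes = [("pedestrian", (600, 300, 700, 500))]
points = [(0.1, 0.2, 10.0), (-5.0, 0.0, 10.0)]
print(fuse(points, boxes))
```

The first point projects inside the pedestrian box and inherits its label; the second falls outside every box and remains unknown. A production system would work with dense point clouds, calibrated extrinsics and probabilistic association rather than a single bounding-box check, but the layering idea is the same.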
Like all new technologies, automotive AI is still on its own learning curve – but we are confident that truly intelligent solutions will make a failsafe autonomous vehicle possible.