Technical publications – Visteon

Using Artificial Intelligence to Create the Brain for Failsafe, Intelligent Autonomous Driving

Automated vehicles now under development will need more than cameras and radar to operate effectively and safely – they will need a brain. Just as humans rely on thought processes to coordinate and direct their arms, legs, eyes and ears, autonomous vehicles need to be able to think – not just indiscriminately respond to sensor stimuli.

The key to failsafe autonomous driving is the ability to incorporate artificial intelligence (AI)-based deep-learning systems that train the vehicle to become smarter with experience – and, ultimately, safer than a human-driven vehicle.

Many cars on the road today have basic safety features or ADAS functionality based on cameras, LiDAR, radar and other sensors – often combined with conventional computer vision approaches. While these systems offer basic detection of obstacles and identification of lane markings, they fall short in detecting dynamic traffic situations or unpredictable driving events. For vehicles to learn the vast range of scenarios they may encounter – and adapt quickly and accurately – developers are looking to incorporate AI approaches that train the vehicle on common driving situations by exposing it to millions of instances.

AI allows the vehicle to manage huge amounts of data in real time – received from its cameras and sensors – to avoid objects and plot a path for the driver. Consumers are already bringing AI-driven elements into cars via their smartphones – voice-based search engines and in-car navigation, for example, already depend on a level of AI – while some infotainment systems now integrate connected features from external servers that run AI in the background.

The self-learning vehicle

AI methodologies such as deep learning, neural networks and machine learning play a vital role throughout the entire computational pipeline of a self-driving vehicle, enabling it to continuously build awareness of its surroundings, predict how its environment will change, and learn the appropriate responses to the situations it encounters.

The strength of AI lies in complex algorithms that drive a convolutional neural network (CNN) fed by data from all sensors on a vehicle and from the changing road environment. Neural networks label every item or object relevant to autonomous driving, and some very robust solutions are being developed.
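
As a concrete, deliberately simplified illustration of such a network, the sketch below (in PyTorch) maps a camera crop to one of a few object labels. The class names, input size and layer choices are illustrative assumptions, not Visteon's production design:

```python
# Minimal sketch of a CNN that labels objects relevant to autonomous
# driving. Sizes and class names are illustrative only.
import torch
import torch.nn as nn

class ObjectLabeler(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = ObjectLabeler()
frame = torch.rand(1, 3, 64, 64)           # one 64x64 RGB crop from a camera
scores = model(frame)                      # raw class scores
label = ["pedestrian", "cyclist", "vehicle"][scores.argmax(dim=1).item()]
```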

The algorithms become smarter as the vehicle experiences more situations that require analysis. Thus, AI can instill a defensive-driving element into the autonomous vehicle, anticipating that the truck ahead is going to change lanes or that oncoming traffic will need to swerve into the driver’s lane to avoid an inattentive pedestrian.

AI’s mind-reading challenge: What is the vehicle really thinking?

Reinforcement learning and end-to-end learning are very promising approaches, but several challenges remain for AI in automotive.

First, autonomous driving must have unprecedented levels of accuracy – it must be failsafe. To put it bluntly, the car cannot be allowed to miss even one traffic light in a thousand, however statistically insignificant that may sound. And getting to 99 percent accuracy is far easier than getting to 99.9 percent; closing that gap requires innovative neural network algorithms and the ability to generate an enormous volume of training data.
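
A quick back-of-envelope calculation shows why that last decimal place matters; the encounter count below is an arbitrary illustration, not fleet data:

```python
# Expected missed stops over many traffic-light encounters at different
# per-event accuracies. Numbers are illustrative.
encounters = 100_000  # e.g., a fleet's traffic-light encounters per day

for accuracy in (0.99, 0.999, 0.99999):
    expected_failures = encounters * (1 - accuracy)
    print(f"{accuracy:.5f} accuracy -> {expected_failures:,.0f} missed stops")

# 0.99000 accuracy -> 1,000 missed stops
# 0.99900 accuracy -> 100 missed stops
# 0.99999 accuracy -> 1 missed stops
```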

Second, the vehicle needs to be able to handle every road situation that it encounters. This requires training it for well-known as well as unpredictable road situations. One of the most challenging issues that AI developers experience is that the vehicle does what it is supposed to do, but they don’t know why. For example, a car may be trained to recognize a construction site by the red signs posting speed limit changes. It may turn out, however, that the system doesn’t really associate the signs with construction but rather interprets anything red as a construction site.

Another big challenge is that the rate of improvement in technology is greatly outpacing the creation of regulations, standards and insurance procedures for autonomous driving. To resolve this, automakers and developers must team up to create guidelines for validating and verifying machine learning in autonomous cars, helping ensure that what the car appears to have learned truly is what developers meant for it to learn.

Making autonomous driving failsafe through sensor fusion

The cornerstone of Visteon’s AI-driven approach to improving learning and increasing safety is the fusion of data from cameras, LiDAR, radar and other sensors at the raw, or signal, level. This is a departure from other developers, who fuse data only after it has been processed to the feature or decision level.

Fusing information directly from the sensors allows the data to be layered more easily. The CNN might label camera data so the system can tell a bike from a pedestrian, then fuse this at a higher level with LiDAR and/or radar data. The result is a comprehensive 3-D environmental model that delivers a very high level of accuracy – the first of its kind in the industry.
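
As a rough illustration of layering raw sensor data, the sketch below projects LiDAR points into a camera image and stacks the resulting sparse depth map with the RGB channels before any feature extraction. The calibration matrices and image size are placeholder assumptions, not real vehicle calibration:

```python
# Raw-level fusion sketch: LiDAR points are projected into the camera image
# with assumed calibration, producing a sparse depth channel stacked with
# the RGB frame. K (intrinsics) and T (extrinsics) are placeholders.
import numpy as np

H, W = 480, 640
K = np.array([[500.0, 0.0, 320.0],        # camera intrinsics (assumed)
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)                             # LiDAR-to-camera extrinsics (assumed)

def fuse_raw(rgb: np.ndarray, lidar_xyz: np.ndarray) -> np.ndarray:
    """Return an H x W x 4 tensor: RGB plus a sparse LiDAR depth channel."""
    pts = np.hstack([lidar_xyz, np.ones((len(lidar_xyz), 1))])  # homogeneous
    cam = (T @ pts.T).T[:, :3]                 # into camera coordinates
    cam = cam[cam[:, 2] > 0]                   # keep points in front of camera
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)    # pixel columns
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)    # pixel rows
    depth = np.zeros((H, W), dtype=np.float32)
    ok = (0 <= u) & (u < W) & (0 <= v) & (v < H)
    depth[v[ok], u[ok]] = cam[ok, 2]           # range in meters, sparse
    return np.dstack([rgb.astype(np.float32), depth])

fused = fuse_raw(np.zeros((H, W, 3)), np.random.rand(1000, 3) * 20)
```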

Like all new technologies, automotive AI is still on its own learning curve – but we are confident that truly intelligent solutions will make a failsafe autonomous vehicle possible.

A New Reality for Autonomous Driving

Drivers have always been instructed to keep their eyes on the road, hands on the wheel and foot ready to brake. Improving a driver’s perception of the vehicle’s surroundings – whether the driver is performing the driving task or delegating it, in part or in full, to a highly automated or autonomous vehicle – requires new ways of interaction. This is especially true when drivers and their passengers may be uneasy about their route and safety while not in control of the vehicle.

Alleviating these concerns is a primary goal of developers who are bringing augmented reality (AR) into vehicles. While AR is capable of addressing all human senses through visuals, sound and movement, the primary focus in the vehicle is to overlay information that augments the driving situation and surrounding objects in the line of sight of the driver. This informational, visual layer highlights objects on and near the road, reports the condition of the vehicle, and can significantly improve safety margins when drivers are in charge of a vehicle. At the same time, the very intuitive nature of AR helps build confidence in the vehicle’s automated features by keeping the driver informed and aware.

The real benefit of AR is that it operates intuitively and in real time. If vehicle sensors detect an object infringing on the vehicle’s lane, AR technology can instantly alert the driver and display the safest path around it. From a convenience perspective, AR also offers many enhancements – such as supporting park-assist features by indicating available spaces, or working with the vehicle’s infotainment system to overlay entertainment or points of interest tailored to the driver’s preferences.

A dynamic difference for drivers

Coupling AR with navigation holds particular promise for improving safety and driver confidence. Drivers will no longer need to shift their eyes from the road to a navigation screen; the route will be projected on the road ahead with clear markings indicating turns, directions or which lane to follow.

For increased safety, AR can also be coupled with driver and passenger monitoring systems. For example, sensors will evaluate the driver’s head position so that the system knows on which surface to display the information and, if the driver appears distracted, the system will issue a visual or audio alert. For advanced applications, AR can present a combined view of the road ahead with audio, light and video to create a multi-modal AR experience designed to enhance a driver’s awareness of the driving situation by alerting them to potential risks that would require action.
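
The logic of such a monitoring loop might look like the sketch below; the surface names, gaze angles and time thresholds are hypothetical illustrations, not Visteon parameters:

```python
# Gaze-driven display selection and attention alerting (illustrative only).
OFF_ROAD_LIMIT_S = 2.0
SURFACES = {"windshield_hud": 0.0, "cluster": -20.0, "center_display": -45.0}

def select_surface(gaze_yaw_deg: float) -> str:
    """Choose the surface whose nominal gaze angle is closest to the driver's."""
    return min(SURFACES, key=lambda s: abs(SURFACES[s] - gaze_yaw_deg))

def check_attention(off_road_seconds: float) -> str | None:
    """Return an alert modality if the driver has looked away too long."""
    if off_road_seconds > 2 * OFF_ROAD_LIMIT_S:
        return "audio_alert"
    if off_road_seconds > OFF_ROAD_LIMIT_S:
        return "visual_alert"
    return None

print(select_surface(-18.0))        # -> "cluster"
print(check_attention(3.1))         # -> "visual_alert"
```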

With vehicle-to-infrastructure communications, AR can also display a countdown indicating the number of seconds until a traffic light ahead will change, allowing the driver to decide whether to stop or slow down.
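
A minimal sketch of that stop-or-proceed decision, using basic kinematics and an assumed comfortable deceleration rate, might look like this:

```python
# Stop-or-proceed advice from a V2I countdown. Thresholds are illustrative,
# not drawn from any standard.
def advise(distance_m: float, speed_mps: float, seconds_to_red: float,
           comfort_decel: float = 3.0) -> str:
    time_to_light = distance_m / max(speed_mps, 0.1)
    stopping_distance = speed_mps ** 2 / (2 * comfort_decel)
    if time_to_light <= seconds_to_red:
        return "proceed"                 # will clear before the change
    if distance_m >= stopping_distance:
        return "slow and stop"           # a comfortable stop is possible
    return "proceed with caution"        # in the dilemma zone

print(advise(distance_m=80, speed_mps=14, seconds_to_red=7))   # proceed
print(advise(distance_m=120, speed_mps=14, seconds_to_red=5))  # slow and stop
```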

The importance of AR in autonomous vehicles

Today, no automobile on the road is equipped with AR. Many employ a head-up display (HUD), but these systems use projected images, rather than a dynamic information overlay. For autonomous vehicles, however, AR will be a prominent feature.

New automotive AR features have the potential to be a standard approach for Level 3 and 4 autonomous driving when drivers are able to delegate control to the vehicle and may not observe its operations. Displaying everything the driver needs to pick up where autonomous controls left off, AR will be critical when it is time for a driver to resume control after a period of autonomous driving – drawing immediate attention to the vehicle’s surroundings and any actions that need to be taken. Importantly, AR acts in a timely and contextual fashion, displaying the relevant information needed at any point in time.

To provide accurate information in autonomous driving situations, AR will need to be used in conjunction with ADAS technologies, such as short- and long-range radar and LiDAR. These sensors will, for example, enable the AR system to pinpoint the location of traffic lights, walkways and roadsides.

Visteon is developing an AR solution that addresses automakers’ cockpit electronics consolidation needs – enabling them to easily integrate branded human-machine interface (HMI) models and views with dynamic data from the SmartCore™ cockpit domain controller.

Incorporating driver monitoring technology, the system can observe the driver’s state and react dynamically to the situation. This contextual information is taken into account when fusing and synchronizing the incoming data, enabling the system to supervise the situation and choose the best output modality for its response.

Because the solution separates software from hardware, its software stack can be integrated into different electronic control units to generate the content of an AR head-up display. Visteon’s complete AR solution is one of the first applications to integrate AR capability into vehicles.

New Architectures for Intelligent Mobility

In today’s digital cockpit, electronic content, power demands, networking expansion and sensor requirements are accelerating at a tremendous pace, and the pressures on electrical/electronic (E/E) architectures will become even greater in autonomous vehicles. Legacy E/E architectures will simply not be able to manage the needs of autonomous driving, so those architectures must evolve to become much more flexible, powerful and – importantly – centralized.

Currently, vehicle E/E components and controller area networks (CANs) cannot easily be changed to meet the new, increased demands upon them. Consequently, developers are looking at ways to enable automakers to build new architectures from scratch. By separating software from hardware, engineers can create proprietary architectures for each vehicle design, providing distinctive differentiation in the marketplace while speeding the course toward fully autonomous vehicles.

Centralized computing for three domains

These new architectures will be built on centralized computing concepts. The electronic control units (ECUs) in today’s vehicles are each designed to perform a single function and offer just enough computing power to do that task. Visteon is among the first companies working to centralize large numbers of ECUs into three categories of comprehensive domain controllers:

  • A cockpit domain controller that consolidates separate cockpit electronics products on a single, multi-core chip – accessible through an integrated HMI – and controls the display of the instrument cluster, infotainment and head-up display (HUD).
  • An autonomous driving domain controller that addresses Level 3 to 5 autonomous driving functions such as automated driving on highways and country roads, inner-city driving and autonomous parking.
  • An I/O sensor network computer that handles all the signals from cameras, LiDAR, radar and ultrasonic sources. This controller distributes sensor signals to the cockpit computer and manages the steering of the vehicle.

Driving the need – and automaker preference – for a centralized computing approach, the following trends are fast becoming widespread:

  • The need for more, and scalable, computing power: Centralized computing, using domain controllers, provides unprecedented flexibility, power and speed for handling sensor data, graphics and multiple software domains. These controllers need to be scalable because – depending on usage and configuration – computing power needs can range from 1 TFLOP up to 20 TFLOPs.
  • Increased electronics content: Cockpit functionality will continue to expand and place further demands on computing power. Upcoming features include augmented reality, driver monitoring, e-mirror functionality, Android and enlarged display concepts for the next generation of the digital cockpit.
  • Fast time-to-market means success: Domain computing is complex. Therefore, a new approach is required – one that introduces easily configurable and adaptable middleware, configuration tooling and “heartbeat analysis” that can be used by customers and development partners.
  • Domain controller to supercomputer: Over time, the domain controllers being developed today will be fused into a single supercomputer with massive computing power. Moreover, during the next few years, as we learn more about the safety and cybersecurity required for domain controllers, more functionality will move to the cloud, where maximum flexibility and power will reside.
Visteon platforms and technology

How does Visteon approach this shift toward centralized computing? The trend toward electronic control unit (ECU) consolidation for cockpit electronics and autonomous driving has accelerated automaker demand for the SmartCore™ cockpit domain controller – the first scalable controller with a modular architecture. SmartCore™ is the base platform for a new era of centralized computing that will extend to autonomous driving controllers for Level 3 to 5 autonomous driving – an approach the automotive community recognizes as a critical enabler of fully autonomous driving.

Visteon’s product portfolio, underlying platforms and technologies provide the building blocks for a scalable, centralized computing approach.

The Digital Cockpit: Powering the Future of Mobility

Cockpit electronics has become the new competitive battleground for automakers, as it is increasingly the most important aspect of the vehicle from the consumer’s viewpoint. Rich infotainment capabilities offering web services such as smart assistants and streaming media, larger and higher-resolution displays, all-digital instrument clusters, driver monitoring, and windshield head-up displays are defining the digital cockpit of today.

Powering this new era of mobility are new centralized computing approaches that consolidate previously separate domains in the vehicle cockpit.

The cockpit domain controller architecture that integrates all of these different and disparate capabilities onto a single multi-core system-on-chip and electronic control unit is rapidly becoming the “cockpit computer” of the vehicle. Software complexity, which was already growing rapidly in each of the more complex cockpit systems, is expanding exponentially as a result.

Digitization offers a new view from the cockpit

Instrument clusters, head-up displays, and information and entertainment systems with larger and higher resolution displays are all elements of the emerging digital experience for drivers and passengers. The key to making all these digital functions practical is to stop considering them as discrete systems, developed in isolation. Instead, these systems must be viewed as a single canvas – the digital cockpit.

One of the most important aspects of the digital cockpit is its ability not just to support a wide range of embedded functionality but also to enable occupants to bring an increasing amount of digital content into the vehicle from personal devices, or the cloud, via the vehicle’s computerized systems. The consumer sector has far outstripped the auto industry in bringing digital technology to market and in constantly updating and upgrading its capabilities. If a device has a display, consumers are conditioned to expect to be able to download information – and automakers are now welcoming this consumer-focused approach in the cockpit.

The digital cockpit becomes the digital interior

Central to the digital cockpit are larger, brighter, high-resolution displays with more design and graphics capabilities than ever. These displays integrate multiple domains such as driver information, infotainment and head-up displays, and pull a huge amount of content from consumer devices, the cloud, vehicle sensors and the road infrastructure – all of which can be accessed by the driver through one seamless interface.

As assisted and autonomous features free up the driver’s time to engage with the cockpit in new ways, large display surfaces will become a dominant design feature in the vehicle interior and the primary way occupants interact with the all-digital cockpit.

Increased functionality and design flexibility enable displays to do more with their digital feeds and allow automakers to differentiate the user experience. This new generation of digital cockpits benefits greatly from its flexibility in evolving the user experience. Regularly adding new functionality keeps occupants aware and informed during periods of assisted driving, with camera systems, ambient lighting and haptic feedback among the many possibilities for supporting occupant monitoring, object recognition and autonomous driving.

Beyond touch screens, smart surfaces will turn just about any interior part into an interactive spot for activating lights, readouts, images or graphics on doors, seats, windows and instrument panels.

– Qais Sharif, VP Global Product Management – Driver Information & Displays

Transforming the digital cockpit into a computing platform

Addressing the digital cockpit’s increasing impact on processing power, in-vehicle electronics complexity, packaging space and cost, Visteon is building the foundation for an integrated, all-digital vehicle interior with domain controller technology.

SmartCore™ is the first cockpit domain controller to use advanced virtualization technology to integrate several cockpit domains on one powerful system-on-chip (SoC). Content fed from multiple domains running on different operating systems can be shown on any connected display in the cockpit.

The SmartCore™ architecture is fully scalable and cyber-secure through hardware-enabled virtualization of the different cores and controlled firewalls. This enables independent domains with different levels of functional safety (ASIL) requirements – such as assisted driver systems, driver monitoring and augmented reality – to operate separately and securely.
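
To illustrate the isolation idea (and only the idea – this is not SmartCore’s actual configuration format), the sketch below pins each domain to dedicated cores, records its ASIL level and verifies that no core is shared between domains:

```python
# Hypothetical partition table illustrating static domain isolation.
PARTITIONS = {
    "instrument_cluster": {"cores": {0, 1}, "asil": "B"},
    "infotainment":       {"cores": {2, 3}, "asil": "QM"},
    "driver_monitoring":  {"cores": {4},    "asil": "B"},
}

def validate(partitions: dict) -> None:
    """Fail if two domains are assigned the same physical core."""
    seen: dict[int, str] = {}
    for name, cfg in partitions.items():
        for core in cfg["cores"]:
            if core in seen:
                raise ValueError(f"core {core} shared by {seen[core]} and {name}")
            seen[core] = name

validate(PARTITIONS)   # raises if the static partitioning is violated
```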

Today’s average car operates with more than 50 separate electronic control units (ECUs), and some use more than 100. Advanced driver assistance systems (ADAS) are pushing this figure even higher. Visteon’s SmartCore™ cockpit domain controller provides a compelling system solution to drive the next generation of cockpit architectures.

– Marcus Wärmer, Director, SmartCore™ platform

SmartCore™: The new standard for centralized computing

Using Visteon’s first-to-market virtualization technology, SmartCore™ shares resources such as central processing, memory and other system components of a typical SoC setup across the different operating systems, and the solution is also fully capable of integrating any other virtualization technology. Advanced virtualization of the graphics processing unit (GPU) makes it possible for all domains to drive high-definition displays more effectively and enables dynamic sharing of graphics content.

SmartCore™ is the base platform for a new era of centralized computing in the cockpit. It supports the increase in the number, size and resolution of high-end displays with a scalable and modular architecture.

It is the first cockpit domain controller that meets the high demand for scalable concepts delivering up to 20 TFLOPs by leveraging multi-core SoC stacks. Additional ECU and software integration will result in further opportunities for cost reductions for automakers.

To enable faster configuration, integration and time to market, Visteon has created a set of tools for developing and verifying software used in domain controllers. SmartCore™ Studio simplifies cockpit domain configuration for automakers, while SmartCore™ Runtime is a scalable middleware solution that enables communication between different domains. It also provides the system HMI to enable sharing of graphical assets across different displays.
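
The actual SmartCore™ Runtime API is not documented here, so the toy publish/subscribe bus below is meant only to convey the kind of inter-domain messaging such middleware provides; all names and semantics are invented:

```python
# Toy inter-domain message bus (illustrative, not the SmartCore Runtime API).
from collections import defaultdict
from typing import Callable

class DomainBus:
    def __init__(self):
        self._subs: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, payload) -> None:
        for handler in self._subs[topic]:
            handler(payload)

bus = DomainBus()
# The cluster domain listens for navigation hints from the infotainment domain.
bus.subscribe("nav/next_turn", lambda p: print(f"cluster shows: {p}"))
bus.publish("nav/next_turn", "turn left in 200 m")
```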
