With the rise of autonomous vehicle technologies and self-driving innovations in the automotive space, there has been an explosion of intelligent software and solutions in the car. Analysts and experts have debated what the most significant impact of artificial intelligence-related technologies in the automotive space will be, but one key application no one disagrees on is safety. This was top of mind in a talk from HARMAN engineers Pratyush Sahay, Srinivas Kruthiventi SS and Rajesh Biswal at the recent NVIDIA GPU Technology Conference in Silicon Valley, which featured a specialized ‘Self-Driving and AI Cars’ program track.


HARMAN’s talk at the event, titled ‘Towards Scene Understanding in Challenging Lighting Conditions for ADAS Systems’, focused on our innovative work in using deep learning techniques to improve a driver’s visual perception under challenging or low lighting conditions. Current estimates suggest that roughly three times more fatalities occur during night-time driving than during the day, and the thermal cameras used in existing night vision systems to help avoid these fatalities can be very expensive. Our engineers elaborated on approaches for handling the severe illumination challenges posed by poor lighting, enabling efficient object detection in such situations while using GPU acceleration to achieve the throughput required by ADAS systems.

The core idea was multi-source knowledge distillation, where images from two different sources – a regular low-cost red, green and blue (RGB) camera and a high-cost thermal camera – were used to train the system, while the final prediction used only the low-cost RGB camera for object detection in low-light conditions. The intuition behind the approach is that a system trained on both RGB and thermal images builds a deeper understanding of objects in low light, and this understanding enhances the trained system’s ability to reconstruct information that is missing when capturing from an RGB camera alone. This allows the thermal camera to be removed at deployment time while still improving low-light detection performance. The team also presented compelling results (~12% improvement for low-quality images and 30% improvement for better-quality images) achieved by our systems on a publicly available low-light benchmark dataset.
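
As a rough illustration of how such a setup can work, the sketch below (PyTorch, with hypothetical module and parameter names; it is not HARMAN’s actual implementation) trains a student network on RGB images alone to mimic the features of a teacher network that sees both RGB and thermal inputs, with a simple classification head standing in for the full object-detection head.

```python
# Illustrative sketch only: hypothetical names, not HARMAN's actual implementation.
# A "teacher" network is trained (elsewhere) on concatenated RGB + thermal inputs;
# the "student" network sees only RGB, but is trained to match the teacher's
# features so the thermal camera is not needed at deployment time.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

def backbone(in_channels):
    """ResNet-18 feature extractor adapted to a given number of input channels."""
    net = resnet18(weights=None)
    net.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                          padding=3, bias=False)
    net.fc = nn.Identity()                 # expose the 512-d feature vector
    return net

teacher = backbone(in_channels=4).eval()   # RGB (3) + thermal (1), assumed pre-trained and frozen
student = backbone(in_channels=3)          # RGB only: the model that ships in the car
num_classes = 10                           # hypothetical number of object categories
detector_head = nn.Linear(512, num_classes)  # classification head stands in for detection

optimizer = torch.optim.Adam(
    list(student.parameters()) + list(detector_head.parameters()), lr=1e-4)

def distillation_step(rgb, thermal, labels, alpha=0.5):
    """One training step: task loss on RGB plus feature-matching loss to the teacher."""
    with torch.no_grad():                  # teacher only provides targets, no gradients needed
        teacher_feat = teacher(torch.cat([rgb, thermal], dim=1))
    student_feat = student(rgb)            # the student never sees the thermal stream
    logits = detector_head(student_feat)
    task_loss = F.cross_entropy(logits, labels)
    distill_loss = F.mse_loss(student_feat, teacher_feat)  # mimic the richer teacher features
    loss = task_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time only the student and its head run on the RGB stream, which is what allows the thermal camera to be dropped from the deployed system.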


As this work is largely focused on low-light perception, we envision a wide range of applications for this technology, for which a patent filing is currently in progress. Potential applications include augmented vision systems for ADAS alerts and improved scene illumination in low-light conditions, among others.

And that is not all: this technology could also have applications outside the automotive space, such as improving the capabilities of surveillance cameras with advanced analytics for video streams.

The Self-Driving and AI Cars track at NVIDIA’s GPU Technology Conference brought together Silicon Valley experts from automotive research labs, Tier-1 suppliers and startups that rely on artificial intelligence to develop self-driving vehicles. In addition to AI-related automotive sessions, the track highlighted innovations in areas such as the future of automotive design, engineering simulation, virtual showrooms and in-vehicle infotainment. The GPU Technology Conference (GTC) is an important event for GPU developers, showcasing the latest work across the computing industry in areas such as AI and deep learning, big data analytics, virtual reality, self-driving cars and more.