Recently, a colleague and I were discussing autonomous cars and the possibility of technology replacing human intelligence in taking critical decisions. With major automakers such as Ford, GM, Volkswagen, and BMW working to bring autonomous cars to the market in the next three to five years, debates are raging everywhere about whether we are ready for self-driving cars.
Humans can not only observe objects, but also gauge their characteristics, actions, and relative positioning in the surrounding environment, which enables logical decision-making. However, most state-of-the-art algorithms in machines detect only high-level information about objects, such as position and type. This raises the question: is this information sufficient to ensure appropriate responses under varying environmental, lighting, and traffic scenarios? The answer is no, but incorporating cognitive analysis can help enhance the navigational accuracy of advanced driver assistance systems (ADAS).
Today's autonomous vehicles are equipped with multiple sensors such as vision, radar, LIDAR (light detection and ranging), and ultrasonic sensors to ensure driver and passenger safety. Data from these sensors is analyzed simultaneously to understand the vehicle's surrounding environment. While the industry is making huge strides with advanced technology enabling multiple driver assistance solutions, there are still concerns about the ability of these systems to completely eliminate the need for human intervention.
Why is cognitive analysis a must for autonomous cars?
Systems designed with cognitive analysis imitate the human brain by using data-mining and pattern recognition algorithms, which facilitate problem-solving without human assistance. Research is underway to develop such cognitive systems for real-time applications, and several user assistance applications already on the market showcase cognitive computing capabilities. Such systems are trained on large datasets to develop deeper understanding and reasoning over unstructured data, derive useful insights, and respond more naturally, as a human would. Driver assistance and autopilot systems require such cognitive computing capabilities to mindfully handle dynamic operating conditions.
Autonomous cars need to contextually process data in order to ensure complete passenger safety. To better understand this, let's consider the role of vision sensors used within automotive safety solutions. To handle critical safety requirements under challenging and dynamic conditions, the algorithms should be enhanced from mere image processing or classification to image understanding by using cognitive analysis. Decades of research on deep learning, a branch of machine learning and artificial intelligence (AI), show encouraging results in solving some of the most challenging cognitive problems, which can be used to enhance vehicle safety.
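To make the step from image processing to image understanding concrete, here is a minimal sketch of the low-level building block involved: a convolution filter sliding over an image. The `conv2d` function, the Sobel-style kernel, and the synthetic frame below are all illustrative assumptions, not part of any production ADAS pipeline; a real network stacks many such filters and learns their weights from data.

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image (list of rows)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            row.append(sum(image[y + i][x + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A vertical-edge (Sobel) filter: the kind of low-level feature an early
# convolutional layer typically learns on its own during training.
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

# Synthetic 8x8 frame: dark left half, bright right half.
image = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]

response = conv2d(image, sobel_x)
print(response[0])  # strong response only at the vertical boundary
# -> [0.0, 0.0, 4.0, 4.0, 0.0, 0.0]
```

Image understanding begins where this ends: deeper layers combine such edge responses into corners, shapes, and eventually object-level concepts like "pedestrian" or "lane marking".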
So how is deep learning different from other machine learning methods? The key difference is that it understands high-level abstractions as well as the low-level features of data, using multiple filters at different resolutions. This means deep learning techniques help extract meaningful patterns from a complex raw data set by adjusting the parameters of the network through multiple iterations. Several studies show that features selected by deep learning machines can outperform human-engineered features for object detection. Researchers have also found that the actions performed by different objects can be detected by long short-term memory (LSTM)-based recurrent neural networks (RNNs), a variant of deep neural networks. These networks use spatiotemporal (spanning both space and time) context to predict future states.
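The spatiotemporal context mentioned above is carried by the LSTM's gated memory. The sketch below implements a single LSTM cell forward pass in plain Python and feeds it a toy trajectory of object positions; the `LSTMCell` class, its random weights, and the trajectory are all hypothetical illustrations (real systems learn the weights by backpropagation and add a readout layer to produce an actual prediction).

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Single LSTM cell (forward pass only), showing the gating that lets
    the network carry spatiotemporal context across time steps."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        def mat(rows, cols):
            return [[rng.uniform(-0.1, 0.1) for _ in range(cols)]
                    for _ in range(rows)]
        # One weight matrix and bias per gate: forget, input, candidate, output.
        self.W = {g: mat(n_hidden, n_in + n_hidden) for g in "fico"}
        self.b = {g: [0.0] * n_hidden for g in "fico"}
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = x + h                       # concatenate input and hidden state
        def gate(name, act):
            return [act(sum(w * v for w, v in zip(row, z)) + b)
                    for row, b in zip(self.W[name], self.b[name])]
        f = gate("f", sigmoid)          # forget gate: which context to drop
        i = gate("i", sigmoid)          # input gate: which new info to store
        g = gate("c", math.tanh)        # candidate cell values
        o = gate("o", sigmoid)          # output gate: which memory to expose
        c_new = [fv * cv + iv * gv for fv, iv, gv, cv in zip(f, i, g, c)]
        h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
        return h_new, c_new

# Feed a short trajectory of (x, y) object positions through the cell;
# the hidden state summarises the motion history at each step.
cell = LSTMCell(n_in=2, n_hidden=4)
h, c = [0.0] * 4, [0.0] * 4
trajectory = [(0.0, 0.0), (0.1, 0.05), (0.2, 0.1), (0.3, 0.15)]
for x, y in trajectory:
    h, c = cell.step([x, y], h, c)
print(h)  # motion summary; a readout layer would map this to a prediction
```

The forget and input gates are what distinguish an LSTM from a plain RNN: they let the cell decide, per time step, which parts of an object's motion history to keep and which to overwrite, which is why LSTMs cope well with long action sequences.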
Deep learning can help autonomous cars tread safely into the future
Deep learning offers significant advantages in terms of scalability and can automatically learn complex mapping functions. It is a sophisticated form of AI that has the ability to make self-driving cars think like humans. While the concept of deep learning has been around for decades, its usage in real-time applications was limited until recently. With the latest advancements in computing processors, data mining techniques, and algorithms to handle complex operations, deep learning techniques are back in focus. Daimler is using deep learning to teach its self-driving cars how to drive. Drive.ai, a Silicon Valley start-up that uses deep learning, has been approved to test its vehicles on California roads. Chip manufacturers are coming up with dedicated graphics processing units (GPUs) and processors to accommodate deep learning networks. With the advancements in high-performance computing technology, driver safety solution providers are focusing on engineering deeper networks with custom architectures, which can help autonomous cars understand scenarios in a human way.
Please share your thoughts below in the comments section, and keep following the blog for more insights on connected cars and autonomous cars.