Getting to Autonomous Vehicles Isn't Nearly as Exciting as the Journey
Everyone has an opinion about autonomous vehicles: my mother-in-law thinks they're nuts, my daughter wants one so she can Insta-Tweet-a-Snap instead of driving, my wife has no interest, while my friends and I are curious but in no rush to try one. Let the tech buffs take the lead: we're happy to wait a decade or two.
In truth, it doesn't matter what anyone thinks: autonomous vehicles are coming either way. And it's not because there's a huge public pull from potential passengers, or because the technical challenges have suddenly been overcome. What's exciting about autonomous vehicles is the challenge itself, and what can be learned along the way.
From sensors and artificial intelligence (AI) to security and safety, autonomous vehicles present problems on so many fronts that the innovation and methodical work required to realize them will reap benefits for vehicle automation as well as many associated industries. Already, the advances in AI required for feature recognition and classification have fueled an explosion in AI innovation that has spread to other fields, such as pattern recognition and feature extraction in medicine, surveillance, and security.
Still, there’s much work to be done. At a panel during NIWeek, some more opinions and insights came to the fore.
Kamal Khouri, vice president of ADAS for NXP Semiconductors, was philosophical and cautious.
“I want to focus on safety and security,” he said. “Scientists need to understand the impact of their discovery,” he added, referring to the need to be aware of what’s being created and its potential for both positive and negative uses. “But I am an engineer, not a scientist. I solve problems.”
The problems Khouri pointed out started with functional safety, which automotive engineers understand well as part of ISO 26262. However, two new dimensions of safety arise with autonomous vehicles: behavioral and environmental (Figure 1).
Figure 1: Automotive engineers are familiar with functional safety, but autonomous vehicles add two new dimensions: behavioral and environmental safety. (Image source: NXP Semiconductors)
Behavioral safety refers to obeying the rules of the road, such as stopping for crosswalks, cyclists, and wayward pedestrians, while environmental safety means reacting to anomalies in the surroundings, such as potholes, fallen trees, and other obstacles.
However, there are other dangers, such as security breaches of the V2X connected vehicle that could subject it to remote control and weaponization.
Bryant Walker Smith, Assistant Professor at the University of South Carolina School of Law, helped develop the now well-known SAE Levels for autonomous vehicles, and now advises cities, states, and countries on autonomous vehicle laws and restrictions. He quoted Elon Musk's observation that "Everything is beta." In that case, validation, verification, and simulation are just as important as sensing, he said.
However, he had a particularly pointed comment for designers and automotive companies. “Regulations are important, but the key isn’t asking if a product is safe, but whether the companies vouching for these technologies are trustworthy.” At each stage in the supply chain, from semiconductor providers to software, to subassemblies, customers should be asking, “What are you doing? Why do you think it’s safe? And why should we trust you?” Weighty questions indeed.
Smith’s request is that everyone performing self-driving tests should provide a safety report.
“Share successes and failures,” he said. “These technologies will create so many new opportunities, as well as great power… and responsibility.”
“Crashes are inevitable,” he said, “But that doesn’t mean each individual crash is unavoidable.” This reinforces the commitment to safety, he noted, “But it will not ultimately affect the kind of development we’re seeing.”
When accidents do happen, as many engineers (optimists and skeptics alike) predict they will, it will be a failure of the test environment and not of the vehicle, said Khouri. That puts an extra few pounds of responsibility on test engineers.
However, with regard to liability, Smith predicts it’ll be worked out much like accidents are today, based on laws and accident specifics.
Still, designers like assurances, so they are eager for standards to work to that would ensure safety and reduce liability. But Smith believes the industry needs more safety innovation, not more standards. Many in the industry are well aware of how standards, if introduced too soon, can stifle innovation.
Indeed, innovation is the operative word when it comes to autonomous vehicles. Developers are working out the neural networks and the weightings to assign to the various parameters. They’re looking to balance the depth perception of cameras with radar’s ability to see through fog and around corners. Lidar can see in 3D, while sonar has limited range. While there was a debate early on about which modality was better, it quickly became clear that due to the nature of the problems, all useful sensing modalities need to be fused to optimize for safety.
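The fusion idea described above can be illustrated with a minimal sketch: inverse-variance weighting, a common textbook approach in which each sensor's estimate counts in proportion to its confidence. This is not any vendor's actual algorithm, and the sensor names and numbers below are hypothetical placeholders.

```python
def fuse_estimates(readings):
    """Fuse (value, variance) pairs into one estimate.

    Each sensor is weighted by 1/variance, so a noisy sensor
    (e.g., a camera in fog) pulls the result less than a confident
    one (e.g., radar, which sees through fog).
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    variance = 1.0 / total  # fused estimate is tighter than any single input
    return value, variance

# Hypothetical range-to-obstacle readings in meters: (estimate, variance)
readings = [
    (42.0, 4.0),   # camera: fine resolution, but degraded by weather
    (40.0, 1.0),   # radar: robust in fog, coarser ranging
    (41.0, 0.25),  # lidar: precise 3D ranging in clear conditions
]

value, variance = fuse_estimates(readings)
```

The point of the sketch is the safety argument from the panel: because each modality fails differently, the fused estimate is both more accurate and more confident than any single sensor's reading.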
Innovation, enthusiasm, and time overcome skepticism
Automated driving is certainly in our future, even though many designers tell me otherwise, emphasizing the problems and liability issues. Managed fleet deployments are already underway in controlled environments, with Waymo recently announcing that it will partner with Walmart and others to bring customers to stores.
This is but one business model that few foresaw. It may work out for the young, the brave, the disabled, or the elderly, who would appreciate the freedom it provides and don't mind riding in the slow lane.
In the meantime, the impetus behind the endeavor smacks of space travel: many are determined to get there simply because they think they can. Along the way, the innovations and discoveries will benefit every industry.
