What Cameras Bring to the Autonomous Vehicle Game that Lidar Doesn’t

Sep 25, 2020
Tesla CEO Elon Musk made waves when he said that any autonomous driving company that relies on lidar is doomed. Unsurprisingly, Tesla’s own self-driving technology uses cameras placed all around the vehicle, along with a radar sensor on the front bumper, to gather information about the environment. This shouldn’t, however, distract us from the fact that most of the other players in the autonomous vehicle market are betting on lidar being the most effective option. 
 
Do the engineers at Tesla know something these other companies don’t? Or is Musk simply doubling down on an investment his company made early on and is now reluctant to walk away from?  
 
For a long time, lidar’s main disadvantage was its cost. The technology, which bounces laser light off surrounding objects to create 3D point maps, relies on cumbersome hardware and a lot of sensitive equipment that hasn’t been easy to manufacture. Cameras, on the other hand, are everywhere, and the ones in self-driving vehicles aren’t all that different from the ones in your phone. As the technology matures, though, lidar keeps getting cheaper, which is why so many companies are betting on it as the most promising option for autonomous navigation.
 
Neural Nets and Coding Tangles
 
The main problem with cameras is that it takes a lot of computing power to make sense of the two-dimensional images they capture, while the point cloud models created by lidar offer rich data about the environment without the need for much downstream analysis. Basically, the processing that goes into creating lidar point cloud maps happens upfront, while the images recorded by cameras don’t offer much useful information until they’ve been run through neural networks: pattern-recognition systems trained through machine learning. That’s why Tesla is working so hard to gather more and more data to train its systems.
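To make that contrast concrete, here is a minimal, purely illustrative sketch in Python. The toy two-layer network below stands in for the much larger neural nets used in real perception stacks (the frame size, layer widths, and class count are all made up, and the weights are random rather than trained); the point is simply that a camera frame is just a grid of intensities until learned parameters interpret it.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A fake 64x64 grayscale camera frame: raw intensities, no geometry.
frame = rng.random((64, 64))

# Random weights stand in for learned parameters; in a trained system
# these would encode visual patterns like "car", "sign", "pedestrian".
W1 = rng.normal(scale=0.01, size=(64 * 64, 32))
W2 = rng.normal(scale=0.01, size=(32, 3))

hidden = relu(frame.reshape(-1) @ W1)   # learned feature extraction
scores = softmax(hidden @ W2)           # pseudo-probabilities over 3 toy classes
print(scores)
```

Everything meaningful here lives in the weights, which only become useful after training on huge amounts of labeled data, which is exactly the data-hungry step lidar largely sidesteps.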
 
What this means is that while the cameras themselves are relatively cheap, the software needed to turn camera data into actionable information will likely remain expensive for the foreseeable future. Computer vision based on neural networks is also tricky to debug, and trickier still to integrate with other systems, because the machine, in the process of learning, is essentially writing its own code. All this has led most engineers to conclude that lidar is probably the superior solution.
 
Still, lidar has some shortcomings of its own that will need to be overcome. For one, a lidar system determines the distance to objects from the interval between a pulse of light being sent and the arrival of its reflection. How can such a system figure out what a street sign says? It would know the sign was there from its shape and orientation, but any writing on it would be invisible. Lidar has the same problem distinguishing a red light from a green light: it picks up shape and distance, but not color.
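The underlying arithmetic is simple: the pulse travels out to the object and back, so the one-way distance is half the round trip at the speed of light. A minimal sketch (the timing value is invented for illustration):

```python
C = 299_792_458.0  # speed of light, in meters per second

def lidar_range(round_trip_seconds: float) -> float:
    """One-way distance implied by the round-trip time of a laser pulse."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after ~200 nanoseconds came from ~30 meters away.
print(lidar_range(200e-9))  # ~29.98
```

Note what the return carries: a distance along a known direction, nothing more. That is why shape and range come for free while text and color are simply absent from the signal.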
 
Whether lidar or cameras operate better in rain or snow is a matter of debate. It’s easy to see how models built from bouncing pulses of light could get tripped up by thousands of snowflakes descending to the ground. Then again, human eyes often struggle to pick out objects through mist, rain, or heavy snow. Suffice it to say, weather conditions that reduce visibility are a challenge for any sensor system.
 
Surface Knowledge vs. Deep Knowledge
 
One way to think about the difference between what the two technologies offer is that lidar provides information from a single category: space. Cameras, on the other hand, provide data from multiple categories, including space, color, weather conditions, textures, and pretty much anything else you can glean from an image. Granted, you may need more than one camera, along with some processing to measure parallax, to get a good gauge of relative distances, but the information is there to analyze.
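That parallax step is essentially triangulation: for a rectified stereo pair, depth falls out of the disparity between where the same point lands in each camera's image. A minimal sketch under standard pinhole-camera assumptions (the focal length, baseline, and disparity values are made up for illustration):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point shifted 20 pixels between cameras 0.5 m apart, focal length 800 px,
# sits about 20 meters away.
print(stereo_depth(800.0, 0.5, 20.0))  # 20.0
```

The hard part in practice isn't this formula; it's reliably matching the same point across the two images, which is another job that typically falls to trained software.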
 
You might even say that lidar’s strength derives from its reliance on that one type of information, which makes the technology simpler to work with: it provides a direct and robust path to three-dimensional environmental awareness. Camera data, meanwhile, is more complex and copious, and therefore more difficult to process and analyze. And before any processing, it is only two-dimensional.
 
From this perspective, lidar may be the most promising initial solution, while cameras offer more degrees of freedom for the future development of self-driving technologies. In other words, companies using lidar may be the first across the finish line in the race to full autonomy, but the companies using cameras may end up better positioned to make further advances down the road (pun intended?).
 
One possible upside for cameras that’s being largely overlooked is that they are probably better suited to generating the kind of data that can be sold to third parties like local governments. In one scenario, municipalities might offer discounted parking permits to transportation companies in exchange for data that helps them manage curb space or intersections. Lidar-generated data could be useful here as well, but, again, video footage offers several types of information beyond the spatial dimensions of a scene rendered by lidar.
 
Tesla already has a lot of resources and infrastructure devoted to achieving full autonomy, so it would be a mistake to count the company out regardless of which sensor technology it’s using. And even if another company beats Tesla to that initial milestone, nothing would preclude Tesla from eventually dominating the market.
 
The good thing from an outsider’s perspective is that multiple innovators are working on multiple potential pathways to full autonomy. This kind of competition and experimentation is bound to lead not just to the breakthroughs we’re already imagining, but to discoveries that haven’t yet occurred to anyone.
 

Source: https://www.convoytechnologies.com/post/what-cameras-bring-to-the-autonomous-vehicle-game-that-lidar-doesn-t