The principles of biomechanics form the basis upon which autonomous systems are built.
Consider the case of drone inspections, a domain where pilot precision and visual acuity once dictated success.
Though the degree of autonomy has surged forward, operational efficacy still comes down to the drone’s capacity to perceive, process, and respond to visual cues.
The difference is that it’s no longer the pilot’s hand-eye coordination that determines how the flight turns out, but the technology driving it.
Today, two remote sensing methods have become particularly prevalent for powering autonomous vehicles, whether cars, drones, or warehouse robots: computer vision and light detection and ranging (LiDAR).
A Multibillion-Dollar Take
While we refrain from telling you which holds the upper hand, here’s what the leading tech visionary of our era has to say.
“Lidar is a fool’s errand. Anyone relying on lidar is doomed.” – Elon Musk
True, the man has a knack for sparking controversy, not to mention his recent Twitter spats. But don’t let that carry you away: Musk has long been a prominent critic of LiDAR, and he has been vocal about Tesla’s commitment to driving innovation strictly through computer vision.
Before delving into why that is, let’s comb through a little primer on each remote sensing method.
LiDAR
In the image are conceptual models of three objects that commonly pose a risk of a car accident. Each model is composed of a cluster of dots, and that is the very essence of LiDAR.
A rotating sensor shoots out millions of high-intensity light pulses at surrounding objects and measures the time it takes for each pulse to make its way back. The collected data are used to create 3D point clouds, as represented in the illustration above.
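As a rough sketch of the time-of-flight math behind each dot in such a point cloud, here is a minimal example. The function name and the simple spherical scan geometry are illustrative assumptions, not any particular sensor’s API:

```python
# Sketch: turning one LiDAR pulse's round-trip time and firing angles
# into a 3D point. Assumes an idealized sensor at the origin.
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def pulse_to_point(round_trip_time_s, azimuth_rad, elevation_rad):
    """Convert a pulse's round-trip time and firing direction to (x, y, z)."""
    # The pulse travels to the object and back, so halve the total path length.
    distance = SPEED_OF_LIGHT * round_trip_time_s / 2.0
    # Spherical-to-Cartesian conversion along the firing direction.
    x = distance * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance * math.sin(elevation_rad)
    return (x, y, z)

# A pulse that returns after ~66.7 nanoseconds corresponds to an object ~10 m away.
point = pulse_to_point(2 * 10.0 / SPEED_OF_LIGHT, azimuth_rad=0.0, elevation_rad=0.0)
```

Repeating this conversion for millions of pulses per second, across the sensor’s full rotation, is what produces the dense 3D point clouds shown in the illustration.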