The principles of biomimicry form the basis upon which autonomous systems are built.
Consider the case of drone inspections - a domain where pilot precision and visual acuity used to dictate success.
Though the degree of autonomy has surged forward, operational efficacy still comes down to the drone’s capacity to perceive, process, and respond to visual cues.
The difference is that it’s no longer the pilot’s hand-eye coordination that determines how a flight turns out, but the technology driving it.
Now, two remote sensing methods have become particularly prevalent for powering autonomous vehicles, whether cars, drones, or warehouse robots: computer vision and light detection and ranging (LiDAR).
A Multibillion-Dollar Take
While we refrain from telling you which holds the upper hand, here’s what the leading tech visionary of our era has to say.
“Lidar is a fool’s errand. Anyone relying on lidar is doomed.” – Elon Musk
True. The man has a knack for sparking controversy, not to mention his recent Twitter spats. But don’t let that sway you. Musk has long been a prominent critic of LiDAR, and he has been rather vocal about Tesla’s commitment to driving innovation strictly through computer vision.
Before delving into why that is, let’s comb through a little primer on each remote sensing method.
LiDAR
In the image above are conceptual models of three objects that commonly pose collision risks on the road. Each model is composed of a cluster of dots. That’s the very essence of LiDAR.
A rotating sensor shoots out millions of high-intensity light pulses at surrounding objects and measures the time it takes for each pulse to make its way back. The collected data are used to create 3D point clouds, as represented in the illustration above.
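To make the time-of-flight arithmetic concrete, here is a minimal Python sketch of how a single return becomes a 3D point. The function name, angle conventions, and sample values are illustrative assumptions, not the interface of any real sensor:

```python
import math

C = 299_792_458  # speed of light in m/s

def pulse_to_point(round_trip_s, azimuth_deg, elevation_deg):
    """Convert one LiDAR return into an (x, y, z) point.

    The pulse travels to the object and back, so the one-way
    distance is half the round-trip time multiplied by c.
    """
    distance = C * round_trip_s / 2
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Spherical -> Cartesian: project the range onto the ground
    # plane, then split into x/y by azimuth; z comes from elevation.
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A pulse that returns after ~333 ns corresponds to an object ~50 m away.
print(pulse_to_point(333e-9, azimuth_deg=30.0, elevation_deg=2.0))
```

Millions of such returns per second, swept across the scene by the rotating head, are what accumulate into the point cloud.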
Computer Vision
No one behind the steering wheel relies on 3D point clouds to navigate the roads. Our eyes and brains are simply primed to judge the approximate distance to surrounding objects and maintain a safe following space.
The same goes for computer vision. Neural networks are immersed in a constant stream of real-life images and videos and trained to detect, classify, and track objects in their surroundings. Doing so, in turn, provides the data that trajectory planning algorithms need to make informed decisions about when to turn, switch lanes, come to a dead stop, and so on.
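As a rough illustration of the perception half of that pipeline, the sketch below runs a pretrained object detector over a single camera frame. The choice of detector (torchvision’s COCO-pretrained Faster R-CNN), the file name, and the confidence threshold are assumptions for the example, not a description of any production stack:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a detector pretrained on COCO (80 everyday object classes).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = to_tensor(Image.open("camera_frame.jpg").convert("RGB"))

with torch.no_grad():
    # The model takes a list of image tensors and returns, per image,
    # a dict of bounding boxes, class labels, and confidence scores.
    detections = model([frame])[0]

# Keep only confident detections; a planner would consume these
# boxes (plus depth estimates) to decide whether to brake or steer.
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score > 0.8:
        print(f"class {label.item()} at {box.tolist()} "
              f"(conf {score.item():.2f})")
```

In a real vehicle, these detections would be fused with depth estimates and handed to the trajectory planner described above.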
The Moment of Truth
The allure of precise point clouds is undeniable. Yet, a glaring limitation lies beneath.
The whole premise of LiDAR hinges on emitted beams safely making it back to the sensor. Weather complications such as fog, snow, rain, or even heat haze can distort or deflect the pulses, rendering LiDAR blind.
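The physics behind that failure mode is easy to sketch. Under the Beer–Lambert law, the fraction of a pulse’s power that survives the round trip decays exponentially with range and with the atmosphere’s extinction coefficient; the coefficients below are illustrative placeholders, not measured values:

```python
import math

def return_fraction(range_m, extinction_per_m):
    # Two-way path: the pulse is attenuated on the way out and on the
    # way back, hence the factor of 2 in the exponent (Beer–Lambert law).
    return math.exp(-2 * extinction_per_m * range_m)

# Illustrative extinction coefficients (per metre); real values vary widely.
CLEAR_AIR = 0.0001
DENSE_FOG = 0.02

for r in (25, 50, 100):
    print(f"{r:>4} m: clear {return_fraction(r, CLEAR_AIR):.0%}, "
          f"fog {return_fraction(r, DENSE_FOG):.0%}")
```

Once the surviving fraction falls below the receiver’s noise floor, the return simply never registers - hence the blindness described above.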
But it isn’t just about reliability - or unreliability, rather. Musk’s deep skepticism of LiDAR stems more from the cost.
To put things into perspective, a typical LiDAR package costs between $35,000 and $100,000 per vehicle, whereas Tesla’s vision-led Full Self-Driving package comes in at $10,000.
Taking Computer Vision Beyond the Horizon
Unless a major technological breakthrough makes it possible to produce sensors at a fraction of today’s cost, LiDAR will inevitably remain a luxury.
In stark contrast, computer vision, which mimics the intricate workings of the human visual cortex while offering higher precision and extended range, is poised to revolutionize industries. Its cost-effectiveness shines especially in the realm of autonomous inspections, where drones have had to shoulder the sheer weight of LiDAR sensors.
What further distinguishes computer vision is that it achieves all this without requiring additional hardware. This very distinction paved the way for the launch of NearthWIND Mobile, an award-winning plug-and-play solution.
Since its launch, NearthWIND Mobile has been empowering operators and site managers to conduct inspections using off-the-shelf drones, readily available from local hardware stores or on Amazon.
If you're intrigued by the possibilities that computer vision and NearthWIND Mobile bring to the table, reach out to our specialists today to learn more.