
MEMS Lidar for Driverless Vehicles Takes Another Big Step

Alibaba Builds MEMS Lidars into a Driverless Courier Vehicle

“We must understand what infrastructure is needed to support 1 billion parcels a day.” – Jack Ma, Alibaba Executive Chairman

Late in May, Alibaba Group’s Cainiao Network and RoboSense jointly announced G Plus, the world’s first unmanned logistics vehicle to incorporate solid-state Lidar. The announcement coincided with Alibaba’s Cainiao Network 2018 Global Smart Logistics Summit. According to the release: “Alibaba’s Cainiao G Plus is equipped with three RS-LiDAR-M1Pres, two in front and one in rear, to ensure the most powerful 3D perception for driving. This allows vehicles to clearly see the direction of travel: shape, distance, azimuth, travel speed, and direction of travel of pedestrians, cars, trucks, etc., as well as exact areas to drive, ensuring smooth flow of unmanned logistics vehicles in complex road environments.”

Alibaba’s Cainiao G Plus is equipped with three RoboSense MEMS Lidars, two in the front and one in the back.

The Cainiao Network is Alibaba’s logistics arm, established by the Alibaba Group and eight other companies in 2013. Alibaba Executive Chairman Jack Ma’s goal in creating this network is to eventually ensure single-day delivery across China and 72-hour delivery to the rest of the world. At the Global Smart Logistics Summit in Hangzhou where the G Plus vehicle was announced, Ma said, “…the logistics industry of the future, I believe, has to be brainwork, driven by intelligence.”

Currently, China’s domestic delivery industry relies on manual package handling by millions of couriers—about 5 million people work for China’s courier and food-delivery companies—and logistics costs reportedly consume about 15% of China’s gross domestic product. Alibaba’s Cainiao Network aims to reduce those costs. The Cainiao Network’s single-day and second-day delivery areas now cover 1,500 counties and districts within China.

During this event, Ma said, “Twelve or thirteen years ago, I said we’d see 1 billion packages per year. Nobody believed me. But today, weekly package volumes exceed 1 billion.” Meanwhile, Ma is watching one particularly staggering statistic: deliveries from e-commerce orders in China have gone from zero to 130 million parcels per day.

Cainiao Network’s G Plus delivery vehicle, which is about the size of a large utility cart and relies in part on three RoboSense Lidars for safe navigation in an all-too-human world, is one more link in the long delivery chain being forged to deliver those hundreds of millions of packages daily. Actually, Ma is looking out further. “We have to think clearly today. We must understand what infrastructure is needed to support 1 billion parcels a day,” Ma said.

RoboSense introduced the RS-LiDAR-M1Pre, its first MEMS solid-state Lidar, at CES 2018 early this year. It uses a MEMS micro-mirror to mechanically reflect and project the beams from “a few” solid-state lasers into the surrounding environment. The MEMS micro-mirror eliminates the large, expensive, mechanically spinning assemblies used in earlier Lidar designs.

The RS-LiDAR-M1Pre incorporates a laser, a 2D MEMS scanning mirror, and a time-of-flight sensor. The laser reflects off the scanning mirror, sweeping out a rectangular, projected cone of light. That light cone spans 63 degrees (horizontal) by 20 degrees (vertical) and stretches out to a detectable distance of about 200m. The Lidar’s angular resolution is 0.09 degrees by 0.2 degrees. The time-of-flight sensor detects the light reflected by obstacles and other objects. The round-trip photon travel time is converted to a distance and then combined with the angular data to produce a 3D point cloud within the laser-illuminated light cone.

Key specs for the RoboSense RS-LiDAR-M1Pre include 200m range and a 20 fps frame rate.
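
To make the time-of-flight math concrete, here is a minimal Python sketch of that conversion: one round-trip time plus a pair of mirror angles becomes one 3D point. The function, axis convention, and example values are illustrative assumptions on my part, not RoboSense’s actual processing pipeline.

import math

C = 299_792_458.0  # speed of light, meters per second

def tof_to_point(round_trip_s, azimuth_deg, elevation_deg):
    """Convert a round-trip photon time and the mirror's pointing angles
    into a 3D point (x, y, z) in the sensor frame (x forward, y left, z up)."""
    # Light travels out and back, so halve the total path length.
    distance_m = 0.5 * C * round_trip_s
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Spherical-to-Cartesian conversion within the laser-illuminated cone.
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return (x, y, z)

# Example: a return after ~667 ns from a target 10 degrees right, 2 degrees up.
print(tof_to_point(667e-9, azimuth_deg=-10.0, elevation_deg=2.0))
# -> roughly (98.4, -17.4, 3.5) meters

Repeat that conversion for every mirror position in the 63-by-20-degree scan, and you have one frame of the point cloud.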

Limitations of RoboSense’s initial MEMS Lidar

The RoboSense RS-LiDAR-M1Pre’s 200m range appears to be greater than that of mechanically scanned Lidars, which typically reach 100m to 120m. However, the RS-LiDAR-M1Pre’s field of view is limited to 63 degrees by 20 degrees, versus the 360-degree views generated by mechanically scanned sensors. That’s why the G Plus vehicle uses three RoboSense sensors, two in front and one in back: to widen the overall field of view.
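
To see how far three such sensors still fall short of a spinning Lidar’s 360-degree sweep, here’s a quick Python estimate of combined azimuth coverage. The mounting yaw angles are my own illustrative guesses; neither RoboSense nor Alibaba has published the actual sensor orientations on the G Plus.

HFOV = 63.0  # horizontal field of view per sensor, degrees

# Hypothetical mounting yaws: two angled front sensors, one facing rear.
mount_yaws = [-30.0, 30.0, 180.0]

def covered(azimuth_deg):
    """True if any sensor sees this azimuth (0 degrees = straight ahead)."""
    for yaw in mount_yaws:
        # Smallest signed angular difference between target and sensor axis.
        diff = (azimuth_deg - yaw + 180.0) % 360.0 - 180.0
        if abs(diff) <= HFOV / 2.0:
            return True
    return False

coverage = sum(covered(a) for a in range(360))
print(f"Covered azimuth: {coverage} of 360 degrees")  # 186 with these yaws

With this arrangement, the three sensors cover roughly half the horizon, concentrated ahead and behind, which suits a slow courier cart far better than a highway vehicle.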

The RS-LiDAR-M1Pre’s 20 fps frame rate compares well with mechanically scanned Lidars, but a vehicle traveling at 100 kph covers about 1.4 meters between successive frames at that rate. That’s too coarse a measurement for predicting object trajectories and avoiding pedestrians at that speed. You’ll still need high-frame-rate visual cameras as a parallel sensor input.
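
The arithmetic behind that 1.4-meter figure, as a trivial Python check (speeds and frame rates taken from the text):

def meters_per_frame(speed_kph, fps):
    # Convert km/h to m/s, then divide by frames per second.
    return (speed_kph / 3.6) / fps

print(meters_per_frame(100, 20))  # ~1.39 m between frames at highway speed
print(meters_per_frame(15, 20))   # ~0.21 m at the G Plus's 15 kph top speed

At the G Plus’s walking-pace speeds, 20 fps leaves only about 20 centimeters between frames, which is part of why this sensor suits a slow courier cart.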

Cainiao Network’s G Plus logistics delivery vehicle has a maximum speed of 15 kph, and it slows to 10 kph as soon as it detects a potential obstacle. At that speed, the G Plus is clearly not a road vehicle (even in gridlocked city traffic). Instead, it is designed for small-package delivery within Alibaba’s logistics centers and to Alibaba’s urban residential customers who have ordered online, perhaps using the existing sidewalks. The unmanned G Plus is reportedly being tested on roads around Alibaba’s Hangzhou headquarters, and the company expects to start building G Plus vehicles in volume by the end of 2018.

RoboSense has already posted specs for an upgraded version of its MEMS Lidar, the RS-LiDAR-M1. (“Pre” apparently stood for “pre-production.”) The new Lidar features a much larger field of view (120 degrees by 25 degrees), a faster 25 fps frame rate, and double the vertical angular resolution.

RoboSense’s improved RS-LiDAR-M1 delivers improved field of view, better angular resolution, and a faster frame rate.
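
For a rough sense of what those spec changes buy, here’s a back-of-the-envelope point-rate comparison in Python. It assumes uniform sampling at the stated angular resolutions and an unchanged 0.09-degree horizontal resolution on the M1; the actual MEMS scan pattern may well differ.

def point_rate(hfov_deg, vfov_deg, h_res_deg, v_res_deg, fps):
    # Points per frame (horizontal samples times vertical samples)
    # multiplied by frames per second.
    return (hfov_deg / h_res_deg) * (vfov_deg / v_res_deg) * fps

m1pre = point_rate(63, 20, 0.09, 0.2, 20)  # ~1.4 million points/second
m1 = point_rate(120, 25, 0.09, 0.1, 25)    # ~8.3 million points/second
print(f"M1Pre: {m1pre/1e6:.1f} Mpts/s, M1: {m1/1e6:.1f} Mpts/s")

By this crude measure, the M1 would return roughly six times as many points per second as the M1Pre.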

Lidar sensors will not be the be-all and end-all sensor technology for driverless vehicles. At frame rates of 20 fps (or even 25 fps), these sensors are too slow to serve as the primary sensors for driverless vehicles in at-speed road traffic. Visible-light cameras have much higher frame rates of 50 to 120 fps and can more effectively deal with real-world driving scenes. At the same time, I’m not buying the argument that visible-light cameras (and lots of them) are the only sensors you need to create a safe, driverless vehicle that can pilot today’s roadways alongside human-piloted vehicular traffic, bicyclists, skateboarders, riders of electric-powered scooters (there are lots of them darting into traffic from sidewalks in downtown San Jose these days), pedestrians, people in wheelchairs, and driverless delivery vehicles. Visible light does not contain all the information a driverless vehicle needs for safe navigation in a human-infested urban environment, especially in inclement weather (rain, snow, fog, and smog).

It seems clear to me that safe, driverless vehicles will need a complex, overlapping array of sensors to detect all sorts of obstacles getting in the vehicle’s way over a wide range of speeds, from snail-like pedestrians to the screeching, rubber-burning motorcycles that routinely launch themselves from street-side parking spaces straight into traffic below my condo windows at two in the morning.

When I discussed this topic in April—see “Maybe You Can’t Drive My Car. (Yet)”—I wrote, “Autonomous cars need to be safe enough to drive themselves. Period.”

Nothing much changes just because RoboSense has developed MEMS-based Lidars that end up being incorporated into Alibaba’s prototype package hauler. Better sensors and better processing are required.

All kinds of congratulations go to RoboSense for developing, fielding, and improving a MEMS Lidar for vehicular use and to Alibaba for incorporating the sensor into a prototype package-delivery vehicle. However, it still seems like very early days for these driverless vehicles, despite the hype and hoopla behind these announcements.
