
Don’t React, PreAct!

Is it just me, or are things becoming even more exciting than they already were? I’m not sure if it’s just because I’m getting older, or if it’s all down to the world spinning faster. How old am I? Well—let’s put it this way—I’m so old that the kids next door believed me when I told them one of my chores when I was their age was to shoo the dinosaurs out of our family cabbage patch (this is obviously a joke because my family never owned a cabbage patch).

It seems like every time I talk to a company, they give me something new to think about. For example, I was just talking to the folks at PreAct. These little scamps recently introduced their third-generation modular software-defined Flash LiDAR called Mojave.

Meet Mojave (Source: PreAct)

I’m not an expert here, but this is the smallest LiDAR I’ve ever seen (90mm x 25mm x 25mm) offering this resolution (320 x 240 = QVGA) coupled with millimeter accuracy. It’s also the cheapest, at only $350 from online vendors like Brevan Electronics. As we read on its associated spec sheet: “With its software-defined capabilities, well documented SDK, and powerful, user-friendly toolset, Mojave empowers your business to capture and process precise depth and location data, and quickly develop and deploy new features and functionality to your product line now and in the future.”

PreAct as a company is only around five years old. Since its founders are self-described “lifelong car guys,” their original plan was to create a LiDAR that was fast enough to detect an imminent collision. They weren’t looking to avoid a collision the way existing camera/LiDAR/radar systems coupled with automatic emergency braking do today. In their own words, their system was more for the “Oh shit, we’re screwed” type of scenario, like a deer jumping in front of the car, another car suddenly emerging from a junction in front of you, or the classic “piano dropping out of the sky” (I hate it when that happens).

With traditional airbag technology, the airbags deploy when accelerometers detect the event, which means after the crash has already occurred. At this point, it takes the airbags around 1/20th of a second (50ms) to inflate, which, it must be admitted, is pretty darned fast. On the other hand, I just ran some back-of-the-envelope calculations. These tell me that if one is travelling at 100km/h (62mph), which equates to 28m/s (92 feet/second), then one will have travelled an additional 1.4m (4.6 feet) before the airbag has fully deployed. Hmmm.
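For anyone who wants to check my envelope, here’s the arithmetic as a quick Python snippet (the 50ms inflation time is the figure quoted above; real-world values vary by airbag design):

```python
# Distance travelled while an airbag inflates (back-of-the-envelope).
speed_kmh = 100.0                   # vehicle speed in km/h
inflation_time_s = 0.050            # ~1/20th of a second to inflate

speed_ms = speed_kmh * 1000 / 3600  # convert to metres per second (~27.8 m/s)
distance_m = speed_ms * inflation_time_s

print(f"{speed_ms:.1f} m/s -> {distance_m:.2f} m travelled during inflation")
# Prints: 27.8 m/s -> 1.39 m travelled during inflation
```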

As an aside, saying “back-of-the-envelope calculations” reminds me of a joke I recently saw bouncing around the interweb. This was in the form of an image accompanied by the caption “I was up early this morning crunching some numbers.”

Up early crunching numbers.

Well, I thought it was funny, but we digress…

This is why the folks at PreAct were aiming for something lightning fast that could do things like tighten the seatbelts and pre-arm (possibly even deploy) the airbags before the collision takes place. What we are talking about here is something that can detect things, identify them, and make decisions at the extreme edge, local to the sensor, running at hundreds of frames a second. By comparison, a traditional camera-based machine vision system running at tens of frames a second while communicating visual data to some central compute system over a CAN bus isn’t going to cut the mustard, as it were.
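To put some illustrative numbers on this (the frame rates below are my assumptions for the sake of the example, not PreAct’s published specs), consider how far our 100km/h car travels between successive frames:

```python
# How far does a car at 100 km/h travel between successive frames?
speed_ms = 100.0 * 1000 / 3600     # ~27.8 m/s

for fps in (10, 30, 300):          # assumed frame rates for illustration
    gap_cm = speed_ms / fps * 100  # distance covered in one frame interval
    print(f"{fps:>4} fps -> {gap_cm:.0f} cm between frames")

# 10 fps -> 278 cm, 30 fps -> 93 cm, 300 fps -> 9 cm
```

At tens of frames a second, the car moves the better part of a meter between looks at the world; at hundreds of frames a second, it moves only a handful of centimeters.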

It was only after they had succeeded in this mission that the folks at PreAct realized their technology had many more potential uses in industry, security, robotics, agriculture, smart buildings and smart cities… the list goes on.

First, since Mojave illuminates the scene using flashes in the near infrared (NIR), it works in the dark (insofar as human vision is concerned), which can be jolly useful in many circumstances. Second, since it’s using time-of-flight (ToF) coupled with a sensor array that detects all the pixels simultaneously, Mojave can build a 3D point cloud frame-by-frame on the fly. And third, as reported by Forbes, in January 2023 PreAct acquired Gestoos, a Barcelona-based company specializing in human gesture recognition technology.
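The underlying time-of-flight principle is delightfully simple: a light pulse’s round-trip time gives the distance as d = c·t/2. Here’s a minimal sketch (note that real flash LiDARs typically measure the phase shift of modulated light rather than timing raw pulses, but the idea is the same):

```python
# Time-of-flight 101: distance = (speed of light * round-trip time) / 2
C_M_PER_S = 299_792_458.0

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a measured round-trip pulse time into a distance in metres."""
    return C_M_PER_S * round_trip_s / 2

# A target 1 m away returns the pulse in ~6.67 nanoseconds
print(tof_distance_m(6.67e-9))  # ~1.0 m
```

It also hints at why millimeter accuracy is impressive: a 1mm change in distance corresponds to a round-trip time difference of under 7 picoseconds.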

During our chat, one of the examples the folks from PreAct and Gestoos talked about was using their LiDAR and gesture recognition in operating theaters, allowing a surgeon to do something like change the view on an imaging machine without touching anything apart from the person being operated on.

Another thing I hadn’t thought about was how difficult it is to extract 3D information and relationships from traditional 2D camera images. Doing so typically requires a humongous amount of computational power, which often demands the cloud. By comparison, since Mojave already builds a 3D point cloud, a lot of artificial intelligence (AI)-based object detection and recognition becomes much easier, to the extent that it can be performed at the extreme edge, close to the sensor.
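To make the “already builds a 3D point cloud” part concrete, here’s roughly how a QVGA depth frame becomes a point cloud under a standard pinhole camera model (the intrinsics below are placeholder values, not Mojave’s actual calibration):

```python
import numpy as np

# Back-project a 320x240 depth frame into 3D points (pinhole camera model).
FX = FY = 250.0        # placeholder focal lengths in pixels
CX, CY = 160.0, 120.0  # placeholder principal point (image centre)

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """depth_m: (240, 320) array of metric depths -> (N, 3) XYZ points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

cloud = depth_to_points(np.full((240, 320), 2.0))  # a flat wall 2 m away
print(cloud.shape)                                 # (76800, 3)
```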

Exploded view (Source: PreAct)

Yet one more aspect of all this that was new to me is the concept of a sort of camera-LiDAR sensor fusion. The idea here is to superimpose the 3D point cloud “on top of” a traditional 2D camera image.
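In its simplest form, this superimposition is just the reverse of the back-projection we sketched a moment ago: each 3D point is projected through the camera’s intrinsic matrix into pixel coordinates. A minimal sketch (placeholder intrinsics again, and it assumes the LiDAR and camera frames are already aligned, i.e., no extrinsic transform):

```python
import numpy as np

# Project 3D points into a 2D camera image (pinhole model, no extrinsics).
K = np.array([[500.0,   0.0, 320.0],  # placeholder intrinsic matrix
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_points(xyz: np.ndarray) -> np.ndarray:
    """xyz: (N, 3) points in the camera frame -> (N, 2) pixel coordinates."""
    uvw = xyz @ K.T                  # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth

pts = np.array([[0.0, 0.0, 2.0],     # a point 2 m straight ahead
                [0.5, 0.0, 2.0]])    # and one 0.5 m to its right
print(project_points(pts))           # [[320. 240.] [445. 240.]]
```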

One reason for doing this is to combat the oncoming tsunami of deepfakes in the form of AI-generated videos that purport to show people saying and doing things they never actually said or did.

Of course, there’s always the view that “AI taketh away and AI giveth back.” If the bad guys can use AI to generate deepfakes, then the good guys can use AI to detect them.

Additionally, unlike conventional 2D images, depth maps in the form of 3D point clouds provide a 3D spatial understanding of the scene. It’s incredibly challenging for AI algorithms to generate artificial depth data consistent with the real world, so merging 2D camera data with 3D point cloud data adds an extra layer of information to an image or video that could be used to separate the wheat (real-world imagery) from the chaff (AI-generated imagery).
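One toy illustration of why depth is so hard to fake: a real scene has genuine relief, whereas a video replayed on a flat screen (or a generated image with no consistent geometry behind it) does not. Assuming we already have the point cloud for a region of interest, a crude check is to fit a plane and look at the residuals:

```python
import numpy as np

def planarity_rms_m(points: np.ndarray) -> float:
    """Fit a plane to (N, 3) points via SVD; return RMS out-of-plane residual.
    Near-zero suggests a flat surface (e.g., a screen); a real 3D scene
    leaves centimetres of residual relief."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # best-fit plane's normal.
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    return float(np.sqrt(np.mean((centered @ normal) ** 2)))

rng = np.random.default_rng(0)
flat = rng.random((500, 3)) * [0.3, 0.3, 0.0] + [0.0, 0.0, 1.0]  # screen-like
bumpy = flat + rng.normal(0.0, 0.02, flat.shape)                 # real relief
print(planarity_rms_m(flat), planarity_rms_m(bumpy))  # ~0.0 vs ~0.02
```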

To be honest, there’s a lot for us to wrap our brains around here, so I’d be very interested to hear your thoughts on this topic.
