
Don’t React, PreAct!

Is it just me, or are things becoming even more exciting than they already were? I’m not sure if it’s just because I’m getting older, or if it’s all down to the world spinning faster. How old am I? Well—let’s put it this way—I’m so old that the kids next door believed me when I told them one of my chores when I was their age was to shoo the dinosaurs out of our family cabbage patch (this is obviously a joke because my family never owned a cabbage patch).

It seems like every time I talk to a company, they give me something new to think about. For example, I was just talking to the folks at PreAct. These little scamps recently introduced their third-generation modular software-defined Flash LiDAR called Mojave.

Meet Mojave (Source: PreAct)

I’m not an expert here, but this is the smallest LiDAR I’ve ever seen (90mm x 25mm x 25mm) with this resolution (320 x 240 = QVGA) coupled with millimeter accuracy. It’s also the cheapest at only $350 from online vendors like Brevan Electronics. As we read on its associated spec sheet: “With its software-defined capabilities, well documented SDK, and powerful, user-friendly toolset, Mojave empowers your business to capture and process precise depth and location data, and quickly develop and deploy new features and functionality to your product line now and in the future.”

PreAct as a company is only around five years old. Since its founders are self-described “lifelong car guys,” their original plan was to create a LiDAR that was fast enough to detect an imminent collision. They weren’t looking to avoid a collision in the way that existing camera/LiDAR/radar systems coupled with automatic emergency braking do today. In their own words, their system was more for the “Oh shit, we’re screwed” type of scenario, like a deer jumping in front of the car, another car suddenly emerging from a junction in front of you, or the classic “piano dropping out of the sky” (I hate it when that happens).

With traditional airbag technology, the airbags deploy when the event has been detected by accelerometers, which means after the crash has occurred. At this point, it takes the airbags around 1/20th of a second (50ms) to inflate, which—it must be admitted—is pretty darned fast. On the other hand, I just ran some back-of-the-envelope calculations. These tell me that if one is travelling at 100km/h (62mph), which equates to 28m/s (92 feet/second), then one will have travelled an additional 1.4m (4.6 feet) before the airbag has fully deployed. Hmmm.
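
For anyone who wants to play along at home, here’s that back-of-the-envelope calculation as a snippet of Python (the numbers are the ones from the paragraph above, nothing more):

```python
# Back-of-the-envelope check: how far does a car travel while an
# airbag is inflating? (Values from the article; this is a toy
# calculation, not a crash-safety model.)

speed_kmh = 100.0                    # 100 km/h (62 mph)
speed_ms = speed_kmh * 1000 / 3600   # ~27.8 m/s (~92 feet/second)

inflation_time_s = 1.0 / 20.0        # 1/20th of a second (50 ms)

distance_m = speed_ms * inflation_time_s
print(f"{distance_m:.2f} m travelled during inflation")  # ~1.39 m (4.6 feet)
```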

As an aside, saying “back-of-the-envelope calculations” reminds me of a joke I recently saw bouncing around the interweb. This was in the form of an image accompanied by the caption “I was up early this morning crunching some numbers.”

Up early crunching numbers.

Well, I thought it was funny, but we digress…

This is why the folks at PreAct were aiming for something lightning fast that could do things like tightening the seatbelts and pre-arming—possibly deploying—the airbags before the collision takes place. What we are talking about here is something that can detect things, identify them, and make decisions at the extreme edge, local to the sensor. Something that’s running at hundreds of frames a second. By comparison, a traditional camera-based machine vision system running at tens of frames a second and communicating visual data to some central compute system over a CAN bus isn’t going to cut the mustard, as it were.
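
To put some rough numbers on this, consider how far a vehicle moves between consecutive frames at different frame rates (the 30 fps and 300 fps figures below are illustrative stand-ins for “tens” and “hundreds” of frames a second, not PreAct specifications):

```python
# Distance covered between consecutive frames at ~100 km/h.
# Frame rates are illustrative, not PreAct specifications.

speed_ms = 28.0  # ~100 km/h, as above

for fps in (30, 300):
    metres_per_frame = speed_ms / fps
    print(f"{fps:>3} fps -> {metres_per_frame:.2f} m between frames")

#  30 fps -> 0.93 m between frames
# 300 fps -> 0.09 m between frames
```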

It was only after they had succeeded in this mission that the folks at PreAct realized their technology had many more potential uses in industry, security, robotics, agriculture, smart buildings and smart cities… the list goes on.

First, since Mojave illuminates the scene using flashes in the near infrared (NIR), it works in the dark (insofar as human vision is concerned), which can be jolly useful in many circumstances. Second, since it’s using time-of-flight (ToF) coupled with a sensor array that detects all the pixels simultaneously, Mojave can build a 3D point cloud frame-by-frame on the fly. And third, as reported by Forbes, in January 2023 PreAct acquired Gestoos—a Barcelona-based company specializing in human gesture recognition technology.
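
The ToF part is delightfully simple in principle: light makes the round trip from sensor to target and back, so the distance is half the round-trip time multiplied by the speed of light. Here’s a minimal sketch (real flash LiDAR measures pulse timing or phase shift per pixel, but the arithmetic is the same):

```python
# Time-of-flight in one line: distance is half the round trip.

C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to the target given the measured round-trip time."""
    return C * round_trip_s / 2.0

# Light returning after ~10 nanoseconds means a target ~1.5 m away:
print(f"{tof_distance_m(10e-9):.2f} m")  # 1.50 m
```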

During our chat, one of the examples the folks from PreAct and Gestoos talked about was using their LiDAR and gesture recognition in operating theaters, where the surgeon might wish to control something, such as changing the view on an imaging machine, without touching anything apart from the person being operated on.

Another thing I hadn’t thought about was how difficult it is to extract 3D information and relationships from traditional 2D camera images. Doing so typically requires a humongous amount of computational power, which often means going to the cloud. By comparison, since Mojave already builds a 3D point cloud, a lot of artificial intelligence (AI)-based object detection and recognition becomes much easier, to the point where it can be performed at the extreme edge, close to the sensor.
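
To see why native depth helps so much, consider that separating a nearby object from the background in a depth map can be as simple as applying a threshold, whereas doing the same from a 2D image typically needs a trained network. A toy sketch with made-up values:

```python
# With per-pixel depth, foreground/background separation is a
# threshold, not a neural network. Toy data: a near object (~1 m)
# in front of a far wall (~4 m).

import numpy as np

depth_m = np.array([
    [4.0, 4.1, 4.0, 4.2],
    [4.0, 1.2, 1.1, 4.1],
    [4.1, 1.1, 1.2, 4.0],
])

foreground = depth_m < 2.0                    # boolean mask of "near" pixels
print(foreground.sum(), "foreground pixels")  # 4
```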

Exploded view (Source: PreAct)

Yet one more aspect of all this that was new to me is the concept of a sort of camera-LiDAR sensor fusion. The idea here is to superimpose the 3D point cloud “on top of” a traditional 2D camera image.
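
Mathematically, this superimposition is the classic pinhole projection: each 3D point is divided by its depth and scaled by the camera’s focal length. The sketch below assumes made-up intrinsics for a QVGA (320 x 240) image; the real values would come from calibrating the camera in question:

```python
# Projecting camera-frame 3D points onto a 2D image plane
# (pinhole model). Focal lengths and principal point are
# illustrative assumptions, not calibrated values.

import numpy as np

fx = fy = 500.0         # focal lengths in pixels (assumed)
cx, cy = 160.0, 120.0   # principal point of a 320 x 240 image

def project(points_xyz: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

pts = np.array([[0.0, 0.0, 2.0],    # straight ahead, 2 m away
                [0.5, -0.2, 2.0]])  # off to one side
print(project(pts))  # [[160. 120.], [285. 70.]]
```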

One reason for doing this is to combat the oncoming tsunami of deep fakes in the form of AI-generated videos that purport to be people saying and doing things they never actually said or did.

Of course, there’s always the view that “AI taketh away and AI giveth back.” If the bad guys can use AI to generate deep fakes, then the good guys can use AI to detect them.

Additionally, unlike conventional 2D images, depth maps in the form of 3D point clouds provide a 3D spatial understanding of the scene. It’s incredibly challenging for AI algorithms to generate artificial depth data consistent with the real world, so merging 2D camera data with 3D point cloud data adds an extra layer of information to an image or video that could be used to separate the wheat (real-world imagery) from the chaff (AI-generated imagery).
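
I can imagine (and I stress this is my own speculation, not PreAct’s method) such a check boiling down to something like the following: compare the depth an AI model predicts from the 2D image against the depth the LiDAR actually measured, and flag the footage if the two disagree:

```python
# Hypothetical depth-consistency check (my speculation, not
# PreAct's method). `predicted_m` would come from a monocular
# depth-estimation model run on the 2D image.

import numpy as np

def depth_consistency(measured_m: np.ndarray, predicted_m: np.ndarray,
                      tolerance_m: float = 0.25) -> float:
    """Fraction of pixels where predicted and measured depth agree."""
    return float((np.abs(measured_m - predicted_m) < tolerance_m).mean())

measured_m = np.full((240, 320), 2.0)  # toy scene: flat wall at 2 m
predicted_m = measured_m + np.random.normal(0.0, 0.05, measured_m.shape)
print(f"{depth_consistency(measured_m, predicted_m):.2f}")  # ~1.00
```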

To be honest, there’s a lot for us to wrap our brains around here, so I’d be very interested to hear your thoughts on this topic.
