
CogniVue Drives at Mobileye

CogniVue recently made a roadmap announcement that puts Mobileye on notice: CogniVue is targeting Mobileye’s home turf.

We looked at Mobileye a couple of years ago; their space is Advanced Driver Assistance Systems (ADAS). From an image/video-processing standpoint, they apparently own 80% of this market. According to CogniVue, Mobileye got there by entering early with a proprietary architecture and then refining and optimizing it over time to improve its ability to classify and identify objects in view. And they’ve been able to charge a premium as a result.

What’s changing is the ability of convolutional neural networks (CNNs) to move this capability out of the realm of custom algorithms and code, opening it up to a host of newcomers. And, frankly, making it harder for players to differentiate themselves.
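To make that shift concrete: where ADAS vision once meant hand-tuned feature extraction and classification code, a CNN classifier is largely a stack of generic layers trained on labeled data. Below is a minimal sketch of such a classifier, assuming PyTorch; the architecture, the 64x64 input crops, and the four classes (pedestrian, vehicle, sign, background) are illustrative assumptions, not anything from CogniVue or Mobileye.

# Minimal sketch of a CNN object classifier of the kind discussed above.
# Assumes PyTorch; layer sizes, 64x64 input crops, and the four classes
# are illustrative assumptions, not CogniVue's or Mobileye's design.
import torch
import torch.nn as nn

class TinyAdasClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Convolutional feature extractor: learned filters stand in for
        # hand-crafted edge/gradient features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        # Classifier head: maps pooled features to per-class scores.
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)

if __name__ == "__main__":
    model = TinyAdasClassifier()
    # A batch of four 64x64 RGB crops, e.g. candidate regions from a frame.
    crops = torch.randn(4, 3, 64, 64)
    scores = model(crops)
    print(scores.shape)  # torch.Size([4, 4]): one score per class per crop

The particular network isn’t the point; the point is that the classification logic itself becomes generic, so differentiation shifts toward training data and, as CogniVue’s pitch suggests, toward how efficiently the resulting network runs.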

According to CogniVue, today’s CNNs are built on GPUs and are huge, and those GPUs don’t have the low-power profile needed for mainstream automotive adoption. CogniVue’s announcement debuts their new Opus APEX core, which they say supports CNNs in a way that translates to practical commercial use in ADAS designs. They claim Opus improves the power/performance ratio by 5-10 times over their previous G2 APEX core.

You can find more commentary in their announcement.

 

Updates: Regarding Opus’s capacity to implement CNNs, the original version stated, based on CogniVue’s statements, that more work was needed to establish that Opus supports CNNs well. CogniVue has since said that they’ve demonstrated this through “proprietary benchmarks at lead Tier 1s,” so I removed that qualifier. Also, it turns out that the APEX core in a Freescale device (referenced in the original version) isn’t Opus, but the earlier G2 version; the mention in the press release (which didn’t specify G2 or Opus) was intended not as a testament to Opus specifically, but to convey confidence in Opus based on experience with G2. The Freescale reference has therefore been removed, since it doesn’t apply to the core being discussed.
