
These Aren’t Your Mother’s SoMs!

I just learned something new. I thought the “System-on-Module” (SoM or SOM, depending on your preference) moniker was relatively new. I should have known better. “What has been will be again, what has been done will be done again; there is nothing new under the sun,” as the prophet said in Ecclesiastes 1:9.

No matter what your religious persuasion, it has to be admitted that the lad was a tad on the gloomy side. Not the sort of person you’d expect to be invited to too many parties. Can you imagine summoning all your friends to your abode—the music is playing (could that be 100 Polka Classics on the Accordion, I wonder?), and everyone is happily chatting, quaffing*, and munching on tasty treats. And then the door is flung open and the prophet strides in, bellowing out his favorite greeting (the one with which he opened the book of Ecclesiastes): “Meaningless! Meaningless! Everything is Meaningless.” It would certainly put a bit of a damper on things. (*For the uninitiated, quaffing is like regular drinking except you tend to spill more down your chest.)

But we digress… I was just wandering around the Wikipedia as is my wont, having a quick Google while no one was looking, searching for the history of SOMs, when I ran across the following: “The acronym SoM has its roots in the blade-based modules. In the mid-1980s, when VMEbus blades used M-Modules, these were commonly referred to as system on a module (SoM). These SoMs performed specific functions such as compute functions and data acquisition functions. SoMs were used extensively by Sun Microsystems, Motorola, Xerox, DEC, and IBM in their blade computers.”

Well. Who would have “thunk”? The reason for my interest in this topic is that I was just chatting with Ohad Yaniv, who is the CEO of Variscite. The topic of our discussion was the fact that Variscite recently released their latest-and-greatest state-of-the-art SOM for efficient machine learning (ML) on edge devices.

This bodacious beauty, the VAR-SOM-MX93, is based on NXP’s i.MX 93 processor, which is the industry’s first implementation of the Arm Ethos-U65 microNPU neural processing unit (NPU). As we’ll see, the VAR-SOM-MX93 (let’s just call it the MX93 for short) targets a rich set of applications in the industrial, IoT, smart home, automotive, and wearables markets.

When I’m writing this sort of column, I sometimes talk about the company first and the product second. In this case, however, I’m going to lead with the product because… well, you’ll see (said Max, mysteriously).

Meet the VAR-SOM-MX93 (Source: Variscite)

Designed to accelerate ML, the MX93 boasts an energy flex architecture for efficient processing. Along with the dual 1.7GHz Cortex-A55 cores of its NXP i.MX 93 processor, the MX93 adds a 250MHz Cortex-M33 real-time co-processor and a dedicated neural processing unit (NPU) featuring 256 MACs operating at up to 1.0GHz and 2 OPS/MAC for a combination of performance and efficiency within an optimized footprint that enables developers to create high-performance, cost-effective, and energy-efficient ML applications (pause for deep breath).
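For those who enjoy a back-of-the-envelope calculation, the NPU figures quoted above (256 MACs, 2 OPS/MAC, up to 1.0GHz) imply a peak throughput of about 0.5 TOPS. A minimal sketch of the arithmetic:

```python
# Peak-throughput estimate for the Ethos-U65 microNPU,
# using the figures quoted above (256 MACs, 2 OPS/MAC, 1.0GHz max clock).
macs = 256            # parallel multiply-accumulate units
ops_per_mac = 2       # each MAC counts as two ops (one multiply + one add)
clock_hz = 1.0e9      # 1.0GHz maximum clock rate

peak_ops_per_sec = macs * ops_per_mac * clock_hz
print(f"Peak throughput: {peak_ops_per_sec / 1e12:.3f} TOPS")  # 0.512 TOPS
```

That 0.512 TOPS is a theoretical ceiling, of course; real-world throughput depends on how well a given model’s layers map onto the MAC array.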

But wait, there’s more, because this SOM also offers camera interfaces and high-quality image processing (MIPI-CSI2 and MIPI-DSI 1920×1200 24-bit), plus a range of connectivity options: certified dual-band high-speed WiFi, low-power BT/BLE, dual GbE, dual USB2, and CAN-FD. Security is implemented via NXP’s EdgeLock secure enclave, a preconfigured, self-managed, autonomous security subsystem.

Apart from anything else, the camera interfaces and the on-SOM NPU mean that high-speed machine vision functions like object detection and recognition can be performed at the edge.

What’s not to love?

But we haven’t finished yet. One thing I really like is the fact that we (as users) can customize this little scamp by selecting how much LPDDR4 RAM we want, from 512MB to 2GB (I hear 4GB is also available, although this doesn’t show on the web at the time of this writing), and how much eMMC flash memory we desire, from 8GB to 128GB.

The thing is that there are several companies offering SOMs. I like to think that I’m not a mean man, but I felt moved to make mention of this fact to Ohad and to ask him, “What makes your SOMs better than any of the others that are out there?” As quick as a flash, he replied, “Ours are dark green, not light green like the others!”

You can’t argue with logic like that.

Ohad went on to point out that Variscite has been in business for 20 years and that they have an enviable reputation for quality, support, and the longevity of their products. He also noted the pin-compatible nature of their SOMs, which means users of the MX93 could migrate to a lower-power single-core SOM if they wish to save money, or to a higher-performance six-core SOM if they feel the need for speed.

With respect to longevity, some of Variscite’s customers have products built around earlier Variscite SOMs from 10 or so years ago, and these products are still selling with the same SOM today. Such customers can keep the main product, swap the old SOM for an MX93, and extend the product’s lifetime by another 10 or 15 years. On the MX93 webpage, I see that this SOM is guaranteed to be in production until at least 2038 (by which time there will doubtless be numerous higher-performing pin-compatible alternatives available).

But the real differentiator, the metaphorical dollop of whipped cream on top of the cake, as it were, is the fact that all of Variscite’s SOMs are built in-house in their medically certified facility. The medically certified part is important, because in addition to industrial and IoT use, a lot of these little rascals find themselves deeply embedded in medical applications.

In fact, everything is done in-house—the entire enchilada, from design and development to part procurement to final assembly and testing. On the software side, all of the operating systems and real-time operating systems (e.g., Linux, Android, FreeRTOS) are handled in-house, including driver creation and so forth. Furthermore, the engineers who design and develop these products are the same engineers who support them. I like that very much!

As always, I could waffle on for hours, but I’d rather hear what you have to say about all this. I think I can hear the comment field below calling out to you. Don’t be shy.
