
Something’s Coming but I Can’t Say What!

I just received an unusual briefing, and now I’m sitting here at the start of this column looking at a blank page thinking to myself, “So, how do I go about explaining this one?” What we’re talking about here is a next-generation processing chip that’s based on a new processing architecture. Known as a hierarchical learning processor (HLP), this technology is intended to be a game-changer for tasks like artificial intelligence (AI) training, high-performance computing (HPC), and metaverse processing (don’t ask, I’ll tell you later).

Generally speaking, when I receive a briefing on something like this, its creators thrust eye-wateringly complicated architectural block diagrams at me with reckless abandon. It’s also common for me to be taken on a deep dive through the inner workings of the beast, only occasionally managing to raise my nose above the surface of the PowerPoint slide deck to gasp for air. Not this time. Although I have been given a hint of a sniff of a whiff of some of the high-level functional units of which this device is composed, I’ve not even been presented with an image of a black box.

Is this bodacious beauty available now? Sadly not. Test silicon and performance signoff are scheduled for this year (2022), with customer samples following in 2023 and full-on production in 2024.

During the course of my career, I’ve been regaled with promises of delight, I’ve been told tall tales of wonders that are to come, and I’ve seen vaporware versions of hardware and software you wouldn’t believe (it’s the audacity of the vaporware you wouldn’t believe; the hardware and software didn’t exist in any of the dimensions with which I’m familiar). I know you’ll find this difficult to believe but — on occasion — many of the promised products didn’t turn out as their proponents originally proposed. (This might be a good time for you to imagine me standing tout seul in the middle of an unforgiving city on a cold blustery night with a tumbleweed rolling down the street in the background, my lower lip quivering, and a little tear rolling down my cheek.)

Based on these past experiences, in the normal scheme of things, this is the point where I would typically have bailed out of the briefing, muttering platitudes while sidling towards the nearest exit. However, this wasn’t the normal scheme of things because I was being briefed by industry veteran Dr. Venkat Mattela, who is Founder and CEO of Ceremorphic.

Venkat has an interesting way of going about things. First, he looks for a problem that needs solving. Next, he spends the time and effort to build the underlying technology required to address the problem. It’s only then that he starts working on game-changing products based on his new technology. This is the approach Venkat took at Redpine Signals, his previous company, where he was also Founder and CEO. Under his leadership, Redpine Signals delivered breakthrough innovations and industry-first products, culminating in an ultra-low-power wireless solution that outperformed products from wireless industry giants by as much as 26 times in terms of energy efficiency.

Silicon Labs acquired Redpine Signals’ wireless assets for $308 million in March 2020. If this had been yours truly, I would have taken some time to run for the hills and bask on the beach (I never metaphor I didn’t like). By comparison, only one month later, in April 2020, Venkat founded Ceremorphic. 

So, let me tell you what I do know. There are already other players in this area, so — as Venkat says — if HLP offers only a 50% improvement over existing solutions, no one will be interested. By comparison, if HLP can outperform the competition by 50X in terms of performance and power efficiency, then this will cause people to sit up and pay attention.
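
Just to put those numbers into perspective, here’s a minimal back-of-the-envelope sketch in Python. To be clear, the baseline workload figures below are my own made-up illustrations (Ceremorphic hasn’t shared any such numbers); the point is simply how differently a 1.5X gain and a 50X gain scale a hypothetical training run.

```python
# A quick, purely illustrative comparison of what "50% better" versus
# "50X better" means for a hypothetical training workload. None of these
# baseline numbers come from Ceremorphic; they are made up for scale.

baseline_hours = 240.0        # imagine a 10-day AI training run
baseline_energy_kwh = 5000.0  # and the energy it consumes

for label, factor in [("1.5X (i.e., 50% better)", 1.5), ("50X better", 50.0)]:
    print(f"{label}: {baseline_hours / factor:.1f} hours, "
          f"{baseline_energy_kwh / factor:.1f} kWh")

# 1.5X (i.e., 50% better): 160.0 hours, 3333.3 kWh
# 50X better: 4.8 hours, 100.0 kWh
```

In other words, a 1.5X gain leaves you in much the same operational ballpark, while a 50X gain changes what you can reasonably afford to attempt.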

It turns out that the underlying architectural concepts for HLP have been years in the making. Furthermore, a physical realization of HLP technology has been in development since 2018. Initial development employed TSMC’s 7nm process node, switching to the newly available 5nm technology node when Ceremorphic was launched in 2020.

With the exception of things like GPUs and FPGAs, the vast majority of companies creating processor chips for AI are targeting ultra-low-power devices for inferencing at the edge. These devices are typically not intended to scale up to data centers and the cloud. By comparison, HLP technology is primarily targeted at high-end deployments and applications like data centers, AI training, automotive advanced driver assistance systems (ADAS) and autonomous driving (AD) systems, robotics, life sciences, and metaverse processing.

“Remind me again; what’s metaverse processing?” I hear you cry. I’m pleased to see that you’re paying attention. As the Ceremorphic website tells us: “The convergence of extended reality (XR), augmented reality (AR), mixed reality (MR), and virtual reality (VR) is an exciting new capability that enables the unprecedented creation of virtual worlds. Metaverse processing creates a greater overlap of digital and physical spaces to make productivity and entertainment applications abundant.” Well, I’m glad we’ve cleared that up.

One interesting aspect to all of this is that — although it wouldn’t initially be cost-effective to use the latest and greatest technology node to create processing engines targeted only at cheap-and-cheerful IoT devices — HLP technology is based on chiplets, which allows it to start off targeting high-end applications and then be scaled down to mid-range and low-end applications over time.

Another point that caught my attention was the use of intellectual property (IP). When it comes to functions like PCIe 6.0 / CXL 3.0 connectivity interfaces, most companies will purchase this IP from a third party. By comparison, while Venkat has no problem purchasing commodity IP blocks, he prefers to develop his own low-energy incarnations of high-end IPs like PCIe 6.0 / CXL 3.0 because this allows him to further differentiate his products. (Although it’s not a big focus for them, the folks at Ceremorphic won’t be averse to licensing this IP to other companies working in different markets.)

The limited details that are available regarding the QS 1, which will be the first member of the HLP family, include the following key features:

  • Custom machine learning processor (MLP) running at 2GHz.
  • Custom floating-point unit (FPU) running at 2GHz.
  • RISC-V processor for proxy processing running at 1GHz, based on the company’s patented ThreadArch multi-thread processing macro-architecture.
  • Custom video engines for metaverse processing running at 1GHz, along with an Arm Cortex-M55 core (the first Arm core to feature Arm Helium vector processing technology for enhanced, energy-efficient digital signal processing (DSP) and machine learning (ML) performance).
  • Quantum-resistant security microarchitecture.
  • Custom-designed x16 PCIe 6.0 / CXL 3.0 connectivity interface.
  • Open AI-framework software support with an optimized compiler and application libraries.
  • Soft error rate: 1 in 100,000 (see the quick sketch following this list).
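
While we’re here, that last bullet’s “1 in 100,000” figure invites the obvious question of what such a soft error rate means in practice. Here’s a minimal sketch, assuming (and this is purely my assumption; the briefing didn’t specify the unit) that the figure can be read as a per-unit probability p = 1e-5, in which case the chance of seeing at least one soft error across n independent units is 1 - (1 - p)^n.

```python
# A minimal sketch of what a 1-in-100,000 soft error rate might imply.
# Assumption (mine, not Ceremorphic's): the quoted figure is a per-unit
# probability of 1e-5, where the "unit" could be an operation, a
# transaction, or an hour of operation (the briefing didn't say).

error_rate = 1.0 / 100_000  # the quoted soft error rate

def prob_at_least_one_error(n: int, p: float = error_rate) -> float:
    """Probability of at least one soft error across n independent units."""
    return 1.0 - (1.0 - p) ** n

for n in (1_000, 10_000, 100_000):
    print(f"n = {n:>7,}: P(>=1 error) = {prob_at_least_one_error(n):.4f}")

# n =   1,000: P(>=1 error) = 0.0100
# n =  10,000: P(>=1 error) = 0.0952
# n = 100,000: P(>=1 error) = 0.6321
```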

Ceremorphic is backed by more than 100 patents in core technologies; it’s led by a proven management team, board of directors, and group of technical advisors; and it currently has 150 full-time employees, with plans to ramp up to 250 this year.

So, at the end of the day I have to say that I think this briefing was a success because — if nothing else — it’s left me longing for more on the nitty-gritty details front. How about you? Do you have any thoughts you’d care to share? 
