
Living With Ambiguity: The Other HLS

Synthesizing Matlab and SystemC code

 

As the discussion wound down, the meeting leader asked the assembled languages about their capabilities as sources for RTL generation.

Both Matlab and SystemC jumped into action and started expounding at the same time. “You can use Simulink to add timing and system elements…” “…pre-mapped cells to RTL…” “… directly to FPGAs…” “… avoiding endless timing closure iterations…”

But at the same time, C chimed in with, “Oh, I’ve been doing that for years! There are all kinds of tools you can use to turn me into RTL. We call it High-Level Synthesis.”

“Me too!” said C++ after C had finished.

“I’m sorry, C,” said the meeting leader, “I’m not sure I quite got that because you were being interrupted by these other two unruly fellows. High-level synthesis, you say? Why, aren’t you a clever lad!”

“But we can do the exact same thing!” complained Matlab and SystemC.

“Now, you’re just jealous. You lot could learn a lot from C here…”

“From me too!” said C++.

It seems that ANSI C (and C++) have been the darlings of high-level synthesis (HLS); that’s where most mainstream attention has gone. But C isn’t the only game in town.

In fact, a funny thing happens when you discuss the usage of C with many designers. Try describing to them why C is useful for, say, designing algorithms without worrying about how to implement them, and, more often than not, you’ll get the following reaction: “Oh, we use Matlab for that.”

Kind of leaves you wondering who actually uses C. Clearly people do, so Question One is: who uses C, and who uses Matlab?

Then there’s that specific hardware version of C, or, more accurately, of C++: SystemC. Often thought of only in verification terms, SystemC is claimed by Forte Design Systems to merit HLS status, thanks to their ability to synthesize RTL from it. So Question Two is: how is SystemC synthesis different from C/C++ synthesis, and is it really HLS?

Matlab vs. C – no clear decision

First, let’s get some background. It seems like everyone has heard of “Matlab,” even though that’s actually the name of a tool produced by The Mathworks. Somewhat less familiar may be the term “M-code” for code written in the language used by Matlab, or “M Language” for the language itself. But it’s all more or less the same thing, and we’ll leave it to the lawyers and marketeers to determine whether “M Language” is the generic name for “Matlab” or whether “Matlab” is simply the best-known program that processes “M Language.”

Looking at it from The Mathworks’ perspective, Matlab is a tool that allows high-level continuous-time modeling of a variety of mathematical constructs. It’s heavily used for filters and other aspects of digital signal processing. There is no notion of clocking; there is no notion of a system. It’s all about the math.
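
To make the “it’s all about the math” point concrete, here is a minimal untimed filter sketch, written in plain C++ rather than M-code; the filter, its coefficients, and the impulse input are purely illustrative. Everything is floating point, and there is no notion of a clock, a bus, or an I/O protocol.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Untimed, floating-point FIR filter: pure math, with no clocks, no system,
// and no I/O protocol; roughly the level of abstraction Matlab works at.
std::vector<double> fir(const std::vector<double>& coeffs,
                        const std::vector<double>& input) {
    std::vector<double> output(input.size(), 0.0);
    for (std::size_t n = 0; n < input.size(); ++n) {
        for (std::size_t k = 0; k < coeffs.size() && k <= n; ++k) {
            output[n] += coeffs[k] * input[n - k];
        }
    }
    return output;
}

int main() {
    std::vector<double> coeffs = {0.25, 0.5, 0.25};     // simple smoothing filter
    std::vector<double> input  = {1.0, 0.0, 0.0, 0.0};  // impulse
    for (double y : fir(coeffs, input)) std::cout << y << '\n';
    return 0;
}
```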

The Mathworks’ companion tool, Simulink, allows you to start the process of realizing a mathematical construct in an actual system. Critical at this phase is the quantization of floating-point values into fixed-point values if full floating-point processing isn’t going to be provided. Clock timing is also introduced, and various system elements can be brought together so that you’re verifying something a bit closer to reality.
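
As a rough illustration of that quantization step (a hand-rolled sketch, not how Simulink’s fixed-point tooling actually works), converting the floating-point coefficients from the earlier sketch into a signed Q1.15 representation might look like this:

```cpp
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

// Quantize floating-point coefficients to signed Q1.15 fixed point
// (1 sign bit, 15 fractional bits), saturating at the representable range.
// This is the flavor of decision made when a floating-point model is
// pushed toward a fixed-point hardware implementation.
std::vector<int16_t> quantize_q15(const std::vector<double>& coeffs) {
    std::vector<int16_t> out;
    out.reserve(coeffs.size());
    for (double c : coeffs) {
        double scaled = std::round(c * 32768.0);   // scale by 2^15
        if (scaled >  32767.0) scaled =  32767.0;  // saturate high
        if (scaled < -32768.0) scaled = -32768.0;  // saturate low
        out.push_back(static_cast<int16_t>(scaled));
    }
    return out;
}

int main() {
    // The smoothing-filter coefficients from the earlier sketch.
    for (int16_t q : quantize_q15({0.25, 0.5, 0.25}))
        std::cout << q << '\n';                    // prints 8192, 16384, 8192
    return 0;
}
```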

From this point, a couple of directions are possible. FPGAs have allowed the direct implementation of Matlab – er – M-code for some time, pioneered by AccelChip before their acquisition by Xilinx. But you can also generate C code from the M-code representation, generally seen as useful for creating C models for verification testbenches. But… couldn’t you also generate RTL from that C? Would two generations of generation – M to C to RTL – render an altogether inefficient result?

And then there’s the real question we’re trying to get at: should you really start with Matlab or with C? I figured that The Mathworks would be an obvious first place to turn for some insight on this. After all, they’ve been at this game for a long time. Ken Karnofsky, their Senior Strategist for Signal Processing and Communications, came up with some interesting correlations but no hard-and-fast rules. They are: SoC applications tend to use C more than non-SoC apps; analog/mixed-signal (AMS) designers tend to use the Matlab/Simulink flow more than digital designers do; engineers with a software background tend to use C more than “domain experts” do; and Matlab tends to be more of a research vehicle, C more of a production vehicle.

Interesting, but it’s kind of a tail-wagging-the-dog thing. If you’re trying to figure out whether to use C or M, those rules aren’t much help.

The landscape has changed recently with Synopsys’s announcement of Synphony, an HLS tool for generating RTL from M-code. Their focus is on the datapath (with some support for control paths) and, in particular, the process of creating fixed-point implementations from floating-point models. They also generate C/C++ models for use in high-level verification. They specifically see their M-language approach as complementary to C/C++ flows, not as competing.

So how do they see M vs. C? In a discussion with Synopsys’s Chris Eddington and George Zafiropoulos, the observation was that wireless applications tend to use M more, while video applications tend to use C more. But again, these are correlations. No one seems quite sure what the overriding determiner is between M and C.

In fact, if we’re really honest with ourselves, it really seems like there isn’t one.

And over in this corner…

Meanwhile, Forte’s Brett Cline minces no words regarding his view on C vs. SystemC. “ANSI C is a dead end. It’s easy to get started on a design but hard to finish.” Forte’s Cynthesizer tool synthesizes RTL from SystemC using cell-mapped IP, matched with a specific technology. You give it a .lib file and a clock speed and it will generate an optimized datapath. You can do the same when targeting an FPGA.

The real kicker from their standpoint is that you can do your validation at the untimed level and then generate RTL that you know will work; they say that other technologies require you to postpone much of your validation until you get to the RTL level.

Many folks see SystemC as simply a hardware language with a C++ syntax, and that makes it suspect as an HLS methodology. But Forte sees HLS as simply starting from an untimed representation and synthesizing a timed one. You can start with an untimed SystemC representation (with the possible exception of interfaces, which may be pin- and cycle-accurate), and from there you can generate RTL, so it qualifies as HLS in their book.
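
For readers who haven’t seen SystemC, the fragment below is a bare-bones sketch of what that looks like (an illustrative multiply-accumulate module of my own, not Forte-specific code): the arithmetic in the body is ordinary, untimed C++, while the ports, clock, and reset give a tool a pin- and cycle-accurate boundary to synthesize against.

```cpp
#include <systemc.h>

// Minimal illustrative module: the multiply-accumulate body is plain C++;
// only the ports and the clocked thread are cycle-aware.
SC_MODULE(mac) {
    sc_in<bool>         clk;
    sc_in<bool>         rst;
    sc_in<sc_int<16> >  a, b;
    sc_out<sc_int<32> > acc;

    void run() {
        sc_int<32> sum = 0;
        acc.write(0);
        wait();                          // come out of reset
        while (true) {
            sum += a.read() * b.read();  // untimed arithmetic
            acc.write(sum);
            wait();                      // one result per clock
        }
    }

    SC_CTOR(mac) {
        SC_CTHREAD(run, clk.pos());
        reset_signal_is(rst, true);
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    sc_signal<bool> rst;
    sc_signal<sc_int<16> > a, b;
    sc_signal<sc_int<32> > acc;

    mac dut("dut");
    dut.clk(clk);
    dut.rst(rst);
    dut.a(a);
    dut.b(b);
    dut.acc(acc);

    rst = true;  a = 3;  b = 4;
    sc_start(20, SC_NS);                 // hold reset for a couple of cycles
    rst = false;
    sc_start(50, SC_NS);                 // accumulate a few products
    cout << "acc = " << acc.read() << endl;
    return 0;
}
```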

One might also argue that SystemC, as a specific C++ environment, is less general than “pure” C++. But Brett argues that there really is no such thing as pure C++ for HLS; you typically are required to use certain classes. So, he says, it “is really SystemC vs. MentorC.”

As to the murkier question of where M Language fits into the picture, well, he was also unable to draw a clear boundary. Filters rear their head again as a Matlab indicator… more correlation.

So… what have we learned from all of this? Primarily that there is more to HLS than C and C++. Despite the infatuation with C, Matlab or M Language has a role, as does SystemC.

With respect to Question Two, Forte sees a clear distinction between their SystemC and ANSI C/C++ methodologies, and, according to their definition, SystemC-to-RTL can indeed be HLS.

As to Question One, well, there is no satisfying answer: the distinction between where M and other languages get used is really not clear. There apparently is an ill-defined world where M is used and some other ill-defined world where C is used. Sometimes they’re used together. It’s not a good place to be if you have a hard time dealing with ambiguity.

Links:

Forte Cynthesizer

The Mathworks Matlab

Synopsys Synphony

 
