Xilinx Divides the World

Separate Flows Target Software and Hardware

The problem… is you.

I know, it seems a bit harsh, blaming FPGA designers for restricting the expansion of the FPGA market. After all, FPGA designers are the fans, right? We are the loyal, the ones who have supported the technology all these decades, the ones who have toiled and struggled and applied our customer-side creativity to help solve the myriad challenges associated with getting one of the coolest and oddest chip architectures ever invented to behave well enough for actual system use.

Exactly.

Since today’s FPGA designers basically grew up with FPGAs, a kind of co-dependent relationship evolved. FPGAs and tools bent toward the designers, and the designers bent toward the limitations of FPGAs. Today, a top-notch FPGA designer is a kind of sage, a wizard whose profound insight and fire-forged expertise afford an almost magical command of the intrinsic power of FPGA technology. The nunchaku, in the hands of the ninja, are formidable weapons. Wielded by a novice, however, they are much more likely to injure the user than the opponent. (Yeah, we know. We all knocked ourselves in the noggin at least once. Your secret is safe with us.)

Now, it is time for FPGAs to leave the nest, to go out into the world, to conquer new lands and explore new horizons. New horizons – such as computation. Computation, of course, is the domain of the programmer. And programmers live in a different universe, speak a different vernacular, and have a different set of values from hardware engineers. 

Hardware engineers care about optimization. Programmers care about managing complexity. In the Venn diagram of those two priorities, the circles probably don’t touch at all. Witness the years of work hardware engineers have invested trying to produce the perfect multiplier – one that will find the product of two numbers in the shortest amount of time, using the smallest possible silicon area, and consuming the tiniest amount of power. Endless iterations of architecture and implementation nuances have gone into the quest for the optimal solution, and the search continues still.

Programmers, on the other hand, can ill afford to waste time and energy in an esoteric assessment of the asterisk of multiplication. They have MUCH bigger fish to fry. They want to throw down a quick “A takes the value of X (splat) Y” and be done with it. The programmer’s problem is complexity – with a typical interesting system containing millions of lines of code. Getting all those millions of lines to march in the same direction with a minimal number of bugs and a structure that has some chance of being understood by those who must maintain and enhance that system in the future is a formidable challenge. Programmers need productivity above all else. They need rapid iteration, insightful analysis, and impeccable organization. They need to be able to quickly and cleanly create their code structures, locate and remove bugs with aplomb, and manage their complex virtual creations in a structured and easily understandable manner. 
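To make that contrast concrete, here is a quick sketch of our own (not anything from Xilinx) of the same multiply as seen from both sides of the divide. The function names are invented for illustration; the point is that the programmer’s entire involvement is one asterisk, while the hardware questions live elsewhere.

```cpp
#include <cstdio>

// The programmer's view: one line, done.
int scale(int x, int y) {
    return x * y;   // "A takes the value of X (splat) Y"
}

// The hardware engineer's view of the very same line: should it map to a DSP
// block or to LUTs? How many pipeline stages? What latency, area, and power?
// In an HLS flow those questions are typically answered with tool directives
// rather than by rewriting the code itself. (Directive spellings vary by tool
// and version, so none are shown here.)

int main() {
    printf("%d\n", scale(6, 7));  // 42 -- and not a femtosecond spent on timing closure
    return 0;
}
```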

If FPGAs are to become the compute engines of the future, the flow of FPGA design must make a tectonic shift toward the software engineer. The devices must seem like processors, and the implementation must feel like programming. Synthesis, simulation, and layout must be replaced by compilation and debug. Detailed control of optimization options must be replaced by rapid iteration and productive workflow. In short, the whole hardware-engineer-centric FPGA process must be turned on its side.

That is exactly what Xilinx is doing today.

First, the company re-built its implementation suite from the ground up. After more than two decades of patching and upgrading, the old ISE tools were getting a bit long in the tooth, and they needed a major update to keep up with the demands of modern, ultra-high-complexity FPGAs. But a ground-up re-design gave the company an opportunity to re-think some other assumptions as well. It presented a chance to think about multiple user personas. In addition to the classic FPGA ninja, Xilinx wanted to serve a new audience – the world of software engineers – with its vastly different set of priorities and values.

Enter SDx (for “Software-Defined (whatever)”), which is Xilinx’s marketing approach to the various flavors of software-based systems engineering. In different application domains, software developers speak different dialects. Software-defined networking is markedly different from search, or big data processing, or image and signal processing. So, Xilinx wanted a brand that was extensible to these various tribes, with a basic underlying flow that would be understandable and familiar to all flavors of software engineers. The first two examples are SDNet (a software-defined specification environment for networking) and SDAccel (a development environment for OpenCL, C, and C++).
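For a flavor of what that looks like in practice, here is a hedged sketch of the kind of kernel a software developer might hand to an SDAccel-style flow. It is just ordinary C++ (the flow also accepts OpenCL C, per the list above); the function and argument names are made up for illustration.

```cpp
#include <cstdio>

// Hypothetical accelerator kernel: written exactly the way a software engineer
// would write it for a CPU. No clocks, no resets, no state machines in sight.
extern "C" void vadd(const int* a, const int* b, int* out, int n) {
    for (int i = 0; i < n; ++i) {
        out[i] = a[i] + b[i];   // the developer thinks "loop", not "datapath"
    }
}

int main() {
    int a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    vadd(a, b, c, 4);                      // runs today as plain software on the host...
    for (int i = 0; i < 4; ++i) printf("%d ", c[i]);
    printf("\n");
    return 0;                              // ...and is later compiled into FPGA fabric
}
```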

At the same time, the company wanted to preserve the hard-earned respect of hardware engineers by keeping the Vivado brand and flow focused on their needs. So, in marketing terms, Vivado is for hardware engineers, and anything starting with “SD” is for software engineers. Yes, under the hood, the “SD” flows share the important underlying components – synthesis, place and route, HLS, simulation, etc. But, the interaction with the user has a decidedly software-centric accent.

Because the company now makes devices like Zynq – which combine sophisticated FPGA-like fabric and peripherals with a complex, high-performance, ARM-based conventional processing system – it must support both a hardware and a software development flow for the same device. The easy approach to that – conventional FPGA tools for the hardware portion and a traditional software development IDE for the software part – would fail to take full advantage of the unique power and capabilities of these integrated devices.

In order to truly transcend the hardware/software partitioning problem, an environment was required that was hardware/software agnostic – where algorithms could be described in the native tongue of the systems engineer, and the final implementation of each portion (whether hardware or software) could be left for the optimization phase of the design process. This is what Xilinx is working to achieve with the SDx suite, and it’s more than just a marketing ploy. There is substantial tool and IP development behind the company’s software-centric SDx ecosystem.

One place in which the software challenge is more demanding than the hardware one is in the turnaround/iteration time expectation. Hardware engineers are accustomed to changing a few lines of VHDL and then facing hours or even days of waiting for the tools to complete before the new version can be evaluated in working hardware. Software engineers start to break out into a cold sweat if their code-to-evaluation time is more than a minute or two. They have been known to fly into fits of rage if the process takes half an hour or more. If the hour hand on the clock passed more than one tick, they’d probably fall into a full-on catatonic depression.

Xilinx’s answer to this has two clever components. First, their code-to-hardware strategy is based around their high-level synthesis (HLS) tool. While HLS is “compiling” your code (shhh, don’t tell the software folks, but it’s actually performing complex behavioral synthesis – doing resource allocation, scheduling, and construction of datapath, controller, and memory structures), it also pops out a handy-dandy software-executable (but hardware-cycle accurate) model that you can use immediately for functional testing, debug, and performance evaluation. This gives software engineers something bright and shiny to play with – distracting them from the fact that logic synthesis and place-and-route are toiling tirelessly in the background, preparing their code snippet for actual deployment in FPGA fabric. 
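As a rough sketch of that dual-purpose trick – assuming a Vivado-HLS-style C++ kernel, with the function itself invented for illustration while the PIPELINE pragma is a real HLS directive – the same source file can be compiled with an ordinary C++ compiler and run immediately for functional testing, while the HLS tool chews on the pragma to build the pipelined hardware.

```cpp
#include <cstdio>

// Hypothetical kernel: the pragma is a Vivado HLS directive asking for a
// pipelined loop; an ordinary compiler simply ignores it (with a warning).
void dot8(const int a[8], const int b[8], int* result) {
    int acc = 0;
    for (int i = 0; i < 8; ++i) {
#pragma HLS PIPELINE II=1       // request a new loop iteration every clock cycle
        acc += a[i] * b[i];
    }
    *result = acc;
}

int main() {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    int r = 0;
    dot8(a, b, &r);             // instant functional check, no bitstream required
    printf("dot = %d\n", r);    // prints 120
    return 0;
}
```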

The second component of the strategy is run-time related. The problem is that FPGAs are designed to be cold-started, configured, and then run for an indeterminate amount of time on that configuration. Processors, on the other hand, are ready to tackle constantly changing code bases without missing a beat – and certainly without requiring a full-on re-start. To keep system continuity while reconfiguring the FPGA portion of an active algorithm, Xilinx takes advantage of partial reconfiguration of the FPGA, so the computing system doesn’t go crazy when one of its most important chips has to go offline unexpectedly for housekeeping. 
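From the software engineer’s chair, the sketch below shows roughly what that looks like in an OpenCL-style host program: swapping accelerators at run time appears as nothing more exotic than loading a new precompiled FPGA binary with the standard clCreateProgramWithBinary call – presumably the point where the partial reconfiguration described above gets handled under the hood. The binary file name and kernel name here are hypothetical, and error handling is trimmed to the bone.

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

int main() {
    // Pick the first platform and accelerator device the OpenCL runtime reports.
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, nullptr);
    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);

    // Load a precompiled FPGA binary from disk (file name is hypothetical).
    std::ifstream f("new_kernel.xclbin", std::ios::binary);
    std::vector<unsigned char> bin((std::istreambuf_iterator<char>(f)),
                                   std::istreambuf_iterator<char>());
    const unsigned char* bin_ptr = bin.data();
    size_t bin_size = bin.size();

    // To the programmer this is just "load a new program"; the running system
    // keeps its continuity while the fabric underneath is reconfigured.
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &bin_size,
                                                &bin_ptr, nullptr, &err);
    cl_kernel krnl = clCreateKernel(prog, "vadd", &err);   // hypothetical kernel name

    printf("kernel swapped in: %s\n", err == CL_SUCCESS ? "ok" : "failed");

    clReleaseKernel(krnl);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
```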

Xilinx says that more SDx variants are coming, and it’s clear that SDx is part of a bigger strategy to engage a much wider audience than programmable logic devices have historically supported. If the strategy is successful, we’ll start seeing FPGAs (and FPGA-like devices) in a host of applications we would not have contemplated before. In most of those domains, the “system engineer” is much more likely to have software rather than hardware skills and expertise, as the majority of the application will be implemented in software. We are still in the earliest phases of this trend at an industry level, and the outcome certainly is not yet assured. It will be interesting to watch.

One thought on “Xilinx Divides the World”

  1. “In order to truly transcend the hardware/software partitioning problem, an environment was required that was hardware/software agnostic – …” Agreed, Kevin, and what Xilinx offers is a solution for two classes of problems (and hats off to them). They have also presented other solutions that exploit the Vivado tool suite in their All Programmable Abstractions program. While we are not part of that program, startup Space Codesign offers an ESL HW/SW co-design solution that targets multimedia and aerospace applications, also making use of the Xilinx Vivado infrastructure to get down to the chip. The fundamental perspective is to look at the application and decompose it into functions before even thinking about what will eventually be implemented as hardware and what as software. We can afford to do that now.
