
Xilinx Divides the World

Separate Flows Target Software and Hardware

The problem… is you.

I know, it seems a bit harsh, blaming FPGA designers for restricting the expansion of the FPGA market. After all, FPGA designers are the fans, right? We are the loyal, the ones who have supported the technology all these decades, the ones who have toiled and struggled and applied our customer-side creativity to help solve the myriad challenges associated with getting one of the coolest and oddest chip architectures ever invented to behave well enough for actual system use.

Exactly.

Since today’s FPGA designers basically grew up with FPGAs, a kind of co-dependent relationship evolved. FPGAs and tools bent toward the designers, and the designers bent toward the limitations of FPGAs. Today, a top-notch FPGA designer is a kind of sage, a wizard whose profound insight and fire-forged expertise afford an almost magical command of the intrinsic power of FPGA technology. The nunchaku, in the hands of the ninja, are formidable weapons. Wielded by a novice, however, they are much more likely to injure the user than the opponent. (Yeah, we know. We all knocked ourselves in the noggin at least once. Your secret is safe with us.)

Now, it is time for FPGAs to leave the nest, to go out into the world, to conquer new lands and explore new horizons. New horizons – such as computation. Computation, of course, is the domain of the programmer. And programmers live in a different universe, speak a different vernacular, and have a different set of values from hardware engineers. 

Hardware engineers care about optimization. Programmers care about managing complexity. The Venn diagrams of those two priorities probably don’t touch at all. Witness the years of work hardware engineers have invested trying to produce the perfect multiplier – one that will find the product of two numbers in the shortest amount of time, using the smallest possible silicon area, and consuming the tiniest amount of power. Endless iterations of architecture and implementation nuances have gone into the quest for the optimal solution, and the search continues still.

Programmers, on the other hand, can ill afford to waste time and energy in an esoteric assessment of the asterisk of multiplication. They have MUCH bigger fish to fry. They want to throw down a quick “A takes the value of X (splat) Y” and be done with it. The programmer’s problem is complexity – with a typical interesting system containing millions of lines of code. Getting all those millions of lines to march in the same direction with a minimal number of bugs and a structure that has some chance of being understood by those who must maintain and enhance that system in the future is a formidable challenge. Programmers need productivity above all else. They need rapid iteration, insightful analysis, and impeccable organization. They need to be able to quickly and cleanly create their code structures, locate and remove bugs with aplomb, and manage their complex virtual creations in a structured and easily understandable manner. 
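
To see how far apart those two worlds sit, compare the two views of that single asterisk. Below is a minimal, purely illustrative sketch in C++: the one-liner the programmer writes, next to a shift-and-add structure of the sort a hardware engineer might spend a career tuning. Neither is any vendor's actual multiplier; they are stand-ins for the two mindsets.

    #include <cstdint>

    // The programmer's view: one line, move on.
    uint32_t product(uint16_t x, uint16_t y) {
        return static_cast<uint32_t>(x) * y;
    }

    // A flavor of the hardware engineer's view: the same operation expressed
    // as a shift-and-add structure, the kind of architecture whose width,
    // latency, and area get endlessly tuned. (Illustrative only; real RTL
    // would be far more involved.)
    uint32_t product_shift_add(uint16_t x, uint16_t y) {
        uint32_t acc = 0;
        uint32_t addend = x;
        for (int bit = 0; bit < 16; ++bit) {
            if ((y >> bit) & 1u) {
                acc += addend;   // conditionally accumulate the shifted multiplicand
            }
            addend <<= 1;        // shift for the next partial product
        }
        return acc;
    }

The programmer cares only that both return the same answer; the hardware engineer cares deeply about how the second one maps to gates, wires, and clock cycles.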

If FPGAs are to become the compute engines of the future, the flow of FPGA design must make a tectonic shift toward the software engineer. The devices must seem like processors, and the implementation must feel like programming. Synthesis, simulation, and layout must be replaced by compilation and debug. Detailed control of optimization options must be replaced by rapid iteration and productive workflow. In short, the whole hardware-engineer-centric FPGA process must be turned on its side.

That is exactly what Xilinx is doing today.

First, the company re-built its implementation suite from the ground up. After more than two decades of patching and upgrading, the old ISE tools were getting a bit long in the tooth, and they needed a major update to keep up with the demands of modern, ultra-high-complexity FPGAs. But a ground-up re-design gave the company an opportunity to re-think some other assumptions as well. It presented a chance to think about multiple user personas. In addition to the classic FPGA ninja, Xilinx wanted to serve a new audience – the world of software engineers – with their vastly different set of priorities and values.

Enter SDx (for “Software-Defined (whatever)”), which is Xilinx’s marketing approach to the various flavors of software-based systems engineering. In different application domains, software developers have different dialects. Software-defined networking is markedly different from search, or big data processing, or image and signal processing. So, Xilinx wanted a brand that was extensible to various tribes of software engineers, and which had a basic underlying flow that was understandable and familiar to all flavors of software engineers. The first two examples are SDNet (Software-defined specification environment for networking) and SDAccel (development environment for OpenCL, C, and C++).
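
For a feel of the SDAccel dialect, here is a minimal sketch of the kind of C++ kernel such a flow accepts. The pragmas follow Vivado HLS conventions, but exact names and options vary by tool version, so treat the details as assumptions for illustration rather than a recipe.

    // A minimal vector-add kernel in the C++-with-pragmas style that
    // HLS-based flows such as SDAccel accept. Pragma names and options are
    // illustrative, not authoritative.
    extern "C" void vadd(const int *a, const int *b, int *c, int n) {
    #pragma HLS INTERFACE m_axi     port=a offset=slave bundle=gmem
    #pragma HLS INTERFACE m_axi     port=b offset=slave bundle=gmem
    #pragma HLS INTERFACE m_axi     port=c offset=slave bundle=gmem
    #pragma HLS INTERFACE s_axilite port=n bundle=control
    #pragma HLS INTERFACE s_axilite port=return bundle=control

        for (int i = 0; i < n; ++i) {
    #pragma HLS PIPELINE II=1
            c[i] = a[i] + b[i];  // one result per clock once the pipeline fills
        }
    }

The point is less the specific pragmas than the shape of the experience: the source looks like ordinary C++, and the hardware-specific tuning is layered on top rather than baked into the algorithm.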

At the same time, the company wanted to preserve the hard-earned respect of hardware engineers by keeping the Vivado brand and flow focused on their needs. So, in marketing terms, Vivado is for hardware engineers, and anything starting with “SD” is for software engineers. Yes, under the hood, the “SD” flows share the important underlying components – synthesis, place and route, HLS, simulation, etc. But, the interaction with the user has a decidedly software-centric accent.

Because the company now makes devices like Zynq – which combine sophisticated FPGA fabric and peripherals with a complex, high-performance, ARM-based conventional processing system – it must support both a hardware and a software development flow for the same device. The easy approach – conventional FPGA tools for the hardware portion and a traditional software development IDE for the software part – would fail to take full advantage of the unique power and capabilities of these integrated devices.

In order to truly transcend the hardware/software partitioning problem, an environment was required that was hardware/software agnostic – where algorithms could be described in the native tongue of the systems engineer, and the final implementation of each portion (whether hardware or software) could be left for the optimization phase of the design process. This is what Xilinx is working to achieve with the SDx suite, and it’s more than just a marketing ploy. There is substantial tool and IP development behind the company’s software-centric SDx ecosystem.

One place in which the software challenge is more demanding than the hardware one is in the turnaround/iteration time expectation. Hardware engineers are accustomed to changing a few lines of VHDL and then facing hours or even days of waiting for the tools to complete before the new version can be evaluated in working hardware. Software engineers start to break out into a cold sweat if their code-to-evaluation time is more than a minute or two. They have been known to break out into fits of rage if the process takes half an hour or more. If the hour hand on the clock passed more than one tick, they’d probably fall into a full-on catatonic depression.

Xilinx’s answer to this has two clever components. First, their code-to-hardware strategy is based around their high-level synthesis (HLS) tool. While HLS is “compiling” your code (shhh, don’t tell the software folks, but it’s actually performing complex behavioral synthesis – doing resource allocation, scheduling, and construction of datapath, controller, and memory structures), it also pops out a handy-dandy software-executable (but hardware-cycle accurate) model that you can use immediately for functional testing, debug, and performance evaluation. This gives software engineers something bright and shiny to play with – distracting them from the fact that logic synthesis and place-and-route are toiling tirelessly in the background, preparing their code snippet for actual deployment in FPGA fabric. 
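
To make the "bright and shiny" point concrete: because a kernel like the vadd sketch above is plain C++, it can be compiled and run as ordinary software the moment it is written. A minimal functional test might look like the sketch below. The tool-generated model layers cycle accuracy on top of this, but the developer experience is the same: run, check, iterate in seconds.

    #include <cstdio>
    #include <vector>

    // Declaration of the kernel sketched earlier; in a pure-software build
    // it is just an ordinary function call.
    extern "C" void vadd(const int *a, const int *b, int *c, int n);

    int main() {
        const int n = 1024;
        std::vector<int> a(n), b(n), c(n, 0);
        for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2 * i; }

        vadd(a.data(), b.data(), c.data(), n);   // runs as plain software

        int errors = 0;
        for (int i = 0; i < n; ++i) {
            if (c[i] != 3 * i) ++errors;         // check against the expected sum
        }
        std::printf("%s (%d errors)\n", errors ? "FAIL" : "PASS", errors);
        return errors ? 1 : 0;
    }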

The second component of the strategy is run-time related. The problem is that FPGAs are designed to be cold-started, configured, and then run for an indeterminate amount of time on that configuration. Processors, on the other hand, are ready to tackle constantly changing code bases without missing a beat – and certainly without requiring a full-on re-start. To keep system continuity while reconfiguring the FPGA portion of an active algorithm, Xilinx takes advantage of partial reconfiguration of the FPGA, so the computing system doesn’t go crazy when one of its most important chips has to go offline unexpectedly for housekeeping. 
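
From the software side, the spirit of the operation is simply streaming a new partial bitstream into the device while the rest of the system keeps running. The sketch below illustrates the idea for a Linux-on-Zynq style setup; the device node and sysfs attribute names are placeholders that vary by kernel and tool version, so consider every path here an assumption rather than a documented interface.

    #include <cstdio>
    #include <fstream>

    // Illustrative only: pushing a partial bitstream to the programmable
    // logic from Linux on a Zynq-class device. All paths are placeholders.
    bool load_partial_bitstream(const char *bitstream_path) {
        // Tell the configuration driver that the next bitstream is partial
        // (attribute name and location are assumptions for this sketch).
        std::ofstream flag("/sys/class/xdevcfg/xdevcfg/device/is_partial_bitstream");
        if (!flag) return false;
        flag << 1;
        flag.close();

        // Stream the partial bitstream into the configuration port.
        std::ifstream bit(bitstream_path, std::ios::binary);
        std::ofstream cfg("/dev/xdevcfg", std::ios::binary);
        if (!bit || !cfg) return false;
        cfg << bit.rdbuf();
        return static_cast<bool>(cfg);
    }

    int main(int argc, char **argv) {
        if (argc < 2) { std::fprintf(stderr, "usage: %s partial.bit\n", argv[0]); return 2; }
        return load_partial_bitstream(argv[1]) ? 0 : 1;
    }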

Xilinx says that more SDx variants are coming, and it’s clear that SDx is part of a bigger strategy to engage a much wider audience than programmable logic devices have historically supported. If the strategy is successful, we’ll start seeing FPGAs (and FPGA-like devices) in a host of applications we would not have contemplated before. In most of those domains, the “system engineer” is much more likely to have software rather than hardware skills and expertise, as the majority of the application will be implemented in software. We are still in the earliest phases of this trend at an industry level, and the outcome certainly is not yet assured. It will be interesting to watch.

One thought on “Xilinx Divides the World”

  1. “In order to truly transcend the hardware/software partitioning problem, an environment was required that was hardware/software agnostic – …” Agreed, Kevin, and what Xilinx offers is a solution for two classes of problems (and hats off to them). They have also presented other solutions that exploit the Vivado tools suite, in their All Programmable Abstractions program. While we are not part of that program, the startup Space Codesign offers an ESL HW/SW co-design solution that targets multimedia and aerospace applications, also making use of the Xilinx Vivado infrastructure to get down to the chip. The fundamental perspective is to look at the application and decompose it into functions before even thinking about what will eventually be implemented as hardware and what as software. We can afford to do that now.
