
Acceleration as a Service

Accelize Attacks Barriers

Let’s say you have a huge batch of data that needs to be crunched. Maybe it needs the special help of some new neural network algorithm running on a massive server cluster, accelerated with a pool of FPGA accelerators. We’ll call you “Lisa.” But, Lisa, you don’t have a giant server farm with FPGA-based accelerators. You also don’t have the specialized software required to crunch your data, let alone the super-specialized version that can take advantage of FPGA-based acceleration. You’re just a team of specialized experts and several million dollars worth of exotic hardware away from solving your problem. Poor Lisa.

Now let’s say you’re a provider of cloud-based computing services. We’ll call you “AWS” (but you could be any one of the growing number of companies leasing cloud computing capabilities). You could solve Lisa’s hardware problem. You’ve got acres of the latest servers, and shiny-new racks of FPGA-based accelerators with the latest, snazziest, fastest FPGAs, all lined up and liquid cooled – just waiting to eat Lisa’s data for breakfast, at a price far more realistic than what Lisa would pay for much less capable hardware of her own. Lisa and thousands like her could really help generate revenue on that giant server/FPGA investment you’ve made. Unfortunately, there’s that super-specialized software problem. You don’t have it, so Lisa can’t lease your servers and accelerators. Poor AWS.

Next, let’s assume that you’re a data scientist who is an expert at developing neural network applications. We’ll call you “Norm.” You know how to create an application to solve Lisa’s problem. You know how to select the best training data and how to optimize and tune the coefficients so that Lisa’s data would flow beautifully through the machine, giving exactly the results she needs. You could probably even make a business of your application as software-as-a-service running on AWS servers. Unfortunately, you don’t know anything about FPGA design, and especially not about designing FPGA-based accelerators. Your software-only version of the application would run a couple of orders of magnitude too slowly to solve the problem. Poor Norm.

Moving right along, let’s say you’re an expert FPGA designer. We’ll call you “Franz.” You’re a black-belt at RTL, high-level design, and architecture optimization. You could design the accelerators Norm needs in your sleep. You could maybe even make a business selling those accelerators to applications experts like Norm. Or even better – offering them as “hardware as a service” for anybody to use on AWS’s servers. Unfortunately, you need a bunch of IP blocks to get the job done, or else you’ll have to build the whole application from scratch, which wouldn’t be practical. The IP blocks you need aren’t licensed in a way that would let you re-use them and sub-license them on a pay-per-use basis in your accelerators, and you can’t plunk down six-figure IP licensing fees just for this development project. Poor Franz.

Finally, let’s assume you are “Inez,” with tons of experience in low-level RTL design of highly-optimized IP blocks. You’ve tried to make a business with your IP, but when you developed FPGA IP intending to sell it, the FPGA companies came in and offered similar IP for free. You’ve been somewhat more successful licensing your IP to companies using it for major hardware development projects in ASICs, ASSPs, and FPGAs, but your licensing scheme has no provisions for people like Franz to use your IP to develop an accelerator, which would then be licensed to Norm, who would then license the whole thing to Lisa on a software-as-a-service basis running on AWS’s FPGA cloud. Your IP would have to somehow allow itself to be used on all of those random FPGAs on AWS’s servers, which doesn’t jibe at all with your IP protection scheme. Poor Inez.

Accelize is a French company – a spinoff of a twenty-year-old IP development company called “PLDA.” Accelize has some FPGA design tools to sell, and those tools are specifically aimed at application developers like Franz (and maybe even Norm) who want to create specialized accelerators to run on FPGAs (or clusters of FPGAs). Their tools help to stitch together the necessary IP and take a high-level approach that simplifies and speeds up the development of accelerators. Unfortunately, the whole ecosystem for acceleration-as-a-service doesn’t exist. The entire stack above, from Inez to Franz to Norm to AWS to Lisa, would all be thrilled if there were a secure, standard way to make this happen – and, of course, Accelize would be thrilled about that too, because there would be a solid market for their tools in this scenario. But this ecosystem does not yet exist. Poor Accelize.

But Accelize is trying to do something about it. The company has rolled out an entire ecosystem roadmap for acceleration-as-a-service, and they are working to develop and deploy the necessary components and to establish the required agreements to make it all happen. The way Accelize sees it, there are three major challenges: the FPGA programming problem, the IP procurement problem, and the business, economic, and legal model incompatibility. They are attacking each one with a comprehensive and innovative solution that should please everyone from “Inez” to “Lisa.”

The way Accelize sees it, “IP providers” (companies like their old buddies PLDA) are creating high-value IP such as video transcoders, scalers, HDR modules, convolutional neural network processors, and so forth. Those would be very useful to “accelerator developers” who want to do things like image classification in video streams. But, since that IP is currently licensed for use in electronic system designs, it’s not practical (or legal) to incorporate it into FPGA accelerators running in cloud-computing situations.

What Accelize proposes is a three-pronged solution. First, “QuickAlliance” is an ecosystem of accelerator developers and IP providers who agree to participate in the acceleration-as-a-service scheme. Second, “QuickPlay” is an FPGA development framework/tool set that facilitates the creation and deployment of FPGA accelerators based on IP from participating suppliers. Finally, “QuickStore” is an application marketplace that allows end users to license applications developed in this ecosystem on cloud-service-provider servers. Doing all this requires the participation of IP providers, accelerator developers, and cloud compute providers.

Accelize has come up with a secure pay-per-use licensing scheme that extends from the application layer all the way back through the accelerators and IP that make it happen. So, when Lisa runs her data through her application on AWS, the meter runs for everyone in the stack, and each stakeholder gets compensated according to their particular licensing terms. It’s a clever and innovative solution that could create an entire profitable universe for IP developers, accelerator developers, application experts, and cloud computing providers. Of course, Accelize would be in there taking a cut somewhere (kinda like a credit card company, perhaps?) and their design tools would be integral in making the appropriate licensing and use connections in the actual application.
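To make the metering idea concrete, here is a minimal sketch of how per-use fees might accrue for every stakeholder in the stack when a job runs. Everything here is hypothetical – the stakeholder names come from the article, but the `Stakeholder` class, the rates, and the metering function are invented for illustration and are not Accelize's actual API or pricing.

```python
# Hypothetical sketch of pay-per-use metering across the acceleration stack.
# Rates and class names are illustrative, not Accelize's actual scheme.

from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    rate_per_unit: float  # licensing fee accrued per unit of data processed

def meter_run(stack, units_processed):
    """Accrue pay-per-use fees for every stakeholder when a job runs."""
    return {s.name: round(s.rate_per_unit * units_processed, 2) for s in stack}

# The stack from the article: IP provider -> accelerator developer ->
# application developer -> cloud provider (all rates are made up).
stack = [
    Stakeholder("Inez (IP provider)", 0.0010),
    Stakeholder("Franz (accelerator developer)", 0.0020),
    Stakeholder("Norm (application developer)", 0.0050),
    Stakeholder("AWS (cloud provider)", 0.0100),
]

fees = meter_run(stack, units_processed=10_000)
# Every stakeholder is compensated per use, e.g.
# fees["Inez (IP provider)"] == 10.0
```

The point of the sketch is the structure, not the numbers: one metered event fans out into separate accruals governed by each party's own licensing terms.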

Accelize has also worked to make this system platform independent, so an application deployed on one cloud provider who happens to use Xilinx FPGAs should work on a different cloud provider whose accelerators are Intel FPGAs, for example. That portability and platform independence should enable application developers to sell their wares and services more broadly without having to do complex ports to different compute architectures.
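One simple way to picture that portability is a lookup that maps the same accelerator function to a vendor-specific FPGA image, so the application code itself never changes across providers. This is a speculative sketch – the function names, image filenames, and lookup scheme are all invented for illustration; the article does not describe Accelize's actual mechanism.

```python
# Hypothetical sketch of platform-independent deployment: one accelerator
# function, multiple vendor-specific builds. All names are made up.

BITSTREAMS = {
    ("video_transcoder", "xilinx"): "transcoder_xcvu9p.bit",
    ("video_transcoder", "intel"): "transcoder_stratix10.sof",
}

def deploy(function, fpga_vendor):
    """Pick the pre-built accelerator image for the target FPGA family."""
    try:
        return BITSTREAMS[(function, fpga_vendor)]
    except KeyError:
        raise ValueError(f"No build of {function} for {fpga_vendor} FPGAs")

# The same application-level call works on either provider's hardware:
# deploy("video_transcoder", "xilinx") -> "transcoder_xcvu9p.bit"
# deploy("video_transcoder", "intel")  -> "transcoder_stratix10.sof"
```

The design point is that the vendor difference is absorbed below the application layer, which is what lets developers sell their wares without per-architecture ports.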

Accelize’s vision is certainly ambitious, and there are many obstacles to overcome before it is up and working on a broad scale. But the micro-licensing scheme and the idea of acceleration as a service should be extremely attractive in the current climate – for every stakeholder in the chain. It will be interesting to watch what happens.
