Roofline enables agile adaptation to new edge AI models and hardware

Aachen, Germany, 6 August 2024 – With the field of artificial intelligence evolving at lightning speed, the agility to adapt to emerging models and disruptive hardware solutions is a significant competitive advantage. Traditional edge AI deployment methods, mainly based on TensorFlow Lite, cannot keep up with this pace: low adaptability, limited performance, and a painful user experience make them a barrier to edge AI adoption.

RooflineAI GmbH, a spin-off from RWTH Aachen University, revolutionizes this process with a software development kit (SDK) that offers unmatched flexibility, top performance, and ease of use. Models can be imported from any AI framework, such as TensorFlow, PyTorch, or ONNX. The one-stop AI backend enables deployment across diverse hardware – CPUs, MPUs, MCUs, GPUs, and dedicated AI hardware accelerators – all with just a single line of Python.
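The release does not show Roofline's actual API, but the idea of a single deployment call that dispatches to many hardware targets can be sketched with a self-contained mock. Every name below (`deploy`, the target strings, the return value) is a hypothetical illustration, not Roofline's SDK:

```python
# Illustrative mock of a unified "one line of Python" deployment interface.
# All names here are hypothetical; this is NOT Roofline's real API.

def deploy(model: str, target: str) -> str:
    """Pretend to compile `model` for the named hardware target."""
    supported = {"cpu", "mpu", "mcu", "gpu", "npu"}
    if target not in supported:
        raise ValueError(f"unknown target: {target}")
    # A real retargetable compiler would lower the model through its IRs
    # and emit target-specific code; here we just label the artifact.
    return f"{model}-compiled-for-{target}"

artifact = deploy("mobilenet.onnx", "mcu")  # the "single line" in user code
print(artifact)  # mobilenet.onnx-compiled-for-mcu
```

The point of such an interface is that the user-facing call stays identical while the backend swaps in different code generators per target.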

“Our retargetable AI compiler technology builds on the shoulders of proven technologies to create massive synergies with the open-source community and chip vendors. This approach provides the flexible software infrastructure required to overcome technology fragmentation in the edge AI space,” says Roofline CEO Moritz Joseph. “Roofline enables faster time to market for new architectures, improves the efficiency of existing solutions, and brings the user experience to the next level.”

The compiler is key

AI compiler technology has become mission critical for deploying AI models at scale on edge devices. Today's widely used solutions build on a legacy software stack that interprets AI models rather than compiling them, relying in part on handwritten, manually optimized kernels. This limits their applicability to state-of-the-art AI models, such as language models, which are not compatible with the existing technology stack.

AI compilation optimizes model execution at different levels of abstraction – known as “intermediate representations” – that represent specific features of the execution of an AI workload. The compiler translates the AI model through various intermediate representations to a low level that is close to the target hardware.
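The progressive lowering described above can be sketched with a toy example: a high-level operator graph is rewritten as explicit loop nests, which are then mapped to hardware-flavoured pseudo-instructions. Real compiler stacks (MLIR-based ones, for instance) have many more levels and perform optimizations at each; this sketch only shows the shape of the pipeline, and all IR encodings here are invented for illustration:

```python
# Toy illustration of lowering an AI model through intermediate representations.

# High-level IR: an operator graph, one tuple per tensor operation.
high_ir = [("matmul", "x", "w"), ("relu", "tmp0")]

def lower_to_loops(ops):
    """Mid-level IR: rewrite each tensor op as an explicit loop nest (as text)."""
    loops = []
    for op in ops:
        if op[0] == "matmul":
            loops.append("for i: for j: for k: out[i,j] += x[i,k]*w[k,j]")
        elif op[0] == "relu":
            loops.append("for i: out[i] = max(out[i], 0)")
    return loops

def lower_to_target(loops):
    """Low-level IR: map each loop body to a target-specific pseudo-instruction."""
    return [f"VECTORIZED<{loop.split(':')[-1].strip()}>" for loop in loops]

mid_ir = lower_to_loops(high_ir)
low_ir = lower_to_target(mid_ir)
```

Each level exposes different optimization opportunities: operator fusion at the graph level, tiling at the loop level, instruction selection near the hardware.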

Using AI compilation instead of model interpretation allows users to adapt to the constant stream of new AI models. In addition, novel heterogeneous hardware platforms can be targeted as the compiler generates code for each component.
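The difference between the two approaches can be shown with a minimal sketch: an interpreter dispatches on every operator at runtime, while a compiler generates specialized code once and reuses it. The two-op "model" below is invented purely for illustration:

```python
# Sketch contrasting model interpretation with ahead-of-time compilation.
# The "model" is a tiny op list computing y = (x + 3) * 2.

MODEL = [("add", 3), ("mul", 2)]

def interpret(model, x):
    """Interpreter: dispatch on each op at runtime, per inference call."""
    for op, arg in model:
        x = x + arg if op == "add" else x * arg
    return x

def compile_model(model):
    """Compiler: generate specialized source once, then run it without dispatch."""
    body = "x"
    for op, arg in model:
        sym = "+" if op == "add" else "*"
        body = f"({body} {sym} {arg})"
    namespace = {}
    exec(f"def run(x):\n    return {body}", namespace)
    return namespace["run"]

run = compile_model(MODEL)
assert interpret(MODEL, 5) == run(5) == 16
```

Supporting a new operator means adding one lowering rule to the compiler, rather than hand-writing an optimized kernel for every target, which is why compilation adapts more easily to new models and heterogeneous hardware.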

The AI compiler space is driven by open-source technologies that allow the exploitation of synergies beyond the scope of individual players in the market. Roofline is active in the open-source community and committed to providing code that drives innovation from cutting-edge AI models to novel chips.

Reach out to Roofline for a product demo via joseph@roofline.ai. Special license conditions are available for academic research and teaching purposes.

Further information:

Communications contact – Madeleine Gray: communication@hipeac.net

Product demos – Moritz Joseph: joseph@roofline.ai

Roofline website

Roofline LinkedIn

