sureCore announces ultra-low power memory IP for AI applications

Sheffield, England, 7 March 2024 – CES 2024 showed how AI is becoming pervasive across a huge range of products, making them smarter, safer and more feature-rich. Running these AI workloads requires significant compute power, which in turn demands considerable amounts of embedded memory integrated on chip, as close to the compute units as possible to reduce latency. Delivering the inferencing performance that users now expect requires massively parallel processing arrays, which means not only increased power consumption but also increasingly challenging thermal loads, placing demands on packaging and cooling. sureCore has revisited its PowerMiser IP and further optimised it to drive down dynamic power while exploiting the power efficiencies of FinFET technology. The result, named “PowerMiser AI”, is a memory technology that minimises thermal impact whilst delivering the demanding performance profile that AI requires.

Paul Wells, sureCore’s CEO, explained, “Our typical customer has been using our ultra-low power SRAM IP in battery-powered applications to provide longer operational life between recharges. The surge in AI augmentation has opened up exciting new areas for our low power memory solutions that are not constrained by battery life and can be mains powered, or are even in the automotive space. Power consumption is still a critical factor for these applications, but the constraining factor is increasingly becoming heat dissipation and potential thermal damage. To keep product form factors under control and obviate the need for forced cooling to prevent overheating, new low power solutions are needed. Our recent announcements about ultra-low power memory IP for use in cryostats in the quantum computing arena, where heat generation by chips has to be minimised, have resulted in enquiries from companies who also need to keep AI chips operating within temperature boundaries, albeit at the other end of the scale.

“Standard off-the-shelf SRAM IP has been optimised for area or speed, but not for power. Our technology is extremely power efficient and therefore generates less heat, making it the ideal solution for the next generation of AI-enabled chips. This includes everything from Edge devices to in-car applications, and even data centres, all of which must minimise thermal overheads. This will become increasingly important as products rely more on AI at the Edge and less on cloud-based solutions.”

Embedded SRAM can be a significant power drain during workloads such as pattern matching. On a large AI-enabled chip, memory can account for as much as 50% of the power usage, making it a major contributor to both power consumption and thermal load. The company estimates that PowerMiser AI would reduce dynamic power consumption by up to 50%, delivering compelling cuts in thermal load; heat sinks or other cooling systems could then be dramatically reduced or eliminated, increasing overall system reliability.
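To put the two headline figures together: if memory draws roughly half of a chip's power budget and the memory's dynamic power is cut by up to half, the chip-level saving is approximately the product of the two fractions. The sketch below is a back-of-the-envelope illustration using only the percentages quoted above; the function name and the assumption that the rest of the chip's power is unchanged are ours, not sureCore's.

```python
def chip_power_saving(memory_share: float, memory_power_cut: float) -> float:
    """Fractional reduction in total chip power when only the memory
    portion of the budget is reduced, all else held constant."""
    return memory_share * memory_power_cut

# Using the figures quoted in the release: memory is up to ~50% of
# chip power, and PowerMiser AI cuts its dynamic power by up to ~50%.
saving = chip_power_saving(memory_share=0.5, memory_power_cut=0.5)
print(f"Chip-level power reduction: {saving:.0%}")  # → 25%
```

In other words, under these assumptions the quoted memory savings translate to roughly a quarter of total chip power, which is the scale of reduction that lets designers shrink or drop heat sinks.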
