
Arteris Selected by Rain AI for Use in the Next Generation of AI

Optimizing on-chip mesh connectivity with Arteris’ FlexNoC 5 physically aware network-on-chip IP enables Rain AI to realize faster data transfers at ultra-low power, achieving record performance for Generative AI and Edge AI computing at scale.

CAMPBELL, Calif. – January 30, 2024 – Arteris, Inc. (Nasdaq: AIP), a leading provider of system IP that accelerates system-on-chip (SoC) creation, today announced that Rain AI, an AI company building the world’s most cost- and energy-efficient hardware for AI, has selected Arteris’ FlexNoC 5 physically aware network-on-chip (NoC) IP. The company will use the Arteris interconnect IP in its AI accelerator family. The on-chip connectivity enabled by Arteris’ IP supports the design of an advanced mesh network topology, delivering the performance needed for Rain AI’s digital in-memory compute for AI workloads.

The core of Rain AI’s endeavor lies in co-designing fundamental innovations across software, hardware, and algorithms to both speed up processing and lower power consumption. The mesh network-on-chip topology for on-chip connectivity is a cutting-edge approach to solving the technical challenge of maintaining high performance while interconnecting many AI processing elements. Arteris’ FlexNoC 5, connecting a mesh topology for high-density AI computing, will enable Rain AI to achieve optimal performance at a lower cost of operation.
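To make the mesh idea concrete, the short Python sketch below models a small 2D mesh of routers using XY (dimension-order) routing, a common way of steering traffic across a mesh NoC. It is a conceptual illustration only; the Router class, xy_route function, and mesh dimensions are assumptions made for the example and do not represent how Arteris’ FlexNoC 5 or Rain AI’s silicon is implemented.

```python
# Minimal conceptual sketch of a 2D mesh NoC with XY (dimension-order) routing.
# Purely illustrative; it does not reflect Arteris' FlexNoC 5 internals.

from dataclasses import dataclass

@dataclass(frozen=True)
class Router:
    x: int  # column index in the mesh
    y: int  # row index in the mesh

def xy_route(src: Router, dst: Router) -> list[Router]:
    """Return the hop-by-hop path using XY routing: travel along X first, then Y."""
    path = [src]
    x, y = src.x, src.y
    while x != dst.x:                 # move horizontally toward the destination column
        x += 1 if dst.x > x else -1
        path.append(Router(x, y))
    while y != dst.y:                 # then move vertically toward the destination row
        y += 1 if dst.y > y else -1
        path.append(Router(x, y))
    return path

if __name__ == "__main__":
    # In a 4x4 mesh, a transfer from router (0, 0) to (3, 2) takes 5 hops,
    # each over a short, local link.
    hops = xy_route(Router(0, 0), Router(3, 2))
    print(f"{len(hops) - 1} hops:", [(r.x, r.y) for r in hops])
```

Because every hop in a mesh traverses only a short, local link, hop count (and with it latency and wire energy) grows with the distance between source and destination rather than with the size of the whole chip, which is the property that makes mesh topologies attractive for large arrays of AI processing elements.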

“The AI problem is an energy problem. Creating a future with abundant and scalable artificial intelligence is critical for the AI revolution,” said William Passo, CEO of Rain AI. “The right NoC is critical for AI computing and Arteris FlexNoC 5 was an easy choice given its unmatched product performance including ultra-low power, lowest latency, and highest bandwidth, along with deep expert support and proven track record in reducing time to market.”

Rain AI is on a mission to build the compute platform for the future of AI, including training and inference on the same platform to enable on-device AI at scale. Utilizing the versatility of the RISC-V instruction set architecture (ISA) and the proven high-performance NoC IP from Arteris, Rain AI expects to deliver products that outperform GPUs and are radically more cost-effective.

“We are very excited to support Rain AI in their vision to transform AI compute through their novel approach to machine learning,” stated K. Charles Janac, president and CEO of Arteris. “FlexNoC 5’s ability to deliver high performance, flexibility and scalability was a great fit for Rain AI’s approach to redefining compute for Generative AI and on-device AI applications.”

Arteris remains steadfast in its commitment to delivering state-of-the-art system IP products, empowering innovators like Rain AI to achieve groundbreaking advancements in semiconductor technology. Learn more about FlexNoC 5 and solutions for AI at arteris.com. 

About Arteris

Arteris is a leading provider of system IP for the acceleration of system-on-chip (SoC) development across today’s electronic systems. Arteris network-on-chip (NoC) interconnect IP and SoC integration automation technology enable higher product performance with lower power consumption and faster time to market, delivering better SoC economics so its customers can focus on dreaming up what comes next. Learn more at arteris.com.

About Rain AI

Rain AI is creating a future with abundant and scalable artificial intelligence. The company is building the world’s most cost- and energy-efficient hardware for AI. Its products achieve an order-of-magnitude improvement over the status quo by co-designing every layer of the AI stack.

Rain AI is currently a Series A-stage startup backed by world leaders in AI, including Sam Altman (OpenAI), Y Combinator, Daniel Gross, Jaan Tallinn, Founders X Fund, Airbus Ventures, and Grep VC. Learn more at rain.ai.
