
IEEE Study Leverages Silicon Photonics for Scalable and Sustainable AI Hardware

With the swift progress in artificial intelligence (AI) and machine learning (ML), the hardware enabling these technologies must evolve to accommodate growing workloads and energy demands. A recent study introduced AI accelerators—hardware customized for AI tasks—built on silicon photonic integrated circuits (PICs). Powered by III-V compound semiconductors, these silicon PICs consume less energy, presenting a promising direction for a more efficient and sustainable AI infrastructure to support future computing advancements.
The emergence of AI has profoundly transformed numerous industries. Driven by deep learning and Big Data, AI requires significant processing power to train its models. The existing AI infrastructure relies on graphics processing units (GPUs), but the substantial processing demands and energy costs of their operation remain key challenges. A more efficient and sustainable AI infrastructure would pave the way for advancing AI development in the future.
A recent study published in the IEEE Journal of Selected Topics in Quantum Electronics demonstrates a novel AI acceleration platform based on photonic integrated circuits (PICs), which offer superior scalability and energy efficiency compared with conventional GPU-based architectures. The study, led by Dr. Bassem Tossoun, a Senior Research Scientist at Hewlett Packard Labs, shows how PICs leveraging III-V compound semiconductors can efficiently execute AI workloads. Unlike traditional AI hardware, which relies on electronic deep neural networks (DNNs), photonic AI accelerators use optical neural networks (ONNs), which operate at the speed of light with minimal energy loss.
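The intuition behind an ONN can be sketched in a few lines of code. In the common Mach-Zehnder-mesh picture of photonic computing, a weight matrix is programmed into interferometer phase settings, and the matrix-vector product happens passively as light traverses the mesh. The toy simulation below is purely illustrative — the `mzi` and `onn_layer` functions are hypothetical stand-ins, not the architecture or API from the study:

```python
import math

# Toy illustration (not the paper's design): an optical neural network
# encodes its weights in interferometer phase settings, so a matrix-vector
# product happens as light propagates, not via clocked multiply-adds.

def mzi(theta, a, b):
    """One Mach-Zehnder-style unit: mix two optical amplitudes by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * a - s * b, s * a + c * b)

def onn_layer(thetas, x):
    """A cascade of MZIs acting on adjacent channel pairs: one linear layer."""
    x = list(x)
    for i, theta in enumerate(thetas):
        j = i % (len(x) - 1)          # which adjacent pair this MZI couples
        x[j], x[j + 1] = mzi(theta, x[j], x[j + 1])
    return x

# Light carrying the input vector passes through the mesh once; the output
# amplitudes are the transformed vector.
out = onn_layer([math.pi / 4, math.pi / 3], [1.0, 0.0, 0.0])
```

Because each mixing step is lossless (unitary), the output optical power equals the input power: the linear algebra itself costs no multiply-accumulate energy, only the energy to generate, modulate, and detect the light.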
“While silicon photonics are easy to manufacture, they are difficult to scale for complex integrated circuits. Our device platform can be used as the building blocks for photonic accelerators with far greater energy efficiency and scalability than the current state of the art,” explains Dr. Tossoun.
The team used a heterogeneous integration approach to fabricate the hardware. This included the use of silicon photonics along with III-V compound semiconductors that functionally integrate lasers and optical amplifiers to reduce optical losses and improve scalability. III-V semiconductors facilitate the creation of PICs with greater density and complexity. PICs utilizing these semiconductors can run all operations required for supporting neural networks, making them prime candidates for next-generation AI accelerator hardware.
The fabrication started with silicon-on-insulator (SOI) wafers that have a 400 nm-thick silicon layer. Lithography and dry etching were followed by doping for metal oxide semiconductor capacitor (MOSCAP) devices and avalanche photodiodes (APDs). Next, selective growth of silicon and germanium was performed to form absorption, charge, and multiplication layers of the APD. III-V compound semiconductors (such as InP or GaAs) were then integrated onto the silicon platform using die-to-wafer bonding. A thin gate oxide layer (Al₂O₃ or HfO₂) was added to improve device efficiency, and finally a thick dielectric layer was deposited for encapsulation and thermal stability.
 
“The heterogeneous III-V-on-SOI platform provides all essential components required to develop photonic and optoelectronic computing architectures for AI/ML acceleration. This is particularly relevant for analog ML photonic accelerators, which use continuous analog values for data representation,” Dr. Tossoun notes.
This unique photonic platform achieves wafer-scale integration of all of the devices required to build an optical neural network on a single photonic chip, including active devices such as on-chip lasers and amplifiers, high-speed photodetectors, energy-efficient modulators, and non-volatile phase shifters. This enables tensorized optical neural network (TONN)-based accelerators with a footprint-energy efficiency 2.9 × 10² times greater than other photonic platforms and 1.4 × 10² times greater than the most advanced digital electronics.
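The footprint-energy efficiency figures above fold chip area and energy per operation into one figure of merit. One common way to define such a metric is throughput per unit area per unit energy; the sketch below illustrates the idea. The function name and all numbers are placeholder assumptions for illustration, not values from the paper:

```python
# Hypothetical sketch of a footprint-energy efficiency figure of merit:
# throughput divided by (chip area x energy per operation).
# All inputs below are illustrative placeholders, not data from the study.

def footprint_energy_efficiency(tops, area_mm2, pj_per_op):
    """Throughput (TOPS) per mm^2 per pJ/op -- higher is better."""
    return tops / (area_mm2 * pj_per_op)

# Placeholder comparison: a compact, low-energy photonic design vs. a
# larger, more energy-hungry digital design at the same throughput.
photonic = footprint_energy_efficiency(tops=100.0, area_mm2=50.0, pj_per_op=0.01)
digital = footprint_energy_efficiency(tops=100.0, area_mm2=400.0, pj_per_op=1.0)

advantage = photonic / digital  # relative footprint-energy efficiency
```

With these placeholder inputs the photonic design comes out 800× ahead, showing how modest gains on each axis (area, energy) multiply in a combined figure of merit.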
 
This technology represents a breakthrough for AI/ML acceleration: it reduces energy costs, improves computational efficiency, and enables future AI-driven applications across many fields. Going forward, it will allow datacenters to accommodate more AI workloads and help tackle hard optimization problems.
By addressing these computational and energy challenges, the platform paves the way for robust and sustainable AI accelerator hardware.
***
 
Reference
Authors: Bassem Tossoun, Xian Xiao, Stanley Cheung, Yuan Yuan, Yiwei Peng, Sudharsanan Srinivasan, George Giamougiannis, Zhihong Huang, Prerana Singaraju, Yanir London, Matěj Hejda, Sri Priya Sundararajan, Yingtao Hu, Zheng Gong, Jongseo Baek, Antoine Descos, Morten Kapusta, Fabian Böhm, Thomas Van Vaerenbergh, Marco Fiorentino, Geza Kurczveil, Di Liang, Raymond G. Beausoleil
Title of original paper: Large-Scale Integrated Photonic Device Platform for Energy-Efficient AI/ML Accelerators
Journal: IEEE Journal of Selected Topics in Quantum Electronics
                              
