- Fujitsu launches AI computing broker middleware to improve GPU utilization and address global shortages, achieving up to a 2.25x increase in computational efficiency in trials with multiple companies
- The middleware dynamically allocates GPU resources on a per-GPU basis, optimizing resource allocation and memory management across various platforms and AI applications
- The technology’s memory management enables the simultaneous handling of up to 150GB of AI processing—approximately five times the physical GPU memory capacity
Kawasaki, Japan, October 22, 2024
Fujitsu today announced the launch of an AI computing broker middleware technology designed to enhance GPU computational efficiency in AI processing and address the global GPU shortage. The new technology integrates Fujitsu’s proprietary adaptive GPU allocator technology, which dynamically allocates GPUs for real-time high-efficiency processing, with various AI processing optimization techniques.
Following successful pilot trials, TRADOM Inc. (1) will begin utilizing solutions based on the AI computing broker technology in October 2024. Additionally, SAKURA internet Inc. (2) has commenced a feasibility study on the AI computing broker technology for its data center operations.
Starting in May 2024, Fujitsu has also been conducting trials of this newly developed technology with AWL, Inc., (3) Xtreme-D Inc., (4) and Morgenrot Inc., (5) and has demonstrated significant improvements in these companies’ operations. The trials confirmed up to a 2.25x increase in computational efficiency for various AI processes and a substantial increase in the number of concurrently handled AI processes across diverse cloud environments and servers.
The newly developed technology will be available to customers in Japan starting October 22, 2024, and will also be offered to users globally.
Fujitsu will continue to provide its AI computing broker technology to end users, including AI service providers seeking to reduce GPU costs by enhancing computational efficiency, and to cloud service providers aiming to maximize GPU utilization. By addressing the challenges of GPU shortages and power consumption driven by the increasing global demand for AI, Fujitsu aims to contribute to enhanced business productivity and creativity for its customers.
Addressing the growing energy consumption of AI with adaptive GPU allocation
Driven by the rapidly increasing global demand for AI technology (including generative AI), the need for GPUs, which are better suited to AI processing than CPUs, has been increasing dramatically. The global market for generative AI is projected to grow approximately 20-fold from 2023 to 2030 (6), and correspondingly, GPU demand is expected to increase at a similar rate. However, the rising power consumption in data centers to meet this GPU demand presents a significant challenge. It is estimated that data centers will consume 10% of the world’s electricity by 2030 (7).
To address this global societal challenge, Fujitsu developed an adaptive GPU allocator technology in November 2023. The technology optimizes the use of CPUs and GPUs by allocating resources in real time, giving priority to processes with high execution efficiency even while a GPU is already executing a program. Fujitsu has been conducting verification trials of this allocator technology across various platforms.
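As a rough illustration of the allocation principle described above, a broker of this kind can be pictured as a priority queue that hands the GPU to whichever registered process currently shows the highest measured execution efficiency, leaving the rest on the CPU. The sketch below is illustrative only; the names and interfaces are assumptions, not Fujitsu's actual middleware or API.

```python
# Minimal sketch of efficiency-prioritized GPU allocation (illustrative only;
# not Fujitsu's actual API). Each registered process reports a measured GPU
# execution efficiency, and the broker schedules the highest-scoring process.

from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Proc:
    neg_efficiency: float              # negated so heapq pops the most efficient first
    name: str = field(compare=False)

class SimpleGpuBroker:
    """Toy broker: the GPU goes to the runnable process with the highest
    measured execution efficiency; everything else stays on the CPU."""

    def __init__(self) -> None:
        self._queue: list[Proc] = []

    def register(self, name: str, measured_efficiency: float) -> None:
        """Record a process and its observed GPU execution efficiency (0..1)."""
        heapq.heappush(self._queue, Proc(-measured_efficiency, name))

    def schedule_next(self) -> str | None:
        """Return the next process that should get the GPU, if any."""
        if not self._queue:
            return None
        return heapq.heappop(self._queue).name

broker = SimpleGpuBroker()
broker.register("llm_training", measured_efficiency=0.92)
broker.register("data_preprocessing", measured_efficiency=0.15)
print(broker.schedule_next())   # -> "llm_training" is given the GPU first
```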
About the newly developed AI computing broker
The newly developed AI computing broker middleware integrates adaptive GPU allocator technology with AI processing optimization technologies, automatically identifying and optimizing GPU resource allocation for AI processing in multiple programs.
Unlike conventional allocation on a per-job basis, Fujitsu’s AI computing broker dynamically allocates GPU resources on a per-GPU basis, leveraging Fujitsu’s computational optimization expertise to improve availability rates. The GPU memory management capabilities of the technology allow users to run numerous AI processes without worrying about GPU memory usage or physical capacity.
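The per-GPU sharing idea can be sketched as interleaving the steps of several AI jobs on one physical device, rather than granting each job exclusive, per-job ownership until it finishes. The following sketch is an assumption-laden illustration of that general scheme, not Fujitsu's implementation.

```python
# Illustrative sketch of per-GPU (rather than per-job) allocation: several AI
# jobs share one physical GPU by interleaving their work. Job structure and
# names are assumptions for the example, not Fujitsu's middleware.

from collections import deque
from typing import Iterator

Job = Iterator[None]   # a job is modeled as a generator that yields after each GPU step

def run_shared_gpu(jobs: dict[str, Job]) -> None:
    """Round-robin one physical GPU across all jobs until every job completes."""
    ready = deque(jobs.items())
    while ready:
        name, job = ready.popleft()
        try:
            next(job)                    # execute one GPU step of this job
            ready.append((name, job))    # requeue; the GPU is shared, not owned
        except StopIteration:
            print(f"{name} finished")

def training_job(steps: int) -> Job:
    for _ in range(steps):
        # ... launch one training step's GPU kernels here ...
        yield

run_shared_gpu({"model_a_training": training_job(3), "model_b_inference": training_job(2)})
```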
In pre-release trials, Fujitsu’s AI computing broker demonstrated up to a 2.25x improvement in GPU processing performance per unit of time compared to deployments not using the technology. Furthermore, the technology’s memory management enables the simultaneous handling of up to 150GB of AI processing, approximately five times the physical GPU memory capacity.
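One common way to let an aggregate working set exceed physical GPU memory is to park inactive model state in host RAM and swap it onto the device only when needed. The sketch below illustrates that general over-subscription idea in PyTorch; it is a hypothetical illustration, not a description of Fujitsu's memory management.

```python
# Illustrative sketch of GPU memory over-subscription by host offloading
# (not Fujitsu's implementation): keep only the active model on the GPU and
# park the rest in host RAM, so the total working set can exceed GPU memory.

import torch

class HostOffloadPool:
    """Keep only the currently active model on the GPU; park the others in host RAM."""

    def __init__(self) -> None:
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.models: dict[str, torch.nn.Module] = {}
        self.active: str | None = None

    def add(self, name: str, model: torch.nn.Module) -> None:
        self.models[name] = model.cpu()          # parked in host memory

    def activate(self, name: str) -> torch.nn.Module:
        """Swap the requested model onto the device, evicting the previous one."""
        if self.active is not None and self.active != name:
            self.models[self.active].cpu()       # evict prior model to host RAM
        model = self.models[name].to(self.device)
        self.active = name
        return model

# Usage: two models whose combined size could exceed one GPU's memory.
pool = HostOffloadPool()
pool.add("detector", torch.nn.Linear(4096, 4096))
pool.add("classifier", torch.nn.Linear(4096, 1024))
features = torch.randn(1, 4096, device=pool.device)
out = pool.activate("detector")(features)
```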
Future Plans
Moving forward, Fujitsu will expand the scope of its AI computing broker technology application, including implementation across multiple GPUs installed on multiple servers, anticipating use in even larger computing environments. Fujitsu will continue to develop advanced computing technologies to address challenges including GPU and power shortages, contributing to the realization of AI that enhances productivity and creativity as a trusted assistant.
Junichi Kayamoto, Chief Data Science Officer, TRADOM Inc., comments:
“Our company delivers cutting-edge AI-powered solutions for managing foreign exchange risk. The trial of Fujitsu’s AI computing broker technology proved its ability to significantly streamline GPU resource allocation for AI model generation, enabling the development of substantially more accurate models in significantly less time through AI learning process multiplexing. We are committed to leveraging this technology, in continued collaboration with Fujitsu, to proactively expand our solution offerings, driving both TRADOM’s growth and the advancement of the FinTech industry.”
Ken Wakishita, Senior Director, SAKURA internet Inc. / SAKURA Internet Research Center, comments:
“Our trial of Fujitsu’s AI computing broker has proven the technology’s ability to significantly enhance the efficiency of GPU resource allocation within our cloud business, expanding GPU access to a broader customer base. We hope to collaborate with Fujitsu to fully integrate this technology and meet the increasing demand for GPUs.”
Hiroshi Fujimura, R&D General Manager, AWL, Inc., comments:
“We are committed to developing and delivering cutting-edge AI camera solutions that solve various problems and maximize the value of all real-world spaces, particularly retail environments. Meeting the high demands of our clients, optimizing GPU operating costs during parallel AI model training is paramount. Our trial proved Fujitsu’s AI computing broker’s ability to significantly enhance GPU utilization and AI processing efficiency. We are excited to see further advancements in this important technology.”
Naoki Shibata, Founder, CEO, Xtreme-D Inc., comments:
“We congratulate Fujitsu on the launch of the AI computing broker. Xtreme-D provides Raplase, a cloud-based service for AI and HPC clients. A critical challenge for our customers is optimizing price/performance through efficient utilization of costly on-premises and bare-metal cloud GPUs. We are confident that Fujitsu’s AI computing broker will significantly address this challenge, and we are actively collaborating with Fujitsu to integrate this solution into our customer offerings.”
Masamichi Nakamura, COO, and Hisashi Ito, CTO, Morgenrot Inc., comment:
“Our company is revolutionizing the cloud computing landscape with an innovative, decentralized approach leveraging container data centers. We’re building a cutting-edge sharing economy model for computing power, perfectly poised to meet the increasing demand for advanced computational resources, including GPUs. Our recent trial, which explored combining our HPC management solution (M:Arthur) and cloud service (Cloud Bouquet) with Fujitsu’s AI computing broker technology, yielded impressive results. By enabling GPU sharing between multiple jobs, we achieved a remarkable reduction of nearly 10% in overall execution time compared to running jobs sequentially on two GPUs. This parallel processing capability unlocks significant advantages, allowing simultaneous execution of long training sessions for model building and shorter inference/testing tasks, all within constrained resources. We are excited to explore further use of Fujitsu’s AI computing broker as we integrate this transformative technology into our product suite.”