
Blaize Emerges from Stealth to Transform AI Computing

Blaize, formerly known as ThinCI, unveils the first true Graph-Native silicon architecture and software platform built to process neural networks and enable AI applications with unprecedented efficiency

News Highlights

  • Blaize Graph Streaming Processor (GSP) architecture: the first to enable concurrent execution of multiple neural networks and entire workflows on a single system, while supporting a diverse range of heterogeneous, compute-intensive workloads
  • Fully programmable solution brings new levels of flexibility for evolving AI models, workflows, and applications that run efficiently where needed, a breakthrough for dynamic intelligence at the edge
  • Directly addresses technology and economic barriers to AI adoption via streamlined processing that yields 10-100x improvement in systems efficiency, lower latency, lower energy, and reduced size and cost
  • Early access customer engagements since 2018 in automotive, smart vision and enterprise computing segments

EL DORADO HILLS, CA — November 12, 2019 — Blaize™ today emerged from stealth and unveiled a groundbreaking next-generation computing architecture that precisely meets the demands and complexity of new computational workloads found in artificial intelligence (AI) applications. Driven by advances in energy efficiency, flexibility, and usability, Blaize products enable a range of existing and new AI use cases in the automotive, smart vision, and enterprise computing segments, where the company is engaged with early access customers. These AI systems markets are projected to grow rapidly as the disrupting influence of AI transforms entire industries and AI functionality becomes a “must-have” requirement for new products.

 “Blaize was founded on a vision of a better way to compute the workloads of the future by rethinking the fundamental software and processor architecture,” says Dinakar Munagala, Co-founder and CEO, Blaize.  “We see demand from customers across markets for new computing solutions that address the immediate unmet needs for technology built for the emerging age of AI, and solutions that overcome the limitations of power, complexity and cost of legacy computing.”

“Blaize, our important business partner, is highly innovative,” says Yukihide Niimi, CEO of NSITEXE and DENSO Advisory Board member. “DENSO is demonstrating leadership in many areas as the automotive industry undergoes extraordinary technology change. NSITEXE was established to keep pace with that change and to accelerate the development of flexible compute IP solutions like DFP. NSITEXE looks forward to working with Blaize to grow the ecosystem for flexible graph (data flow) compute technology.”

 “I have been watching Blaize for several years and saw early on that their graph-native architecture would be particularly well suited to a wide range of AI and robotics workloads,” says Schuyler Cullen, VP AI & Robotics, Samsung Strategy and Innovation Center. “I have been impressed by their rapid scaling in team, organization, and technology.”

“The proliferation of AI across multiple industries and application areas is dependent upon robust, programmable, efficient, scalable, high-performance hardware that extends AI processing from cloud datacenters through to the end device, server or appliance,” says Aditya Kaul, Research Director, Tractica. “It’s becoming clear that traditional processing architectures will not be enough to meet the demands of this new emerging market, with new techniques like graph-based computing showing promise. Success will be defined by combining new computing approaches with modular hardware and a deployment-oriented software stack, all of which is part of the Blaize value proposition from day one.”

“Blaize’s vision of a native graph streaming processor (GSP) is relatively unique,” noted Karl Freund, Sr. AI analyst at Moor Insights & Strategy. “The GSP is more general purpose than, say, a single-function ASIC for AI, and can consequently create opportunities in many markets, from Automotive to the Edge to the Cloud.”

“The coming out of Blaize and its leading Graph Streaming Processor is extremely exciting,” says David (Dadi) Perlmutter, an angel investor, entrepreneur and former EVP and Chief Product Officer of Intel Corporation. “As an initial investor in Blaize, I recognized early on the great efficiency of one of the first complete solutions designed from scratch and fully optimized for AI and neural network applications. The unprecedented efficiency is great for a wide range of edge applications, particularly the automotive market. I am proud of the team for delivering on the promise.”

Graph-Native Techniques Drive Huge Efficiency Gains

The scope and capabilities of the comprehensive Blaize technology stack are unprecedented. The Blaize GSP architecture and Blaize Picasso™ software development platform deliver breakthroughs in computational efficiency. The solution blends dynamic data flow methods and graph computing models with fully programmable proprietary SoCs. This allows Blaize computing platforms to exploit the native graph structure inherent in neural network workloads all the way through runtime. The massive efficiency multiplier is delivered via a data streaming mechanism in which non-computational data movement is minimized or eliminated. This gives Blaize systems the lowest possible latency and reduces memory requirements and energy demand at the chip, board and system levels.
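To make the streaming mechanism concrete, below is a minimal, illustrative Python sketch. It is not Blaize code and does not use the Blaize SDK; every name in it is hypothetical. It simply models a three-node graph in which each stage consumes small tiles of data from its predecessor as they arrive, so full intermediate buffers never have to be written back to memory.

# Toy model of graph streaming, not Blaize's implementation or API.
# Each node consumes small tiles from its producer as they arrive, so
# large intermediate results are never materialized in memory.

def camera_tiles(num_tiles):
    """Hypothetical source node: yields small image tiles one at a time."""
    for i in range(num_tiles):
        yield [float(i)] * 4  # stand-in for a small pixel tile

def denoise(tiles):
    """Pre-processing node: transforms each tile as it streams through."""
    for tile in tiles:
        yield [0.9 * px for px in tile]

def detect(tiles):
    """Stand-in for a neural-network node: reduces each tile to a score."""
    for tile in tiles:
        yield sum(tile)

# Chaining the generators streams data node-to-node; the full intermediate
# image never exists in memory, which is the latency and bandwidth saving
# the graph-streaming description points to.
pipeline = detect(denoise(camera_tiles(num_tiles=8)))
for score in pipeline:
    print(score)

The point of the sketch is only the dataflow shape: compute follows the graph edges, and only the tiles currently in flight occupy memory.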

Blaize GSP is the first fully programmable processor architecture and software platform built from the ground up to be 100% graph-native. While there are many types of neural networks, all neural networks are graphs. With this inherent graph-native structure, developers can now build multiple neural networks and entire workflows on a single architecture that is applicable to many markets and use cases. End-to-end applications can be built by integrating non-neural-network functions, such as image signal processing, with neural network functions, all represented as graphs that are processed 10-100 times more efficiently than on existing solutions. AI application developers can now build entire applications faster, optimize them for edge deployment constraints, and run them efficiently using automated data-streaming methods.
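As a rough illustration of the “everything is a graph” idea, the following hypothetical Python sketch (again, not the Blaize Picasso platform or its API) represents an image-signal-processing step and neural-network stand-ins as nodes of one directed graph and executes them in dependency order.

# Toy end-to-end graph mixing a non-neural-network stage (an ISP stand-in)
# with neural-network stand-ins; all node names and functions are
# hypothetical placeholders, not Blaize APIs.

from collections import deque

# Each entry: node name -> (function, list of upstream node names).
graph = {
    "capture":  (lambda _:      [1.0, 2.0, 3.0, 4.0], []),
    "isp":      (lambda inputs: [2 * x for x in inputs[0]], ["capture"]),
    "detector": (lambda inputs: sum(inputs[0]), ["isp"]),
    "tracker":  (lambda inputs: {"score": inputs[0]}, ["detector"]),
}

def run(graph):
    """Execute nodes in dependency order, passing outputs along graph edges."""
    indegree = {name: len(deps) for name, (_, deps) in graph.items()}
    ready = deque(name for name, d in indegree.items() if d == 0)
    results = {}
    while ready:
        name = ready.popleft()
        fn, deps = graph[name]
        results[name] = fn([results[d] for d in deps])
        for other, (_, other_deps) in graph.items():
            if name in other_deps:
                indegree[other] -= 1
                if indegree[other] == 0:
                    ready.append(other)
    return results

print(run(graph)["tracker"])  # {'score': 20.0}

Because the whole application is expressed as one graph, a scheduler can reason about ISP and neural-network stages together rather than treating them as separate pipelines, which is the composability point the paragraph above makes.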

About Blaize

Blaize leads new-generation computing, unleashing the potential of AI to enable leaps in the value technology delivers to improve the way we all work and live. Blaize offers transformative solutions that optimize AI wherever data is collected and processed, from the edge to the core, with a focus on the automotive, smart vision and enterprise computing markets. Blaize has secured US$87M in funding from strategic and venture investors Denso, Daimler, SPARX Group, Magna, Samsung Catalyst Fund, Temasek, GGV Capital, and SGInnovate. With headquarters in El Dorado Hills (CA), Blaize has teams in Campbell (CA) and Cary (NC), and subsidiaries in Hyderabad (India), Leeds and Kings Langley (UK), with 325+ employees worldwide.
