
Altera’s Long Game

Intel… Then What?

We wrote a lot in these pages (even long before it happened) about the market and technology trends and pressures that led to Intel’s bid to acquire Altera. We dove into the data center and dug up the game-changing combination of conventional processors with FPGA fabric that can form a heterogeneous computing engine, which could conquer the current plague of power problems that limit the size and capacity of server farms around the world. We argued that Intel needed Altera – as a strategic defense to protect its dominance of the data center in a future world where FPGA-accelerated processing was the norm rather than the exception. 

Intel came, offered, negotiated, and eventually won. Now, pending approval by various regulatory bodies around the world, Altera will most likely become part of the world’s largest semiconductor company. But what then? Will the world change? Will Altera be catapulted to new heights, disappear into some dark abyss of forgotten acquirees, or transform into some parallel-universe evil-twin version of the company we have known for the past several decades?

We chatted with Danny Biran, Altera Senior Vice President, Corporate Strategy and Marketing, to try to gain more insight into the direction the company might take following the Intel acquisition. Biran is upbeat about the change. His characterization of the post-acquisition Altera sounds like a better version of today’s company rather than a different animal. Biran paints a picture of an Altera that continues to develop and market the kinds of technologies for which it has always been known, while also exploring and expanding into new markets and opportunities.

It’s interesting to superimpose a discontinuous step function of change – such as a major acquisition – on top of a world where constant exponential change has always been the norm. If a tree falls during a Metallica concert – will it make enough sound to get anyone’s attention? Biran emphasizes that Altera will still be an FPGA company, and that the traditional FPGA business is still just as important as it was before Intel came along.

Why is that an important point to make? If Intel came in and transformed Altera into a group that focused only on the data center problem, it would wreak havoc on the customers in other markets who have come to depend on Altera for critical technology. It would alienate important companies around the world – most of whom Intel probably wants to retain as long-term customers. And, it would hand over the FPGA market – which is one of the fastest-growing, highest-margin segments of the semiconductor industry – to Altera’s archrival Xilinx.

Diving into those details a little further – Altera plans to continue to develop and market FPGAs for the same markets it has always served – wireline and wireless communications and networking, industrial, automotive, medical, storage, and so forth. This FPGA business is a dynamic and profitable operation, and Intel would be kooky to turn away from it. Message to existing Altera customers? Don’t worry. Intel won’t force Altera to bail on you. If you’re happy with Altera today, you’ll probably be happy with post-Intel Altera.

Hey, but what about those ARM cores in Altera’s SoC FPGAs?

Yep. We’ve heard that one a lot too. As you may know, both Altera and Xilinx have developed new lines of devices that combine sophisticated ARM-based processing subsystems with FPGA fabric, I/O, and DSP resources. The result is a new, awesomely powerful class of devices whose impact on the world has just begun. But the question is: how would Intel feel about making chips with ARM processors on them right there in Intel’s own fabs? Well – welcome to the modern world. You know – the world where Samsung and Apple are archrivals in the smartphone market while Samsung actually makes chips that go into the iPhone.

The answer is simple. Altera plans to continue to produce and develop SoC FPGAs with ARM processors. Biran emphasizes the “develop” part of that position, which is to say that Altera will not merely continue to produce the devices they’ve already announced; they will also continue to engage in the development of new devices and technologies based on that same architecture. That is great news for everyone who has already committed to the existing Altera/ARM architecture, and who probably doesn’t relish the idea of rewriting piles of legacy software for a different processor architecture.

All of this adds up to “business as usual” for Altera’s FPGA and SoC FPGA lines. At least as much of “business as usual” as one would expect in a world where every product is obsoleted every two years.

But – what about the new thing? What about the data center? What about the super-snazzy hyper-fast power-sipping heterogeneous data-center-dominating processing engine that combines Altera FPGA technology with Intel processor technology on one chip or in one package? Yep. That’s gonna happen. Intel/Altera will most likely produce both system-in-package and system-on-chip products that combine Intel architecture processors with Altera FPGA technology – aimed straight at the data center market. 

Intel currently dominates the data center with its conventional server processors. But companies such as Microsoft, Google, and Facebook have had success using FPGAs to accelerate certain common algorithms – resulting in massive power savings and improved performance. Now there is a race to bring FPGAs into more mainstream use in data centers, but that involves building standardized machines that include both conventional processors and FPGA fabric, and then (most importantly) providing a tool flow that allows software developers to take advantage of the acceleration and power-saving capabilities of FPGAs.

Biran confirms that the tool flow is the biggest challenge here. Altera pioneered the use of OpenCL to target software to FPGAs, and that technology continues to evolve. The important thing about the OpenCL approach compared with, say, high-level synthesis (HLS) is that Altera’s OpenCL is designed to be used by software developers, whereas HLS is primarily useful as a productivity booster for hardware developers. OpenCL allows software developers to target FPGAs in the same way that they would write code for graphics processing units (GPUs).
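
To make that concrete, here is a minimal sketch of what the software developer’s side of that flow looks like. The kernel below is ordinary OpenCL C – the same kind of code one would write for a GPU – and the vector-add workload and names are illustrative, not taken from any Altera example. In a flow like Altera’s, a kernel like this is compiled offline into an FPGA image rather than just-in-time as on most GPUs, while the host program drives it through the standard OpenCL runtime API, so the developer never touches RTL, floor planning, or timing closure.

    // vector_add.cl – ordinary OpenCL C; hypothetical example.
    // In an FPGA flow like Altera's, this file is compiled ahead of
    // time into an FPGA configuration instead of a GPU binary.
    __kernel void vector_add(__global const float *a,
                             __global const float *b,
                             __global float *result)
    {
        // Each work-item handles one element, exactly as it would on a GPU.
        int i = get_global_id(0);
        result[i] = a[i] + b[i];
    }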

What OpenCL does not do is open up the vast world of legacy software for FPGA acceleration. While it makes the work easier for programmers who want to take advantage of acceleration and power reduction, it does not offer the possibility of automatic optimization of arbitrary code for FPGAs. Achieving that goal remains a distant reach for FPGA tool technology.

Overall, what we heard from Altera is what most people would hope – that the company plans to continue its pursuit of the FPGA market, but now with the advantage of the world’s most sophisticated semiconductor fabs at its disposal. At the same time, we should see some truly new and innovative products aimed at the data center, with the opportunity to accelerate progress by combining the experience and skills of the two companies.

12 thoughts on “Altera’s Long Game”

  1. Thanks for the update Kevin!!

    Now if we can just convince Altera/Intel to push their place-and-route software technologies into open source, we can start a long-delayed transition toward open-source HDL tools, which will finally make highly portable C/SystemC/OpenMP-to-logic a reality.

    Hopefully that will come with active labor funding/sharing by both the Altera FPGA teams and the Intel embedded FPGA/CPU teams, which would drag Xilinx and all the smaller FPGA vendors into a common industry open-source tool chain.

    Both companies already require and support open-source tool chains – C compilers and operating systems for ARM/x86 – as a critical, market-necessary resource.

  2. “Altera plans to continue to produce and develop SoC FPGAs with ARM processors.” but “Intel/Altera will most likely produce both system-in-package and system-on-chip products that combine Intel architecture processors with Altera FPGA technology.”

    Do they plan on hiring more engineers so that the same resources will be available for the ARM/SoC and the Intel/SoC? Will the Intel/SoC get the better/newer features or more support?

    This is the problem with these kinds of acquisitions. Fortunately, it looks like Xilinx is maintaining its focus on the ARM/SoC market.

  3. @TotallyLost – with all due respect, when has Intel ever open-sourced anything substantial? Altera is a fantastic company with many talented engineers, but any engineer who has been through an acquisition will tell you that “things change.” Companies unfortunately don’t get bought for charity – especially charity directed at benefiting their foremost up-and-coming competitor in the 64-bit server space: ARM.

    Full disclosure, though: we are using a Xilinx part in our $55 ARM+FPGA SoC board, http://snickerdoodle.io, and Xilinx has been very supportive of this board, which is targeted to benefit the maker and open-source community.

  4. @Weatherbee — I’ve used both Altera and Xilinx for more than a decade … found Xilinx exceptionally hostile to supporting C HDL to logic (aka HLS) a decade ago …. VERY hostile.

    Today they support their own version, and praise the same HLS benefits/features we tried to introduce a decade ago.

    The big problem is that they still refuse to open up their tool chain to support fully automated compile-load-and-go for reconfigurable computing …. a necessary market for supporting real software development on the logic side of an SoC.

    Intel wants to support data center software acceleration in FPGAs. Believe me, that is highly marginal when it requires a highly specialized EE to do the floor plan and timing closure for EVERY design iteration from concept prototyping to released production – so expensive in EE labor that the cost benefits of FPGA acceleration rapidly diminish.

    90% of the performance can be had with better tools that choose automated, straightforward compile-load-and-go optimizations. These tools are not what EEs want or require today; they are what reconfigurable computing software engineers do need.

    This requirement was completely dismissed by Xilinx … and still seems to be today, as the tools are not anywhere in sight.

    Historical reference, Jan/2006: http://www.fpgarelated.com/showthread/comp.arch.fpga/36163-1.php

    > Much of what the FpgaC project needs to support compile, load, and go for Xilinx product is trapped in this project, with a clear warning from Xilinx to stay clear. The likely outcome is to focus on other vendors which are more willing to allow open source investment in RC technologies which support their products.

    > Just as I said in the other thread: Corporations like control. Xilinx, having the majority share of the market, especially seems to like control. Perhaps one of the hungrier vendors would be willing to work with the open source community. If so, then in the long run it’s Xilinx’s loss.

    > Phil
