
It is Difficult to Make Predictions, Especially About the Future*

Two Weeks of Forecasting

In the last few weeks, I have been exposed to a firehose of information about the future of electronics, first at the Arm Research Summit and then at Future Horizons’ Mid-Year Industry Forecast Briefing.

The first drink from the firehose was from Future Horizons. And the good news is – the news is good. Malcolm Penn, the Chair and CEO of Future Horizons, always starts his forecasting with the global economy because, “For the last ten years the economy has been the driving factor for the [semiconductor] industry. The industry needs a strong economy.” And where, in January, the economic picture was obscure, with huge areas of uncertainty, it has settled down considerably. The US, under President Trump, and the UK, in the shadow of Brexit, are both still disturbing factors, but the Eurozone is stable, pending the German and Italian elections, and Asia, apart from Japan, is looking good. In fact, the International Monetary Fund (IMF) has, for the first time in ten years, revised its forecast of GDP growth upwards. It is only from 3.4% to 3.5%, but it could be the first sign of a longer-term recovery.

How does this translate to the semiconductor industry? Already we are seeing IC unit shipments looking strong. For 30 years, the underlying growth, quarter by quarter, has been 10%, with peaks and troughs. Today it is running above this. How does this translate into money? In January, Penn forecast growth of 11%, with a range of 8.0% to 16% ($365b to $395b). After two quarters of higher growth than predicted, he has radically revised his forecast to 20% growth, with a range of 18.4% to 23%. Even his bearish figure would push the industry over $400b for the first time, and his bullish figure would be $417b. He is more cautious for 2018, with a preliminary outlook of 15.6% growth, but that is still $470b.
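For readers who like to check the arithmetic, here is a quick back-of-envelope sketch in Python. The 2016 baseline of roughly $339b is my own inference from the January range (it is implied by $365b at +8% and $395b at +16%), not a figure quoted by Future Horizons:

```python
# Back-of-envelope check of the forecast numbers (my own arithmetic,
# not Future Horizons'). The 2016 baseline is inferred from the
# January range: $365b at +8% and $395b at +16%.
baseline_2016 = 0.5 * (365 / 1.08 + 395 / 1.16)   # roughly $339b

# Revised 2017 scenarios: bear, central, and bull growth rates
for label, growth in [("bear", 0.184), ("central", 0.20), ("bull", 0.23)]:
    print(f"2017 {label}: ${baseline_2016 * (1 + growth):.0f}b")

# Preliminary 2018 outlook of +15.6%, applied to the central 2017 figure
central_2017 = baseline_2016 * 1.20
print(f"2018 outlook: ${central_2017 * 1.156:.0f}b")
```

The results land within a billion or so of the figures Penn quotes, so the published numbers hang together, allowing for rounding.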

But this growth will come with problems. The vast bulk of manufacturing is through foundries, dominated by TSMC. TSMC has stopped building to meet forecasts and builds new fabs only for specific requirements. It also operates all its fabs at pretty close to full capacity. The other foundries are the same. This means that there is inevitably going to be a capacity crunch. A new building takes around a year to construct and then a further year to equip and run up. (The monster new Samsung fab in Korea was announced in October 2014, construction began in April 2015, and the first production run began in June 2017.) The equipment manufacturers are building only to order, which might drag the construction period out further, and suppliers, such as wafer manufacturers, are also capacity constrained. In fact, Samsung has announced that it is looking for wafer companies that are prepared to enter into long-term contracts.

Penn argues that there are few managers who have ever experienced a constrained manufacturing period, and even fewer who have lived through the period of excess capacity that will inevitably follow. Even big customers of the foundries are going to have to argue for allocations, while smaller ones are going to have a hard time finding any manufacturing capacity at all. Chip users are going to see prices rise as they compete for the limited supply, at the same time as – if the economy continues to grow – demand for their own products rises.

Where Future Horizons looks at the future from the point of view of the business, the Arm Research Summit is academic-research based and gave three days of intense presentations, in two streams, on topics such as machine learning, quantum computing, high-performance computing, and new architectures. There was an audience of 368 people, representing 130 different institutions, including Arm itself. My brain ached for about a week afterwards. Almost any of the papers presented from academia would make the basis of a full article in itself, and I will be returning to some of them in due course. However, if you have time to spare, visit the streamed presentations starting at https://www.youtube.com/watch?v=Qi34rm0gSZA (there is an option to switch between streams). In the meantime, I will look at Arm’s own research and its research agenda.

Arm needs to be aware of future developments if it is to create the processor architectures that are needed to power them. To do this, while its product teams carry out product-centred research and development, it also has a relatively small research group (around 100 people in a company of nearly 5,000) and builds strong links with university research teams around the world.

The research group is structured as a set of teams, some based in Cambridge, others with members in Boston, Austin, and San Jose. Their role is to feed existing product groups and the ecosystem, and to create new product groups, and they gain leverage from their relatively small numbers by collaborating with universities and research centres.

The overall strategic agenda has four focus areas:

  • Achieve more with constrained budgets (including transistor budgets)
  • Scale connected compute intelligently (processing at the most appropriate place, not just in the cloud)
  • Proliferate access to high performance and efficient ML (Machine Learning) compute (again, not just in the cloud)
  • Systematically remove excuses for untrustworthiness (or increase security in systems)

Each of the groups gave a five-minute elevator pitch, and I will summarise their main areas of activity:

  • Architecture is focused on looking at longer-term developments for Acceleration, Embedded Efficiency, Next Generation Systems, and Scalability.
  • Software and Large Scale Systems is working mainly in the fields of High Performance Computing and High Performance Data Analytics.
  • Emerging Applications and Implementations looks at newer developments – that is, work that hasn’t yet found a home within a dedicated research team. It is tracking over 100 emerging and potentially disruptive technologies to identify areas for research, such as Robotic Compute and Bio-Tech (including Bio-Electronic Medicine and large-scale data analysis), which, in turn, lead to Data Science and Mobile Systems.
  • Systems and Memory is looking at how memory can cope with unstructured and sparse data. It is also looking at how memory can provide processors with access to data at speeds that match the improved processing speed gained from accelerators and specialist processors. Specific study areas include tracking and driving memory road maps, Caches and Interconnect, Compute-Near Memory, Non-Volatile Memories, and Future Memory Architectures.
  • Machine Learning Applications (which was spun out from the Emerging Applications team in January) has built an ML Model Zoo – a large collection of different ML models that are exercised to understand the demands that they will make on processing and other hardware (a toy sketch of this kind of characterisation follows this list). The team is also studying Highly Constrained Deployments, Accelerators, Emerging Algorithms, and the Data Centre.
  • Security is another new team. It is looking at different issues for secure computing and communication, including Separation and Isolation Mechanisms, Trust, Identity and Provenance, Side Channels, Specifications and Correctness, and Crypto Performance and Emerging Ciphers.
  • Devices, Circuits and Systems provides Arm with the direct experience of designing and building chips, which helps to keep the IP products grounded in the realities of implementation. They are working on IoT Sensor Nodes, Chip-Scale Energy Harvesting, Driving Low Power Standards, Printed Electronics (Is this the way to get to the 1-cent microcontroller?), Post 5 nm, and Emerging Process Technology.

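To give a flavour of what “exercising” a model to understand its hardware demands might look like, here is a minimal, purely illustrative Python sketch. The layer shapes and the simple counting are my own invention, not Arm’s Model Zoo tooling; it just tallies the multiply-accumulate operations and weight storage that a small convolutional network would ask of a processor:

```python
# Illustrative only: a toy version of the kind of model characterisation
# an "ML Model Zoo" enables. The network below is invented for the example.
from dataclasses import dataclass

@dataclass
class ConvLayer:
    in_ch: int    # input channels
    out_ch: int   # output channels
    kernel: int   # square kernel size
    out_h: int    # output feature-map height
    out_w: int    # output feature-map width

    def macs(self) -> int:
        """Multiply-accumulates needed for one inference pass of this layer."""
        return self.in_ch * self.out_ch * self.kernel ** 2 * self.out_h * self.out_w

    def weight_bytes(self, bytes_per_weight: int = 1) -> int:
        """Weight storage, assuming 8-bit quantised weights by default."""
        return self.in_ch * self.out_ch * self.kernel ** 2 * bytes_per_weight

# A made-up three-layer network, standing in for one entry in a model zoo
model = [
    ConvLayer(3, 16, 3, 112, 112),
    ConvLayer(16, 32, 3, 56, 56),
    ConvLayer(32, 64, 3, 28, 28),
]

total_macs = sum(layer.macs() for layer in model)
total_bytes = sum(layer.weight_bytes() for layer in model)
print(f"{total_macs / 1e6:.1f} M MACs per inference, {total_bytes / 1024:.1f} KiB of weights")
```

Scaled up across a large collection of models, numbers of this kind are what let an architecture team see where compute, bandwidth, and memory pressure will actually land.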
 

One thing that came through clearly is that the SoftBank acquisition of 2016 has changed Arm. This is given graphic focus in a new logo, but it is more fundamentally exhibited by changed investment and focus areas. The chairman and CEO of SoftBank, Masayoshi Son, has spoken frequently about how the IoT will have a trillion devices, and, while much of the research described already can be seen as providing tools for the IoT, Arm is also looking at some of the broader issues, lifting its gaze above its traditional focus on specific devices to the issues of broader distributed systems. Some of these are: Distributed Systems Architecture; Optimisation against Power, Latency, and Bandwidth restrictions; Scalable Standardized Solutions; End-to-End Security; and Data-Oriented Services. For this large-scale IoT to be successful, things must work separately, work together, work automatically, and work resiliently.

I said earlier that this is a relatively small group, but it is recruiting aggressively and hopes to have grown by around 40% within the next year, while also aggressively courting co-operation with university, and other, research teams.

There are several threads to this. Firstly, Arm wants to work directly with specific research teams in areas like semiconductor physics, including jointly funding research. Secondly, it wants to make it much easier for researchers to use Arm IP for their work, and, finally, it is providing teaching materials for specific courses. None of this is new, but Arm is putting more effort into these areas.

They have created research enablement kits – software packages, models, tools, and hardware prototypes – making it easier for researchers to use Arm and get started on their research without spending time on integration.

This has been reinforced by the recently upgraded DesignStart programme. With DesignStart, it is possible to use Cortex-M0 IP without the usual licensing formalities – just a click-through. There are limitations on the configurability of the core and the number of chips you can build, but there are upgrade paths. DesignStart has now been extended to the Cortex-M3. Within the Arm ecosystem, there are low-cost or open-source tools and IP available, and there were strong hints that there are plans to provide access to other Arm tools.

DesignStart is open to anyone, not just to universities, and there are seminars and workshops in association with Mentor Graphics (the group that used to be Tanner EDA) to show how it is possible to move into ASICs without the horrendous NRE costs that are normally needed.  

Finally, there is the free course material. Arm is working hard to extend the premium video courseware and textbooks that have already been used in over 3000 courses world-wide.

Arm is not attempting to bet on a single prediction, but it is, instead, investing in a wide range of activities that should allow it to continue to grow, however the future develops.

*The quote, “It is difficult to make predictions, especially about the future,” has been ascribed to a whole range of people, from Mark Twain to Yogi Berra. (For non-American readers, Yogi Berra was a baseball player with a reputation for pithy statements.) Extensive research seems to lead to the conclusion that it may be a Danish proverb that was brought into more general use by the Danish physicist Niels Bohr.

