How Do We Tackle Chip Security?

Accellera Convenes a Panel and Some Working Groups

Security was huge at this summer’s DAC. By that, I don’t mean that you had to get frisked to get in; I mean that it was a hot topic of discussion. It came together at a luncheon hosted by the Accellera standards body, with further depth in a separate discussion with Accellera. The highest-level take-away would be… that we – and you – have a lot of work to do. The good news is that concrete steps may be underway.

The lunch panel was supposed to be about IP security, which I expected to mean, “How to protect your IP from reverse engineering or outright theft.” As it turns out, it wasn’t that (or only that); it was pretty much about how we do designs that contribute to a secure system. And we pretty much need to start at the start.

The panel was moderated by Cadence, with panelists from Analog Devices (ADI), DARPA, Intel, and Tortuga Logic. Two of those panelists (ADI, Intel) obviously represent the silicon design constituency; one (DARPA) represents a rather prominent (if secretive) silicon customer. The remaining panelist and the moderator represent EDA tools – one generically (Cadence), the other focusing specifically on security (Tortuga).

Usually, when technologists get together, it’s to provide answers to questions. But what was clear from the security discussions was that we’re still trying to figure out the questions that need answering, as specifically noted by ADI. For example, when designing a true random-number generator, what questions need to be asked? How do you design one that will be secure?
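
To give a flavor of what "asking the right questions" looks like for that TRNG example: one question is how the chip notices when its entropy source silently gets stuck or dies. Below is a minimal sketch of one standard answer, the repetition count health test from NIST SP 800-90B, written here in Python purely for illustration and assuming you have raw integer samples plus an assessed min-entropy figure for the source.

```python
import math

def repetition_count_test(samples, min_entropy_per_sample, alpha_exponent=20):
    """Repetition count health test, per NIST SP 800-90B.

    A run of identical raw samples longer than the cutoff is so
    improbable (probability below 2**-alpha_exponent) for a source with
    the assessed min-entropy that it almost certainly signals a stuck
    or failing entropy source.
    """
    cutoff = 1 + math.ceil(alpha_exponent / min_entropy_per_sample)
    previous = object()   # sentinel that never equals a real sample
    run_length = 0
    for index, sample in enumerate(samples):
        if sample == previous:
            run_length += 1
            if run_length >= cutoff:
                return False, f"run of {run_length} identical samples at index {index} (cutoff {cutoff})"
        else:
            previous = sample
            run_length = 1
    return True, "no excessive repetitions observed"

# Example: 8-bit raw samples assessed at roughly 2 bits of min-entropy each.
print(repetition_count_test([17, 203, 203, 41, 90, 90, 90, 12], min_entropy_per_sample=2.0))
```

The point isn't the code; it's that "is my randomness still random?" is exactly the kind of question a security-aware design flow has to force you to ask.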

Security ≠ Safety – But There Are Similarities

One of the key points made was that the security discussion today is similar to the functional safety discussions of the past. There’s a focus on process and accountability, for one. Intel noted that, in fact, the two are linked: “If it’s not safe, it’s not secure – and vice versa.” That said, there are systems – like fitness trackers – that don’t necessarily have much in the way of safety concerns, but do have security concerns.

But safety is way farther down the road: emphasis is now on implementation of functional safety, as we’ve discussed in the past. Security is still in its infancy; we’re quite a ways from simply implementing known security solutions and processes.

And there’s one other big difference: safety is a bounded problem. At some point, we will have figured out all the ways in which a system could be unsafe. Security, on the other hand, is unbounded. We will never be done dealing with new attack vectors that we need to address.

So where do we stand with security? By now, we have mostly achieved awareness of security as a need. There may still be some convincing needed on that score, but, for those who have received the security religion, it’s still a tough row to hoe. Frankly, it reminds me of one of the characteristics of MEMS design: each design is unique and custom. 

Starting Over Every Time

Every time a design gets started, you have to step back and analyze the security implications, beginning with first principles. Whatever you learned on the last design, whatever the industry has cobbled together as the start of a collection of best practices, hasn’t been abstracted enough to be of use for the next design. Yeah, you might have gotten some practice thinking about security and doing the analysis, but you still have to go back and completely restart with each design.

It feels to me like this need to restart from scratch every time wastes energy reinventing the same things over and over. Ideally, we’d start to put together abstractions that could help with the overall focus on security. We’re constantly revisiting the old attack vectors and patching them for each design – while, at the same time, having to figure out new solutions for new attacks. Isn’t there some way to abstract away the stuff we already know – solutions to old, well-known attacks – so that we can implement them easily and reserve the bulk of our energies for dealing with the new stuff?

I asked that question, and the answer was, “Maybe someday; not yet.” But ADI confirmed that there is interest in automation – of which there isn’t nearly enough at this point.

Many Levels of Vulnerability

Part of the problem is that security is a really complex, multi-faceted issue. Systems are affected on many levels: pure hardware, IP, chip, system, the software/hardware boundary, and software.

Let’s start with hardware. DARPA had quite a bit to say here, starting with four different aspects of hardware security:

  • Side-channel attacks, where the attacker looks at things like EMI radiation or power-supply perturbations to infer something about internal secrets (more on that in the sketch after this list);
  • Reverse engineering;
  • Malicious hardware, aka Trojans – circuits that aren’t there to do you any favors; and
  • Supply-chain engineering, looking for ways that foundries and packaging houses (and others) might be able to tamper with a design.
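
To make the first of those a bit more concrete (this is my illustration, not something the panel presented): a side channel is any externally observable behavior that depends on a secret, and it doesn't have to involve lab equipment. Even the time a comparison takes can leak. The Python sketch below shows a naive secret comparison that bails out at the first mismatched byte, which lets an attacker who can time responses recover the secret a byte at a time, next to the constant-time check the standard library already provides.

```python
import hmac

def naive_check(secret: bytes, attempt: bytes) -> bool:
    # Timing leak: returns as soon as the first byte differs, so the
    # response time reveals how many leading bytes of the guess match.
    if len(secret) != len(attempt):
        return False
    for s, a in zip(secret, attempt):
        if s != a:
            return False
    return True

def constant_time_check(secret: bytes, attempt: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch falls, removing the secret-dependent timing.
    return hmac.compare_digest(secret, attempt)

print(naive_check(b"s3cret", b"s3cres"), constant_time_check(b"s3cret", b"s3cret"))
```

The same principle carries over to power and EMI on silicon: the goal is to keep secret-dependent behavior from modulating anything an outsider can measure.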

That whole Trojan thing gets to a general principle when looking for security holes: does the circuit do everything it should do and nothing it shouldn’t? That is, are all the circuits necessary and sufficient for a chip’s intended purpose?

DARPA noted that it can be hard to find Trojans. The kinds of places to look, spurred by past gotchas, include logic that’s rarely activated, logic that’s really hard to activate, and – <raises eyebrows> – extra undocumented instructions in an ISA.

Those first ones don’t constitute hard evidence of malware; they just create a suspicion. At least it’s something you can look for (or that a tool could be built to look for); if you find it, then you have to figure out whether or not it’s legit.
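
As a rough sketch of what such a tool might do (my own illustration; the CSV format here is hypothetical, and real coverage databases are vendor-specific), you could mine toggle-coverage data from the full regression suite, flag signals that rarely or never activate, and hand that short list to a human for judgment:

```python
import csv

def flag_suspicious_signals(coverage_csv, toggle_threshold=2):
    """Flag rarely activated logic as candidates for manual review.

    Expects a CSV with columns: signal, toggle_count (a hypothetical
    export format). Rarely toggled logic is not evidence of a Trojan
    by itself, only a place to start looking.
    """
    suspects = []
    with open(coverage_csv, newline="") as f:
        for row in csv.DictReader(f):
            toggles = int(row["toggle_count"])
            if toggles <= toggle_threshold:
                suspects.append((row["signal"], toggles))
    return sorted(suspects, key=lambda item: item[1])

for signal, count in flag_suspicious_signals("regression_toggle.csv"):
    print(f"{signal}: toggled {count} times across the regression suite")
```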

And what about that ISA thing? That requires someone on the design team who has… mixed loyalties. Call him or her a mole, if you like. The idea is that one or more secret instructions exist that no one officially knows about. The attacker, knowing these instructions, can then use them to do their nefarious bidding.
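
The defensive counterpart is to sweep the opcode space and ask whether the implementation accepts anything the manual doesn't admit to. Here's a toy sketch of that idea; decode() stands in for a golden decoder model and DOCUMENTED_OPCODES for the published instruction list, both invented for this example.

```python
# Toy sweep: compare what the implementation (or its model) accepts
# against what the ISA manual documents. Both decode() and
# DOCUMENTED_OPCODES are invented stand-ins for illustration.

DOCUMENTED_OPCODES = {0x00, 0x01, 0x13, 0x33, 0x37, 0x6F}

def decode(opcode: int) -> bool:
    """Stand-in for a reference model: True if the implementation
    treats the opcode as a valid instruction."""
    return opcode in DOCUMENTED_OPCODES | {0xEB}   # 0xEB: a planted "extra"

undocumented = [op for op in range(256) if decode(op) and op not in DOCUMENTED_OPCODES]
print([hex(op) for op in undocumented])   # ['0xeb']
```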

IP can suffer from all of these issues, although there’s a qualitative difference, in that a company designing a chip is relying on some other company for critical IP. Both sides will have concerns: the IP vendor wants their IP protected; the IP user wants to be sure that the IP doesn’t do something dodgy to the rest of the design or system. But this also becomes a supply-chain concern: could someone interfere with IP that’s being delivered? Perhaps it leaves the IP vendor looking pure and innocent, but it gets corrupted before delivery.

IP is also implicated in the hardware/software boundary concern. The issue here is that software may be used to configure IP and other elements of the chip. Some IP options are activated during design, and, frankly, in some cases, there are so many different options that it’s hard to anticipate all combinations and permutations. That leaves a subset of options to be configured by software, but that configurability becomes a new attack “surface” (to use the industry lingo for an exposed point of attack): a way for software to launch an attack on the hardware.
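
One generic way to keep that surface in check (again, my sketch, not a method anyone on the panel described) is to treat the software-visible configuration as a contract: enumerate exactly which fields software may touch and which values are legal, and reject everything else before it reaches the hardware. The field names and ranges below are invented for the example.

```python
# Generic illustration: whitelist which configuration fields software
# may set at run time, and which values are legal.

ALLOWED_FIELDS = {
    "clock_div": range(1, 17),            # divider 1..16
    "dma_enable": (0, 1),
    "watchdog_timeout_ms": range(10, 10001),
}

def apply_config(writes: dict) -> dict:
    """Reject anything outside the declared software-configurable surface."""
    accepted = {}
    for field, value in writes.items():
        if field not in ALLOWED_FIELDS:
            raise ValueError(f"field '{field}' is not software-configurable")
        if value not in ALLOWED_FIELDS[field]:
            raise ValueError(f"value {value!r} out of range for '{field}'")
        accepted[field] = value
    return accepted

print(apply_config({"clock_div": 8, "dma_enable": 1}))
```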

Accellera Taking Action

A follow-on discussion with Accellera took me deeper into what their new working group, the IP Security Assurance (IPSA) group, is doing. They’re considering a process and certification to give designers information about, and confidence in, both the tools and their circuits.

The IPSA has two subgroups: the EDA subgroup and the KSC subgroup. The EDA subgroup is focused on reporting tools. This gives designers information about the security status of their design, and it creates documentation when all is complete. The KSC subgroup is considering putting together a database of Known Security Concerns.
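
Purely as speculation on my part about what such a database entry might need to capture (the working group hasn't published a schema), picture something in the spirit of a MITRE CWE entry, but aimed at IP integration:

```python
from dataclasses import dataclass, field

# Speculative sketch of a Known Security Concern record; the IPSA
# working group has not defined an actual schema. Field names are mine.

@dataclass
class KnownSecurityConcern:
    ksc_id: str                      # e.g., "KSC-0001" (invented numbering)
    title: str
    description: str
    affected_ip_types: list = field(default_factory=list)
    suggested_mitigations: list = field(default_factory=list)

entry = KnownSecurityConcern(
    ksc_id="KSC-0001",
    title="Debug port left enabled in production configuration",
    description="Test/debug access remains reachable after provisioning, "
                "exposing internal state to an external attacker.",
    affected_ip_types=["processor", "bus fabric"],
    suggested_mitigations=["fuse-controlled lockout", "authenticated debug unlock"],
)
print(entry.title)
```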

Note that the EDA group is specifically not tasked with ways of identifying and fixing security holes. They see that as work for a commercial tool vendor. That’s why their focus is on reporting and documentation. 

The KSC database idea still has to be fleshed out with answers to lingering questions. Who is responsible for the database? Would it be in the public domain? Will companies contribute to it, or will they withhold their discoveries to maintain a competitive advantage?

As you can see, we’re still in the early stages here. They hope to have a proof-of-concept ready by the end of this year. Contrast that with the functional safety realm, where ISO 26262 is now the law of the land, about 5” thick, undergoing revisions, and well embraced by the community. (OK, 26262 is specifically about automobiles, but there are sister standards for other related domains.) So security has a long way to go before we reach safety’s level of maturity.

I’ll be updating this topic as news warrants. The bottom line: at this point, there’s no shortcut to doing everything from scratch. Every time.

 

More info:

Accellera

Sourcing credit:

  • Lei Poo, Analog Devices
  • Serge Leef, DARPA
  • Brent Sherman, Intel and IP Security Assurance (IPSA) Working Group Chair
  • Andrew Dauman, Tortuga Logic
  • Adam Sherer, Cadence and IPSA Working Group Secretary
  • Lu Dai, Board Chairman, Accellera
