Sometimes even the circuit designer doesn’t know how the chip works. And that can be a good thing.
If you’re designing a chip or a system that includes security features, anti-tampering mechanisms, DRM protection, or defenses against differential power analysis (DPA) attacks, it’s probably better if you don’t know how it all works. That kind of stuff is mysterious. Secret. Black magic. And there are practitioners of these dark arts who are far more skilled than mortals like you or me. For they dwell in the deep places, apart from the rest, shunning the daylight and the company of men. And we call them Elliptic Technologies.
ET is a 50-person company based in Ottawa, and they… they do things there. Things that other engineers can’t, or won’t, do for themselves. Specifically, Elliptic develops industrial-strength security features on behalf of embedded developers, chipmakers, and product companies. Need to make your box hacker-proof? Elliptic’s got your number.
Spend any time working with security products from Intel, from Rambus (formerly Cryptography Research), from Elliptic, or from anybody else, and you’ll hear a lot about the “root of trust.” Security is like a chain, with each link presuming that its neighbor is trustworthy. The application software believes the operating system is trustworthy; the operating system believes that the firmware is reliable; and the firmware believes the boot code in ROM is safe. In most systems, that makes the boot ROM the very first link in the chain of trust, hence, the “root of trust.” If the boot ROM has been spoofed, nothing downstream from that can really be trusted.
Just adding a checksum to your boot code isn’t good enough. Checksums are good for catching the occasional bit error, broken pin, or artless hack. But they’re like parity bits in DRAM: cheap and easy, but far from exhaustive and not reliable in all cases. A determined hacker could rewrite your boot code while keeping the checksum the same. Or just hack the checksum-checking code at the same time.
(There’s a pigeonhole problem at work here: a checksum is far shorter than the boot code it summarizes, so many different images necessarily share the same checksum. If the checksum were truly unique to your boot code, it would mean that no other boot code in the world could ever have the same checksum and, conversely, that this boot code could have no other checksum. And if that were true, then the checksum itself would be enough to uniquely identify your boot ROM from all the other boot ROMs in the world. If so, then you’ve discovered an amazing new kind of data compression! Also, there’d be no need to have the actual boot code; just boot from the checksum.)
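To make the forgery point concrete, here’s a minimal sketch (my illustration, not anything Elliptic ships) of how an attacker can swap in different code and then append a single fix-up word so that a simple additive checksum comes out unchanged. The 32-bit sum-of-words checksum and the toy images are hypothetical stand-ins for whatever a real boot ROM might use.

    /* Sketch: why a simple additive checksum is forgeable.
     * Hypothetical 32-bit sum-of-words checksum; real boot ROMs vary. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Sum the image as 32-bit words (assumes length is a multiple of 4). */
    static uint32_t checksum32(const uint8_t *buf, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i += 4) {
            uint32_t w;
            memcpy(&w, buf + i, 4);
            sum += w;                       /* wraps modulo 2^32 */
        }
        return sum;
    }

    int main(void)
    {
        uint8_t original[16] = "trusted bootrom";   /* stand-in good image */
        uint8_t hacked[20]   = "evil bootrom!!!";   /* attacker's code + 4 spare bytes */

        uint32_t want = checksum32(original, sizeof original);
        uint32_t have = checksum32(hacked, 16);

        /* Append one fix-up word so the sums match again. */
        uint32_t fixup = want - have;               /* modulo-2^32 arithmetic */
        memcpy(hacked + 16, &fixup, 4);

        printf("original: %08x  hacked+fixup: %08x\n",
               (unsigned)want, (unsigned)checksum32(hacked, sizeof hacked));
        return 0;
    }

The two printed values come out identical, which is exactly the problem: the checker is satisfied, and the hacked code runs anyway.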
Since everyone likes to use flash memory for boot code these days, it’s fairly straightforward for the bad guys to reprogram it. Masked ROMs and OTPROMs would present a bigger challenge, but they’re less commonly used. The ability to upgrade your firmware in the field also opens a great big door for the bad guys. Hence the need for a reliable root of trust.
How do you do it? Checksums are out. Placing a hologram sticker on the boot ROM won’t stop anybody, although it does look pretty. What you need is a hard-wired root of trust, and that’s where Elliptic comes in.
Elliptic provides the hardware and the software to build a trusted root, called tRoot, into your SoC. The hardware consists of synthesizable IP that you fold into your chip design. The software is a binary image that you drop into your boot ROM. The two work together at boot time (and beyond) to ensure that you’ve booted the trusted code you intended to boot, and that neither it nor anything else in the system has been tampered with.
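For the curious, here’s a rough sketch of what a hardware-anchored verified boot looks like in principle. To be clear, this is my own illustration under assumed names (rom_public_key, signature_ok, secure_boot), not Elliptic’s actual design, which is undisclosed; the cryptographic check itself is left as a placeholder that a real system would implement with an RSA or ECDSA verify, ideally running on the kind of accelerator hardware described below.

    /* Sketch of a first-stage verified boot, under assumed names.
     * Only the vendor's private signing key must stay secret; the public
     * key, the code, and the mechanism itself can all be published. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Public key baked into the chip at manufacture (stand-in contents).
     * It is not a secret; it only needs to be unmodifiable. */
    static const uint8_t rom_public_key[64] = { 0 };

    /* Placeholder for a real RSA/ECDSA signature check over a hash of the
     * image; in a real chip this would use a crypto accelerator or PKA.
     * Failing closed keeps the sketch safe by default. */
    static bool signature_ok(const uint8_t *image, size_t len,
                             const uint8_t *sig, const uint8_t *pubkey)
    {
        (void)image; (void)len; (void)sig; (void)pubkey;
        return false;
    }

    typedef void (*entry_fn)(void);

    /* First link in the chain of trust: refuse to run firmware that
     * doesn't verify against the key anchored in hardware. */
    void secure_boot(const uint8_t *fw_image, size_t fw_len,
                     const uint8_t *fw_signature)
    {
        if (!signature_ok(fw_image, fw_len, fw_signature, rom_public_key)) {
            for (;;) { /* halt, or drop into a recovery path */ }
        }
        ((entry_fn)(uintptr_t)fw_image)();  /* hand control to verified code */
    }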
Elliptic can’t guarantee that your SoC or your system will be bug-free, of course. It’s not that kind of security. But it can make you feel good about your chip and the code that it’s running.
The insides of the tRoot hardware block are necessarily a bit secret, but it does include its own 32-bit CPU, a true random-number generator (TRNG), a secure memory controller, and a key manager. You can optionally include crypto accelerator cores and/or a public-key accelerator (PKA), too. The identity of the 32-bit processor inside tRoot is also a secret, although Elliptic says they’d be happy to replace it with an ARM, ARC, MIPS, or other processor if you like. Given that tRoot is intended to be a black box, however, I don’t see much point in using a commercial CPU core, even if you already have the appropriate license. The more obscure tRoot’s internals, the better.
Apart from handling cold-boot procedures, tRoot also lends a hand in other security-related chores. Need to deliver DRM-protected material? Elliptic’s hardware and software will manage the keys. Worried about cloning? Your tRoot is on the job. Concerned that hackers armed with sensitive instruments might mount a side-channel attack? That’s the sort of thing tRoot was born to fight.
Ultimately, tRoot isn’t all that exciting from a system- or circuit-designer’s point of view. It’s more of a necessary evil, a black box that enables other black boxes to do their jobs reliably. And it’s a perfect example of why we buy hardware IP from outside suppliers: because they have expertise we don’t have, and it’s important to get this right. Sloppy, home-grown security is likely to be worse than no security at all. If your goal is to really and truly thwart attacks from all angles, it makes sense to go to the experts.
Or ‘Better Living Through Digital Signatures’
Jim, Jim, Jim – You didn’t actually swallow this hype, did you?
Security through obscurity, as it is derisively termed by actual experts, is no security at all. If you think about Elliptic’s argument, you’ll realize that when they say the secrecy of their mechanisms is essential, what they really mean is that if the secrets behind their mechanism (like the identity of their processor core?!) are ever disclosed, then their security is broken. So, really, what they’re saying is that they are one disgruntled employee (or licensee) away from being completely worthless.
Fortunately, that’s probably not true; what you’re parroting is just marketing BS designed to scare people. It is entirely practical, and highly desirable, to design a hardware root of trust whose entire design is open to scrutiny and that requires no secrecy except for the specific private key used to apply digital signatures.
Their technology probably isn’t developed by idiots, but their marketing pitch certainly is. One only has to look at the enormous supply of vulnerabilities in systems that were “secret inside” to realize that sunshine, as always, is the best disinfectant. If it remains a secret, no one can tell whether it’s effective… except its purveyors, and, sooner or later, the adversaries.
Conclusion? This is probably overpriced and, at best, no more effective than the trusted-boot solutions already available throughout the semiconductor industry. They may claim to be “experts”, but they sure don’t talk that way.
Easy there, cowboy. Although the type of CPU used in Elliptic’s chip is (currently) undisclosed, its identity is not vital to the chip’s operation. Any number of CPUs could be used in its place, and the chip would still operate more or less the same. Elliptic simply chose to use that CPU because it was easy/convenient/cheap.