So you just shipped that new fancy-dancy Internet of Things (IoT) thingamajiggy with the kung-fu grip and the volume that goes to 11. Congratulations. That is, until you start tossing and turning at night, knowing that, for profit’s sake, you omitted security. I mean, really, who is going to be interested in hacking a thingamajiggy, right? Nothing of interest there, so why waste money on unnecessary security?
Except that what’s of interest may not be the thingamajiggy, but rather what it’s connected to. No one cares about the front door; they’re interested in the safe hidden back in the bedroom closet. It’s just that it’s easier to get to the safe via the front door if it’s not locked.
But, of course, it’s too late; you’ve already shipped product. And the breaches are waiting to happen. No more sleep for you!
If only there were a way to retroactively add security… You had the foresight to build in software upgradability, but there is no hardware upgradability, right?
Well, right. But what if you could add security using your existing hardware, with only a software modification? That’s what Intrinsic ID is proposing with a couple of new products.
If you’re not familiar with Intrinsic ID, they have some fundamental physically unclonable function (PUF) technology based on the power-up state of SRAM blocks. Given a specific block, it will power up the same way every time (mostly – more on that in a sec) – and yet that power-up state will be different from the state of any other device powering up. The governing phenomena are the intrinsic process variations across a largish number of transistors and metal lines and resistors and everything else. (You don’t want environmental variations to matter – you want the same result in January or August.)
This can be really useful, since that power-up state then serves as an intrinsic signature – or intrinsic ID – for each individual chip. It’s statistically possible for two chips to end up with the same ID, but the odds are in the range of 1 in 10⁸⁰. Yeah… not likely. Because there is also some very slight fuzziness surrounding the power-up state, it’s also possible to power up with a wrong ID, but those odds are about 1 in a billion. (And a power cycle fixes that if it happens.)
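To make that fuzziness concrete, here’s a minimal, self-contained sketch (not Intrinsic ID code) of how you might measure it: XOR two power-up snapshots of the same SRAM block and count the bits that differ. On the same device, only a small fraction of bits typically flips between power-ups; between two different devices, roughly half the bits differ.

```c
#include <stdint.h>
#include <stddef.h>

/* Count the bits that differ between two snapshots of an SRAM block.
 * Illustrative only; n is the block size in bytes. */
static unsigned hamming_distance(const uint8_t *a, const uint8_t *b, size_t n)
{
    unsigned dist = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t diff = (uint8_t)(a[i] ^ b[i]);
        while (diff) {
            diff &= (uint8_t)(diff - 1);  /* clear lowest set bit */
            dist++;
        }
    }
    return dist;
}
```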
So what good is this for devices that are already deployed? Well, it’s entirely likely that those devices have SRAM in them. And, if the gods are smiling on you, you have more SRAM than you really need. Like, maybe 0.5K or 1K bits that aren’t really needed for everyday operation.
If that’s the case, then you can reallocate SRAM by sequestering that bit of memory for use by the boot loader, leaving the rest of the SRAM to do what it does. You now have the basis for adding security – keys, certificates, and public key functionality – all via software.
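How you sequester that memory is toolchain-specific, but on a typical GCC-based embedded build it might look something like the sketch below. The section and symbol names are illustrative assumptions on my part, not anything mandated by Intrinsic ID.

```c
#include <stdint.h>

#define PUF_SRAM_BYTES 128  /* ~1 Kbit; see the sizing discussion below */

/* Assumes a GCC-style toolchain with a ".noinit" section that the C
 * runtime startup neither zeroes (.bss) nor copies over (.data), so
 * the power-up contents survive until the boot loader reads them. */
__attribute__((section(".noinit")))
volatile uint8_t puf_sram[PUF_SRAM_BYTES];
```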
Key technology
How might that work? Well, it’s also related to a side question: exactly how much SRAM? 0.5K? 1K? The idea here is that the signature delivered by the PUF can be used to generate a key. And that key won’t be stored anywhere; anytime it’s needed, it’s organically recreated from that SRAM state.
That explains why you need to dedicate the SRAM to this function; you can’t just power up, use the SRAM to create a key, store the key, and then release the SRAM for normal system use. Once you’ve stored the key, you’ve added vulnerability. So you need to keep that SRAM state intact for future use when you need the key.
Which brings us back to the SRAM block size thing. How much you need depends on how long a key you want to use. For a 128-bit key, you can get away with less than 0.5K; 1K will give you enough entropy for a 256-bit key.
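Intrinsic ID hasn’t published BroadKey’s API here, but the general shape of an SRAM-PUF key flow is a fuzzy extractor: a one-time enrollment turns the raw power-up state into public “helper data,” and every subsequent boot combines the (slightly noisy) power-up state with that helper data to reconstruct the exact same key. The function names and sizes below are hypothetical, just to show the flow.

```c
#include <stdint.h>

#define KEY_BYTES    32   /* 256-bit key; needs roughly 1 Kbit of SRAM */
#define HELPER_BYTES 64   /* illustrative size only */

/* One-time enrollment (e.g., at manufacturing): turn the raw power-up
 * state into public helper data, which can safely live in NVM. */
int puf_enroll(const uint8_t *sram_state, uint8_t helper[HELPER_BYTES]);

/* Every boot: error-correct the noisy power-up state against the
 * helper data and re-derive the identical root key, in RAM only. */
int puf_reconstruct(const uint8_t *sram_state,
                    const uint8_t helper[HELPER_BYTES],
                    uint8_t key[KEY_BYTES]);
```

Note that only the helper data gets stored, and it’s not secret: without physical access to this particular chip’s SRAM, it reveals nothing about the key.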
This fundamental capability is provided by Intrinsic ID’s new BroadKey product. It gives you basic key functionality. Depending on how much space you have for storing code, they have three (well, four, sort of) versions. The –Light version comes in two editions: an 8-KB one that handles only key reconstruction, and a 15-KB one that also includes the code for enrolling the device when it first comes up. Because enrollment is a one-time function, the smaller edition lets you load that code some other way and destroy it afterwards, rather than using up footprint in a constrained device.
The next version is –FLEX, which adds the ability to wrap keys, derive new keys from the root key, and manage keys. It does this by adding both their iRNG random-number capability and an AES module, for a 20-KB footprint. The top-level –FLEX-E version does all of that and adds elliptic-curve cryptography capability, in a 25-KB footprint.
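The reason to derive keys rather than reuse the root directly is compartmentalization: if a derived key leaks, the root key and its other children stay safe. Here’s a sketch of that pattern – again, not BroadKey’s actual API; kdf() stands in for an HMAC- or CMAC-based derivation function.

```c
#include <stdint.h>
#include <string.h>

#define KEY_BYTES 32

/* Hypothetical one-way derivation: out = KDF(root, label). */
void kdf(const uint8_t root[KEY_BYTES], const char *label,
         uint8_t out[KEY_BYTES]);

void derive_working_keys(const uint8_t root[KEY_BYTES])
{
    uint8_t identity_key[KEY_BYTES];
    uint8_t wrap_key[KEY_BYTES];

    kdf(root, "device-identity", identity_key);  /* for authentication */
    kdf(root, "key-wrapping",    wrap_key);      /* wraps stored keys  */

    /* ...hand the keys to their subsystems, then scrub local copies... */
    memset(identity_key, 0, sizeof identity_key);
    memset(wrap_key, 0, sizeof wrap_key);
}
```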
Citadel Protection
Once you’ve got the basic key stuff under your belt, there’s so much more that you could do. Security artifacts are about more than just device keys. You may have other keys that aren’t derived from the root key. And there are certificates. And there’s provisioning and issues of trusting apps.
All of this higher-level functionality is provided by a product Intrinsic ID calls Citadel, which builds on the BroadKey fundamentals, adding about 20% more to the footprint (for code that will reside in the system). Citadel is really a toolbox with elements that can be deployed at various stages of design, manufacturing, or deployment.
For keys, it includes BroadKey for software implementations or, if you’re still designing your hardware, the pre-existing hardware-equivalent Quiddikey IP. On the manufacturing side of things, they support integration (installation, management of production speed, and creation of a log of the provisioned devices). Support includes wafer-level and contract manufacturing.
Beyond this, they have a key manager, supporting asset management, tracking, and audits; a certificate manager, supporting signing, certificate authority capability, allocation, and revocation; and an application manager, handling app keys, digital rights management, and in-field provisioning. Certificates can be stored in NVM.
(Image courtesy Intrinsic ID)
How much of this you use, of course, depends on the space available – in particular, for a retrofit on existing deployed units. It’s all about adding some security, even if it’s not possible to include all the bells and whistles. Clearly, if you’re still in the design stages, then you can ensure that you provide enough space to handle the code and computation.
So if you find yourself tossing and turning, wondering when your company will achieve headlines of the not-so-helpful sort – portraying your device as the entryway for some major attack – then you might still be able to fix that.
More info:
What do you think of Intrinsic ID’s approach to adding security to devices that are already deployed?