I’ve said it before, and I’ll doubtless say it again (actually, now I come to think about it, I’ve said “I’ve said it before, and I’ll doubtless say it again” before, and I’ll doubtless say it again; whatever you do, don’t get me rambling about recursion): I cannot help but don my frowny face when I think about those nefarious cyber-scoundrels who make the world a worse place for everyone. I have naught but loathing for the creators of malware, ransomware, and any other “ware” of this ilk.
Life would be so much easier for hardware designers and software developers if they didn’t have to embed security features in their creations. Furthermore, the ensuing hardware and software would be smaller (more concise), more efficient, and higher-performing if they weren’t weighed down by security features. But cybercrime is the way of the world, so we must deal with it.
At least if you discover a vulnerability in your software, you have the chance to fix it, although the damage may already have been done. It’s a tad more embarrassing to discover a vulnerability in your hardware once your chip has been deployed in the field, at which point I won’t be the only one wearing a frowny face.
The reason for my current contemplations is that I was just chatting with Jason Oberg, who is the Co-Founder and CTO at Cycuity (which means my frowny face is all his fault).
Let’s start with processors. Even if these are standalone devices, we are probably talking about multicore machines. Hopefully, these devices will leverage some form of hardware root-of-trust (HW-RoT), which will contain the keys for cryptographic functions and manage a secure boot process, thereby providing the foundation for the subsequent software chain of trust. The problem is that if a security vulnerability is introduced into the HW-RoT, then you are buggered, if I might make so bold (this is a useful technical term to which I was introduced during my time in industry).
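To make the chain-of-trust idea a bit more concrete, here’s a minimal sketch in C of the check each boot stage performs on the next. I should stress that rot_pubkey, sha256(), and sig_verify() are placeholder names of my own devising, standing in for whatever the real root-of-trust silicon actually provides:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers -- placeholders for what the RoT silicon provides. */
extern const uint8_t rot_pubkey[64];            /* immutable key baked into the RoT */
bool sha256(const uint8_t *buf, size_t len, uint8_t digest[32]);
bool sig_verify(const uint8_t pubkey[64], const uint8_t digest[32],
                const uint8_t sig[64]);

/* Each boot stage verifies the next stage's image before handing over
 * control; if any link in the chain fails, we refuse to boot rather
 * than execute untrusted code. */
bool verify_next_stage(const uint8_t *image, size_t len, const uint8_t sig[64])
{
    uint8_t digest[32];
    if (!sha256(image, len, digest))
        return false;
    return sig_verify(rot_pubkey, digest, sig);
}
```

The point is that every later link inherits its trust from this first one, which is why a vulnerability baked into the HW-RoT itself is so catastrophic: there’s nothing below it to catch the problem.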
The designers of today’s high-performance processors leverage myriad cunning techniques, such as out-of-order execution, which takes advantage of instruction cycles that would otherwise be wasted, and speculative execution, in which work is performed before it is known whether that work is actually needed, so as to avoid the delay that would be incurred by waiting until that need is confirmed, if you see what I mean.
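To see what can go wrong, consider the now-infamous bounds-check-bypass pattern at the heart of Spectre variant 1. The code below is a hypothetical victim function, but the pattern is the well-documented one: during speculation, the processor may execute the body of the if before the comparison has actually resolved:

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];            /* sits near secret data in memory */
uint8_t array2[256 * 512];     /* probe array: one cache line per possible byte value */

void victim(size_t x)
{
    if (x < 16) {                           /* branch predicted "taken" after training */
        uint8_t secret = array1[x];         /* speculative out-of-bounds read */
        uint8_t tmp = array2[secret * 512]; /* leaves a secret-dependent cache footprint */
        (void)tmp;
    }
}
```

Architecturally, nothing leaked, because the mis-speculated results are thrown away. Microarchitecturally, the cache remembers which line was touched, and that’s the side channel.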
The issue is that, if the device’s designers aren’t careful, a lot of state information inside the processor may end up being exposed, even to the extent of being shared by secure applications and user-mode applications. Now imagine the problems when multiple threads are running on the same processor core. These problems are only exacerbated in a multicore machine in which threads may be dynamically switched from core to core under the control of a load-balancing algorithm, which may itself be implemented in hardware or software. These “transient execution vulnerabilities” make the processor susceptible to microarchitectural timing side-channel attacks (see Spectre and Meltdown, for example).
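How does an adversary actually observe that state? Timing. A cached load is measurably faster than an uncached one, so a bare-bones flush-and-reload probe (x86-specific, and simplified to the point of caricature) looks something like this:

```c
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, __rdtscp (x86 only) */

/* Step 1: flush the line so any later hit must come from the victim. */
static void flush(volatile uint8_t *addr)
{
    _mm_clflush((const void *)addr);
}

/* Step 2: time a single load. A fast load means the line was cached,
 * i.e., the victim's transient execution touched it. */
static uint64_t probe(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*addr;                  /* the timed access */
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}
```

Flush every line of the probe array, trigger the victim, then probe each line in turn; the one that comes back fast tells you the value of the secret byte.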
Another scenario involves the design of a system-on-chip (SoC) device that features a cluster of processor cores acquired from a third-party IP provider. A lot of companies these days are creating artificial intelligence/machine learning (AI/ML) devices that augment the processor cluster with hardware accelerators and artificial neural networks (ANNs). These companies are now including security features in their devices. For example, as I mentioned in my Generative AI is Coming to the Edge column, the new Ara-2 device from Kinara (which can run an entire generative AI model like Stable Diffusion locally) includes support for secure boot and encrypted memory.
OK, this is where we get to the good stuff. The guys and gals at Cycuity offer a tool called Radix, which integrates into the verification environments provided by high-end Electronic Design Automation (EDA) vendors like Cadence and Mentor (now Siemens EDA). In fact, Radix comes in two flavors (or deux saveurs for our French-speaking friends): Radix-S for simulation and Radix-M for emulation.
Radix comes in two flavors, one for simulation and one for emulation (Source: Cycuity)
As Jason told me, “Radix uses a technique called ‘information flow,’ which sounds fancy but basically it’s just tracking to see where information can travel and does travel in the design.” In the context of something like a HW-RoT, Radix can very concisely model where secret information travels and report this to the users, saying “This is somewhere that this data should never have gone,” so they can go and debug the issue. Similarly, in the case of transient execution vulnerabilities, Radix can tell the user, “You have some secret data that actually ended up in some state that is observable by an adversary.” The bottom line is that Radix provides a way for chip designers to specify their security requirements and then helps verify that those requirements are being met.
Another way to think about this is that Radix’s patented information flow analysis tracks and traces all security assets, independently of their values, as they flow across the chip and through logical and sequential transformations. This automated asset tracking offers unique insights into the design’s security behavior, thereby providing a foundation for powerful security discovery and verification.
Automated asset tracking (Source: Cycuity)
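To illustrate the idea (and I stress this is a toy software model of my own, not how Radix itself works internally), imagine every value in the design carrying a label that says whether it was derived from a secret:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy information-flow tracking: the "secret" label travels with the data. */
typedef struct {
    uint32_t value;
    bool     secret;
} tracked_t;

static tracked_t t_xor(tracked_t a, tracked_t b)
{
    /* The label propagates through every transformation,
     * independently of the values involved. */
    return (tracked_t){ a.value ^ b.value, a.secret || b.secret };
}

static void t_output(const char *port, tracked_t v)
{
    if (v.secret)
        printf("VIOLATION: secret-derived data reached %s\n", port);
    else
        printf("%s <= 0x%08x\n", port, (unsigned)v.value);
}

int main(void)
{
    tracked_t key  = { 0xdeadbeef, true  };  /* asset: must never leak */
    tracked_t data = { 0x12345678, false };
    t_output("debug_bus", t_xor(key, data)); /* flagged as a violation */
    return 0;
}
```

Note that the violation is flagged even though the key’s raw value never appears on the port; masking or encrypting an asset doesn’t make the flow invisible, which is exactly the “independently of their values” point.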
But wait, there’s more, because Radix doesn’t stop at secure boot. In the case of companies that both design a device and create the application software that will run on that device, Radix can alert the software developers to any vulnerabilities that appear when their applications are executed.
Of course, companies of this type—ones who control the entire hardware and software stack—are few and far between (Apple, Google, Microsoft…). What about the case where one company creates the chip and then myriad other companies create applications to run on that chip?
As we’ve already discussed, if a chip’s designers use Radix, they can verify that things like the firmware and the boot process are secure. Meanwhile, the creators of the software applications have tools that can check that their code is protected against security vulnerabilities like buffer overflows. However, the software developers cannot 100% guarantee that their cunning creations won’t interact with the underlying hardware in such a way that a security susceptibility raises its snout.
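For the record, the buffer overflow is the canonical software-side bug those tools hunt for, and it’s as simple as it is deadly:

```c
#include <stdio.h>
#include <string.h>

void risky(const char *input)
{
    char buf[16];
    strcpy(buf, input);   /* overflows buf if input is 16+ bytes, trampling the stack */
    printf("%s\n", buf);
}

void safer(const char *input)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", input);  /* bounded copy, always terminated */
    printf("%s\n", buf);
}
```

A software analysis tool can catch risky() on its own; what it cannot see is how even perfectly correct code interacts with a speculating core underneath it.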
With the exponential generation of data and the ever-increasing connectedness of everything, the importance of security can only increase. At some time in the future, I can envision a software developer creating a program, picking a target hardware platform, and running their code on a virtual machine in the cloud. This virtual machine would include Radix, which would report any security vulnerabilities specific to that application running on the specified platform (happy face). Of course, there’s always the possibility that slimy cyber-scoundrels could do something similar, thereby enabling them to uncover vulnerabilities they could use for their own despicable deeds (frowny face).
My head hurts. What about you? Do you have any thoughts you’d care to share about any of this?