“Any sufficiently advanced technology is indistinguishable from magic.” –
Arthur C. Clarke
You have got to be kidding me. I mean, I’m an engineer. I know how stuff works. And you’re telling me you can somehow snag my computer’s encryption keys out of thin air? No way. No. @%$#-ing. Way.
Way.
I’ve seen it happen. I didn’t believe it at first, but there’s nothing quite like a live demonstration to make you a convert. It’s time to stock up on tinfoil hats.
Here’s the background: Practically every computer, cell phone, tablet, cable TV decoder, satellite box, smartcard, modern passport, or other gizmo uses encryption in some way. We encrypt our computer’s browser passwords (and sometimes our computer’s entire file system). The cable company encrypts our user ID; cell phones encrypt our transactions; tablets encrypt our passwords; credit cards encrypt our financial data; passports encrypt our identifying information; and so on. Most of us protect this vital data with weak and flimsy passwords, but that’s a different problem. These devices all use hardware encryption, and both the encryption algorithms and the circuitry that implements them are genuinely hard to break.
AES (Advanced Encryption Standard) is the most commonly used encryption method, and 256-bit AES is the gold standard for commercial-grade encryption. It’s generally considered hack-proof for anyone without governmental backing. We all feel safe knowing that our credit cards, passports, and cable TV boxes are protected by AES-256 encryption, right?
And as long as you use a strong password, everything’s copacetic, right?
Not even close. Turns out, you can reverse-engineer the rock-solid AES encryption. And you can do it from 10 feet away. Without even touching the box. What the #@%!?
Not only is it doable, it’s doable in multiple ways, using nothing more than an oscilloscope. Start by sticking an oscilloscope probe on your processor’s power pin(s) to measure the current the chip is drawing. With a little practice, you can figure out what instructions the chip is running.
Surprised? In hindsight, it seems pretty obvious. Every instruction on every CPU uses a different mixture of circuitry. For instance, a multiplication instruction uses the chip’s hardware multiplier, which is a big chunk of circuitry that draws measurable power when it’s active. Or, if your CPU doesn’t have a hardware multiplier, the MUL instruction will likely iterate through the adder a bunch of times, which is also detectable. Same goes for most other instructions. In theory, all you have to do is set up some test code to run through the chip’s instruction set, measure the current for each one, and build yourself a tidy little lookup table that maps scope traces to instructions.
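To make that lookup-table idea concrete, here’s a toy sketch in Python. The instruction names, trace shapes, and noise levels are all invented for illustration – real captures would come off an oscilloscope – but the mechanics are the same: average a power “template” per instruction, then classify an unknown trace by nearest match.

```python
# Toy sketch of instruction fingerprinting via power templates.
# All traces here are synthetic stand-ins for real oscilloscope captures.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-instruction power signatures (one averaged trace each).
templates = {
    "MUL": np.array([0.2, 0.9, 1.0, 0.8, 0.3]),   # hardware multiplier lights up
    "ADD": np.array([0.2, 0.4, 0.5, 0.4, 0.2]),   # small adder blip
    "NOP": np.array([0.1, 0.1, 0.1, 0.1, 0.1]),   # near-idle draw
}

def classify(trace):
    """Return the instruction whose template is closest (least-squares)."""
    return min(templates, key=lambda op: np.sum((trace - templates[op]) ** 2))

# An "unknown" capture: a MUL trace plus measurement noise.
captured = templates["MUL"] + rng.normal(0, 0.05, 5)
print(classify(captured))
```

A real attack needs far more care (aligning traces, averaging out noise, handling pipelined chips), but the principle really is this simple: different circuitry, different current, different trace.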
But that’s just the first step.
Now that you know (more or less) what code the chip is running, you can spot math-intensive subroutines based on their power signature. Subroutines like, oh, let’s say… AES encryption. Crypto algorithms are necessarily complicated, iterative, and data-intensive. They stand out, relatively speaking, from other code. Easy to spot if you know what to look for.
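As a rough illustration of how a math-heavy routine stands out, here’s a sketch that slides an averaging window across a simulated power trace and flags the region with the highest sustained draw. The trace, draw levels, and window size are made up for the example.

```python
# Spotting a math-intensive burst in a long power capture.
# Simulated trace: quiet housekeeping code, then a hypothetical AES-like burst.
import numpy as np

rng = np.random.default_rng(1)

quiet = rng.normal(1.0, 0.05, 400)   # housekeeping code, low draw
busy = rng.normal(1.6, 0.15, 200)    # crypto-like burst, lots of circuitry active
trace = np.concatenate([quiet, busy, quiet])

win = 50
# Moving average of the draw over each window; sustained activity stands out.
smoothed = np.convolve(trace, np.ones(win) / win, mode="valid")
start = int(np.argmax(smoothed))
print(f"high-activity region starts near sample {start}")
```

The burst was injected at sample 400, and that’s roughly where the detector points. In a real capture you’d also look for the telltale repetition of the encryption rounds, not just raw power.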
But that’s just the second step.
Because cryptographic algorithms like AES are highly iterative – the thing is churning through a 256-bit key, after all – you can also recognize the boundaries of each encryption “round,” or pass through the code. And because said code involves bit-wise manipulation, there will be a lot of single-bit operations: masking, rotating, shifting, etc. And guess what? Each bit – whether a 1 or a 0 – will look different on the oscilloscope trace.
Voilà! You’ve just observed the chip encrypting data, conveniently walking you through the secret encryption key one bit at a time. And you’ve done it all without monitoring a single data line or address bit, or hacking a single byte of code. It’s all visible (though just barely) through nothing more than the power connection.
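That bit-at-a-time leakage can be sketched in a few lines. In this simulation, each trace draws a tiny bit more current at samples where a hypothetical key bit is 1; the bump is buried in noise on any single trace, but averaging thousands of traces recovers every bit. The key value, leakage size, and noise figures are invented for the demo.

```python
# Minimal differential-power-analysis flavor: recover key bits by averaging
# many noisy simulated traces. Nothing here touches real hardware.
import numpy as np

rng = np.random.default_rng(2)
key_bits = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical secret key bits
LEAK = 0.02                           # tiny per-bit power bump
N = 5000                              # number of captured traces

# One sample per key bit: processing a 1 draws slightly more current.
# Each individual trace is swamped by noise (sigma = 0.1 >> LEAK).
traces = rng.normal(1.0, 0.1, (N, len(key_bits)))
traces += LEAK * np.array(key_bits)

mean_trace = traces.mean(axis=0)      # averaging crushes the noise
recovered = (mean_trace > 1.0 + LEAK / 2).astype(int).tolist()
print(recovered)
```

Note the ratio: the leakage is one-fifth the noise on any single trace, yet after averaging 5,000 traces the signal sits several standard deviations clear of the threshold. That’s why “just barely visible” is still fatal – the attacker can always capture more traces.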
Don’t believe it? See for yourself in this video demo.
But it gets better.
Given that all modern chips operate in the megahertz, if not the gigahertz, range, they necessarily radiate some RF noise. Typically, much of that noise is shielded or dampened by the enclosure, but some still leaks out. The FCC (in the United States) and other regulatory bodies control the amount of RF noise you’re allowed to radiate, but no device is completely radio silent. They all emit some random noise.
Except that it’s not random. Like current consumption, the radio-frequency noise that a circuit broadcasts depends entirely on what it’s doing. Which means you can reverse-engineer the device’s activity based on nothing more than over-the-air whispers. From a distance. No contact required.
Creeped out yet?
Sure enough, you can wave an antenna – or even a bent wire – in the general vicinity of a cell phone, a tablet, or a TV set-top box and tease out the carefully guarded encryption keys just like you can using power analysis. You’re still monitoring “power,” in a sense. It’s just radiated power instead of supply power. The concept is the same, and even the equipment and techniques are the same. It’s just far more difficult to prevent. Or even to detect. If someone’s sniffing your encryption keys from 10 feet away, how would you know? Assuming the RF antenna and lab instruments didn’t give them away, that is.
Watch here for a demonstration of how this all works.
What’s the solution? Better passwords won’t solve this problem, because it’s the encryption process itself we’re hacking, not just guessing someone’s birthday or the name of their dog. This attack is absolutely impervious to strong passwords.
Nor can you simply use better shielding. Digital devices are never entirely silent, and the cost and weight penalty of completely shielding a consumer device would be prohibitive. Bear in mind that we’re not talking about “loose lips” or data that’s transmitted in cleartext up to the cloud or over Bluetooth. These are attacks on devices that don’t even have an (intentional) RF interface to speak of, or that are in “airplane mode” and not broadcasting anything (in the usual sense).
The best countermeasure for now seems to be a deliberately obfuscated encryption system. That’s difficult to do in software, but fairly straightforward in hardware. If your AES hardware is, shall we say, optimized for obscurity, it can thwart these power-analysis attacks. And that brings us to this week’s announcement from Rambus.
The company best known for its contentious and expensive DRAM-interface IP recently acquired Cryptography Research, a company that does… well, you figure it out. The combined firm now offers an AES encryption block as “soft” IP, but it works in a way that’s designed to mask its operation. Rambus says it is “two orders of magnitude” harder to crack than existing AES circuitry, which presumably makes it tough enough that hackers won’t try. Or at least, that attempted attacks will be a lot more obvious, expensive, and time-consuming.
The company admits that its new AES IP is slower than existing solutions, at two clock cycles per round instead of the usual single cycle. And, at around 100,000 gates, it’s big. Future versions may trade off some of the security features for smaller die area and/or more speed. But security ain’t always cheap. And, according to an unknown 17th-century philosopher, the price of liberty is eternal vigilance.
There is a long history of security solutions getting better as security flaws are publicised.
What is to keep someone from implementing an unnecessary set of 32-bit-wide parity registers for a lot of sensitive data? Even if you can see power signatures that tell you when data is processed on an FPGA, a designer can take steps to add parallel processes that make power signatures useless for data mining. The approach I mentioned is just one of many that might make this side-channel attack useless.