Security Blanket

Protecting Your System in an Age of Paranoia

The year is 2010. Alone in the kitchen, 8-year-old Mikey pulls a cereal container down from the cupboard. He presses the “open” button. A tiny camera with a wide-angle lens grabs an image. Inside the lid, a low-cost embedded system with hardware video processing locates Mikey’s key facial features in the image and creates an identification map. It then downloads from the household wireless network a current database of the family members allowed access to that cereal at this time of day. Mikey is on the “disallowed” list. The lock holds fast. A text notification is already on its way to both parents’ mobile phones. Mikey is busted!

Security is a growing concern in almost every type of system design today. Some applications have a more pressing need than others, of course. The consequences of Mikey subverting the automated cereal protection system and downing a few unauthorized grams of carbohydrates are far less severe than, say, a security failure in an airliner engine control system. Almost all systems these days have at least rudimentary security concerns. In a few cases, security is paramount.

A somewhat undesirable corollary to Moore’s Law might say that the more gates we have available, the more we’ll tend to use. Why connect a simple switch directly to a control line when we can add a microcontroller that allows us to use a button, de-bounce the press action, check the status of the day/night condition, and illuminate the appropriate status LED? We sprinkle superfluous software and hardware into our systems like Emeril adding the final “Bam!” of seasoning to some exotic culinary creation.
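In case you’re wondering what all those extra gates end up doing, here’s a minimal sketch in C of that gratuitously upgraded switch. Every register name here is invented for illustration; in real firmware they’d be memory-mapped I/O on whatever microcontroller you chose.

```c
/* Sketch only: in real firmware these would be memory-mapped hardware
 * registers; here they are plain variables so the sketch compiles standalone. */
#include <stdint.h>
#include <stdbool.h>

#define DEBOUNCE_SAMPLES 5              /* consecutive matching reads required */

static volatile uint8_t BUTTON_PIN;     /* reads 1 while the button is pressed */
static volatile uint8_t AMBIENT_SENSOR; /* 1 = daylight, 0 = dark */
static volatile uint8_t LED_DAY;        /* "day" status LED */
static volatile uint8_t LED_NIGHT;      /* "night" status LED */

/* Treat the press as real only after several identical samples in a row. */
static bool button_pressed_debounced(void)
{
    for (uint8_t stable = 0; stable < DEBOUNCE_SAMPLES; stable++) {
        if (!BUTTON_PIN)
            return false;               /* contact bounce or release: ignore it */
        /* a real version would wait a millisecond or two between samples */
    }
    return true;
}

void poll_switch(void)
{
    if (button_pressed_debounced()) {
        if (AMBIENT_SENSOR)
            LED_DAY = 1;                /* daytime: light the day LED */
        else
            LED_NIGHT = 1;              /* nighttime: light the night LED */
    }
}
```

All of that, of course, to do the job of one wire.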

The consequence of this complexity explosion is a trend toward systems with a plethora of security vulnerabilities. Usually, we don’t care. But in the cases where we do, the difficulty of maintaining rigorous security grows almost exponentially as the complexity of our basic system rises. Throw Moore’s Law into the mix, and you end up with double security holes squared. Not a pretty picture for the paranoid.

If you dare, go deadbolt the door, double-check your belt and suspenders, strap on your helmet, goggles, bullet-proof vest, latex gloves and kneepads, and let’s go explore (cautiously, of course) some of the issues in embedded system security today. First, as an engineer, it’s important that you understand statistics. Mastering the mathematics of probability will let you make one of the most important determinations in system security design – whether you’re protecting against an actual threat or simply a perceived one.

You might think engineers would be pragmatists – practical-minded folk who would never pander to paranoid delusions. In my experience, however, quite the opposite is true. Trained problem solvers, engineers tend to work to eradicate every possible failure mechanism, often without weighing the probability of a particular failure against the cost of preventing it.
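The arithmetic behind that judgment is nothing more than expected-value accounting. Here’s a trivial sketch in C; every dollar figure and probability below is invented purely for illustration – plug in your own estimates.

```c
#include <stdio.h>

int main(void)
{
    /* All numbers are hypothetical placeholders. */
    double p_breach        = 0.02;      /* estimated probability of the attack succeeding */
    double cost_of_breach  = 500000.0;  /* loss if it does (dollars) */
    double cost_of_defense = 250000.0;  /* cost of the proposed countermeasure (dollars) */

    double expected_loss = p_breach * cost_of_breach;

    printf("Expected loss without defense: $%.0f\n", expected_loss);
    if (cost_of_defense > expected_loss)
        printf("The defense costs more than the threat it prevents -- think twice.\n");
    else
        printf("The defense is cheaper than the expected loss -- probably worth it.\n");
    return 0;
}
```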

As a case in point, I once worked on a large software development project in the very early days of object-oriented programming with C++. Our team had written hundreds of thousands of lines of the stuff (poorly, I might add, as all of us were complete novices in object-oriented design, and decent compilers and debug tools didn’t yet exist). Just when we were at the peak of our development-project frustration, the big lockdown notice came down from on high. The company was afraid of a security breach by our competitor, and our source code had to be protected at all costs. Work almost ceased while elaborate procedures were developed to thwart these imagined thieves.

Personally, I thought that the best strategy we could have employed was to just give our source code to our competitors. Simply put it in a box and mail it to them. After months of effort, we could barely get the stuff to compile, let alone perform any facsimile of its intended function, and we’d written it ourselves. Even if our competitor was smart enough to get it to build successfully, the debug effort alone would surely have set them back years. The point, however, is that we had reacted irrationally to a perceived threat without doing a sound analysis of the cost of the security compared with the cost of a security failure. Our project was set back months, launched late, and missed an important market window as a direct result of our paranoid over-reaction.

There are a number of types of security to consider in systems design. Closest to home for us as engineers is, of course, the security of our intellectual property. The last thing we want to picture is some shyster stealing our hard-earned design ideas and competing with us in the market using our own technology. (…unless we’re doing open-source software development, of course, in which case we’re helping the technological proletariat revolution rise up to defeat the IP-mongering demons of corporate greed. Power to the people!) Beyond our own IP security, if we’re developing a subsystem or chipset that’s used by downstream designers, we need to be concerned about design security for our OEMs as well.

With outsourcing and globalized manufacturing becoming more the rule than the exception these days, there is a very real risk of our designs being stolen by the very people we trust to help us realize them. Overbuilding is probably the most common theft mechanism hitting systems designers today. It works like this: manufacturers work hard all day building the units that you’ve ordered and shipping them to you in a timely manner. They then work hard all night building more of your product to sell on the black market for themselves, using standard parts acquired through normal channels. These identical products (they were made on the same assembly line, after all) carry a much higher profit margin than the ones you’re selling, of course.

The best defense against overbuilding is to have some component in your system for which you can control or monitor the inventory, or that you can license or activate only in the hands of an authorized user. If your system contains an ASIC, that’s a good place to start. Unless the overbuilders have a way to clone your ASIC (we’ll talk about cloning in a minute), they won’t be able to build working systems without tapping into your exclusive supply chain.
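To make the “activate only in the hands of an authorized user” idea concrete, here’s a minimal sketch in C of a firmware activation check. The serial number, the secret, and the toy mixing function are all placeholders standing in for whatever real provisioning scheme (and proper keyed MAC) you would actually use.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical device-unique ID burned in at your own facility. */
static const uint32_t DEVICE_SERIAL = 0x00012345u;

/* Toy mixing function standing in for a proper keyed MAC. */
static uint32_t expected_code(uint32_t serial, uint32_t secret)
{
    uint32_t x = serial ^ secret;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return x;
}

/* Firmware refuses to run normally unless the unlock code matches --
 * a board built on the night shift never receives a valid code. */
static bool device_is_activated(uint32_t provided_code)
{
    const uint32_t factory_secret = 0xC0FFEE42u;  /* never given to the manufacturer */
    return provided_code == expected_code(DEVICE_SERIAL, factory_secret);
}

int main(void)
{
    /* The code an authorized customer would receive at activation time. */
    uint32_t code_from_customer = expected_code(DEVICE_SERIAL, 0xC0FFEE42u);
    printf("activated: %s\n", device_is_activated(code_from_customer) ? "yes" : "no");
    return 0;
}
```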

FPGAs can be used in a capacity similar to ASICs, but they can also open a security hole if you’re not careful how you use them. Since FPGAs are standard parts, unscrupulous manufacturers have an easy supply available to them. All they need to do is capture (or re-create) your configuration bitstream, and they’re right back building working systems again. FPGA manufacturers offer a variety of schemes to thwart these thieves, with varying degrees of effectiveness and design cost. SRAM-based FPGAs (the most common devices) typically rely on bitstream encryption strategies to keep your IP out of the evildoers’ hands. Non-volatile devices like flash- and antifuse-based FPGAs rely on different schemes that we’ll discuss separately.

The typical attacks on the ASIC or FPGA (custom logic) part of a design are cloning and reverse-engineering. Cloning is clearly the easy one, from the thieves’ point of view. If you’re worried about reverse-engineering, you should first get out that probability calculator and determine whether such an attack on your design would be financially justified for the thief. Reverse-engineering is an expensive and time-consuming crime. For ASICs, the oft-discussed approach is to examine the die under a microscope and plot the locations of metal traces and vias, eventually unraveling the netlist of the design. If your design happens to be a 90nm ASIC with 10 layers of metal and a billion or more transistors, I’d say buy the thieves a microscope and wish them luck. Unless they’re way smarter than most of us, it’ll be decades before they have a working replica. Their black-market Speak & Spell, on the other hand, might be almost ready today.
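If you want to put a rough number on that claim, the back-of-the-envelope arithmetic looks like this. The extraction rate is pure guesswork, but the order of magnitude is the point.

```c
#include <stdio.h>

int main(void)
{
    /* Purely illustrative numbers -- adjust to taste. */
    double transistors         = 1.0e9;   /* a billion-transistor 90nm ASIC  */
    double mapped_per_day      = 5000.0;  /* optimistic manual extraction rate */
    double working_days_per_yr = 250.0;

    double years = transistors / mapped_per_day / working_days_per_yr;
    printf("Estimated extraction effort: %.0f person-years\n", years);
    return 0;
}
```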

FPGAs (in the old days) made much more attractive targets. Since the bitstream is stored outside the device in an external PROM, the programming bits could be intercepted between the PROM and the FPGA, and the design could then be easily cloned. To prevent this, SRAM-based FPGA manufacturers now allow the bitstream to be encrypted and an encryption key to be programmed into the FPGA device itself. Only an FPGA programmed with the correct key can load the encrypted bitstream. You can have your device manufactured in an untrusted environment, then have the encryption keys added in your own facility or by a trusted third party. Stealing your design now becomes a Bondesque adventure of feature-length proportions, complete with shady characters, secret codes, and cash payoffs – lots of fun to write about, but less than practical for most commercial purposes.
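To see why the intercepted bits are useless without the key, here’s a toy version of the idea in C. The cipher below is a throwaway stand-in (real devices use on-chip hardware such as AES), and none of the names reflect any vendor’s actual tools or interfaces.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Toy keystream generator standing in for the FPGA's on-chip decryption block. */
static uint8_t keystream_next(uint32_t *state)
{
    *state = *state * 1664525u + 1013904223u;   /* simple LCG, illustration only */
    return (uint8_t)(*state >> 24);
}

/* XOR with the keystream: the same routine encrypts at the factory and
 * decrypts inside the FPGA as the bitstream arrives from the external PROM. */
static void crypt_bitstream(const uint8_t *in, uint8_t *out, size_t len, uint32_t key)
{
    uint32_t state = key;
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ keystream_next(&state);
}

int main(void)
{
    const uint8_t bitstream[8] = { 0xDE, 0xAD, 0xBE, 0xEF, 0x01, 0x02, 0x03, 0x04 };
    uint8_t encrypted[8], decrypted[8];

    crypt_bitstream(bitstream, encrypted, sizeof bitstream, 0xB007C0DEu); /* at the factory */
    crypt_bitstream(encrypted, decrypted, sizeof encrypted, 0xB007C0DEu); /* inside the FPGA */

    /* Without the matching on-chip key, the intercepted PROM contents are just noise. */
    printf("round trip ok: %s\n",
           memcmp(bitstream, decrypted, sizeof bitstream) == 0 ? "yes" : "no");
    return 0;
}
```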

The non-volatile FPGAs like antifuse and flash devices are probably inherently more secure. There is always the microscope trick (described above), but with antifuse, it is extremely difficult to tell which junctions are fused and which are not. Without that distinction, all antifuse parts look alike. Flash is similar to antifuse, except that it is reprogrammable. If you plan to put a scheme into place to reprogram it in the field, you face similar challenges to those of SRAM FPGAs, with similar antidotes.

Beyond protecting your own interests and IP, there’s the issue of protecting those downstream from you – your OEMs and end users. They have issues with protection of their data and designs that live inside or flow through your product. In the second part of this series, we’ll look at their unique problems and the methods available to secure them as well. Until then, remain vigilant. Keep a sharp watch and always remember to wear your foil hat. You never know who’s listening.
