
Is Your Design Process Secure?

Dealing With the Human Factor

[Editor’s note: this is the seventh and final installment in a series on Internet-of-Things security. You can find the introductory piece here and the prior piece here.]

We’ve dedicated lots of pages to lots of technical aspects of Internet of Things (IoT) security. With the “easy” technology bits behind us, it’s now time to address the hard part: the human factor. Ultimately, it’s people who design and build these things. So… how do you know they did it right? How do you know that there’s not a mole on the team with less-than-altruistic motives?

The thing we’re trying to avoid here is ending up with a system that – accidentally or intentionally – has missing or added functionality that could compromise both the system and the network to which it’s connected. But those two qualifiers make for two separate problems that we’ll address separately: how do you make sure that you haven’t accidentally done something wrong, and how do you make sure that there isn’t someone intentionally doing something wrong?

And yeah, this is a really squishy topic. Especially when you’re basing your confidence on a conversation that might start something like, “OK folks, are you sure you thought of absolutely everything that could go wrong?”

And let’s set some expectations here: this is me ruminating on difficult issues. I’ll hit on some things but might totally miss others. Feel free to add constructive ideas in the comments.

Say What You Do, Do What You Say

Ultimately, having a solid system (barring sabotage – more on that in a minute) boils down to two elements: requirements and validation against requirements. And there’s nothing really new here; this has always been the case. It’s just that the stakes have changed now that some cheap little widget, one that would have been of no consequence yesterday, will tomorrow be connected to a high-value network.

Requirements management has long been used for safety-critical design. But such extreme design processes – expensive and slow – would kill the promise of the IoT. Yes, that kind of rigor might do wonders to ensure that our thermostat works as promised, but we won’t sell many thousand-dollar thermostats.

That said, requirements tracking is but one aspect of this safety-critical world, and, in principle, it could be moved into other design spaces if low-cost, easy-to-use tools were available. But there’s a catch: requirements management tends to lean towards old-school planning, where you assemble a large set of requirements up front and then go build them. Today’s agile practices make for a much more fluid set of requirements and timing.

The good news about our more flexible processes is that we can learn quickly by testing whether some idea was good – and if not, we can change it rather than sticking doggedly to it because it’s on the requirements list. But it also means that any requirements management has to be low-overhead and easy to manage.

There are different ways to manage requirements: the human way and, to an extent, the automated way. The goal of both is the same (a toy sketch of the automated version follows the list):

  • Every requirement should be associated with a part of the design that implements the requirement.
  • Every piece of functionality should be associated with a requirement.
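
To make that concrete, here’s what the automated version might look like in miniature. This is a hypothetical sketch, not a real tool: it assumes a convention where every Python function carries a “# REQ: <id>” comment on the line above its definition, and it reports violations of both bullets – requirements with no code, and code with no requirement. The requirement IDs are made up.

    import ast
    import sys

    REQUIREMENTS = {"REQ-001", "REQ-002", "REQ-003"}  # would come from your requirements list

    def audit(path: str) -> None:
        source = open(path).read()
        lines = source.splitlines()
        tagged, untagged = set(), []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                # Look for a "# REQ: <id>" tag on the line above the def
                prev = lines[node.lineno - 2] if node.lineno >= 2 else ""
                if "# REQ:" in prev:
                    tagged.add(prev.split("# REQ:", 1)[1].strip())
                else:
                    untagged.append(node.name)  # functionality with no requirement
        print("Requirements with no implementation:", sorted(REQUIREMENTS - tagged))
        print("Functions with no requirement:", untagged)

    if __name__ == "__main__":
        audit(sys.argv[1])

Real tools are far more capable, of course, but even this toy captures the two-way accounting we’re after.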

The first bit simply says that the design should be complete. Yeah, I know, requirements are also about the battle between marketing and engineering for dominance. So a requirements list is negotiated between the two, but if all the requirements are met, then marketing won. So, as release time nears, engineering must identify a few requirements that can’t be met in time and renegotiate them away. Am I right?

That said, however you got to a final, final set of requirements, you should be able to go through each one and point to the code or chip or logic that handles that requirement. That’s a lot of work to do manually. You might be tempted simply to point to a function and say, “This function covers this requirement.” But… how do you know that’s what the function actually does? You need a code review. That’s more time and people.

Of course, good requirements come with tests to validate those requirements, so many of the functional requirements can simply be tested. If the system performs the designated task, then the requirement has been met. Simple.
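
In code, that pairing can be as lightweight as naming the requirement in the test itself. A minimal sketch, assuming pytest; the requirement ID, the “req” marker, and the clamping function are all made up for illustration:

    import pytest

    def set_target_temp(requested_c: float) -> float:
        """Stand-in for the device call; clamps the setpoint to the safe range."""
        return max(5.0, min(35.0, requested_c))

    @pytest.mark.req("REQ-042")  # custom marker; register "req" in pytest.ini to silence warnings
    def test_req_042_setpoint_is_clamped():
        # REQ-042: the setpoint shall be clamped to the 5-35 degC range.
        assert set_target_temp(100.0) == 35.0
        assert set_target_temp(-20.0) == 5.0
        assert set_target_temp(21.0) == 21.0

A pass gives REQ-042 some evidence behind it, and a scan of the markers gives you the requirement-to-test accounting.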

But there’s another reason for tying requirements to pieces of the design. The second of those bullets above means no Easter eggs. No flight simulator hidden in the washing machine. And no Trojan horses buried deep in the design, just waiting for instructions to unload their nefarious gifts. If, after you’ve tied all requirements to code, you have code left over, then you may have a problem.

What we’re establishing here is a one-to-one correspondence between “functionality” (whatever that means) and requirements. The design should be necessary and sufficient. Period.

Makes you wonder whether there’s a way to automate the process of tying requirements to design artifacts to save time. And there is… to a point. It depends on what parts of the design you’re dealing with. To my mind, there are three aspects to any such system:

  • Software; the good news is that this is language-based.
  • SoC functionality; the good news is that, especially with logic, this is also language-based.
  • Integration; the bad news is that this is ad hoc.

The reason the language-based aspects are good news is that they’re well suited to analysis tools. Any new code will still benefit from a manual review, but by tying chunks of code (hardware or software) to specific modules protected by configuration-management tools, you can much more easily create an accounting between the code and the requirements, and maintain a history of that accounting.
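
One way to picture that history: record a digest of each module at review time, next to the requirement it covers, so any later change to the file visibly invalidates the review. A toy sketch; the JSON ledger and file paths are illustrative, not any particular tool’s format:

    import hashlib
    import json

    def record_review(src_path: str, requirement_id: str, ledger: str = "reviews.json") -> None:
        # Hash the reviewed source and file it under the requirement it satisfies.
        digest = hashlib.sha256(open(src_path, "rb").read()).hexdigest()
        try:
            entries = json.load(open(ledger))
        except FileNotFoundError:
            entries = {}
        entries[src_path] = {"requirement": requirement_id, "sha256": digest}
        json.dump(entries, open(ledger, "w"), indent=2)

    def review_still_valid(src_path: str, ledger: str = "reviews.json") -> bool:
        # A changed file no longer matches its recorded digest: re-review needed.
        entries = json.load(open(ledger))
        current = hashlib.sha256(open(src_path, "rb").read()).hexdigest()
        return entries.get(src_path, {}).get("sha256") == current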

As far as I can tell, this is more work than is typically done – especially for low-priced consumer goods. Would automation tools kill the deal? I did a quick check-in with LDRA. They don’t do requirements tracking per se, but they do help with traceability – the correspondence between requirements and code. And they are seeing a more disciplined approach starting to migrate outside of the safety-critical world. But it doesn’t sound like it’s a rush…

If rock-solid design practices are going to become a thing, then tools and methodologies have to evolve to make them affordable and easy. There could be a big market out there, although… yeah… I’m not sure I’d want to be the sales guy trying to convince a design team to add a layer of tools and discipline… Even if the tools are easy to use, it’s going to be a tough sell.

While most of this discussion has been about ensuring that all requirements are met, tools can also help confirm that there’s no extra code or functionality lurking in the dark. Non-code items are, of course, harder to prove.

The Debug Dilemma

One of the most prevalent “non-required” requirements tends to be a debug port. It’s not required for use by the consumer, but it is required by engineering. Trying to design a system with no debug access is nuts. But, as we’ve mentioned before, debug ports are notorious as backdoors into the system.

So you could have one board for development and one for commercial use. We already do this with dev boards, but a final production board with no debug access? That’s a bit scary. You have to be absolutely 100% sure that any failure on the final board will also be visible on the dev board.

You could propose a final system that has a debug port but a final assembly that doesn’t make the port available. But what does that mean? Use a different box that covers the port? Nope; an attacker can simply remove the box. Remove the PCB traces? Maybe… if the pins to which they attach are buried somewhere inaccessible under a ball grid array.

But the whole point of leaving some debug functionality in place is in case you need it. So you need to be able to get to it. And if you can get to it, so can someone else. So the only real solution takes us back a few weeks: anything that can be plugged into your system should be authenticated. And that includes a debug probe.
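
What might that look like? One plausible shape (a sketch only; real debug-authentication schemes vary by vendor) is a challenge-response handshake built on a per-device secret provisioned at manufacture:

    import hashlib
    import hmac
    import os

    DEVICE_KEY = os.urandom(32)  # stand-in for a per-device key in secure storage

    def device_issue_challenge() -> bytes:
        return os.urandom(16)  # fresh nonce per unlock attempt defeats replay

    def probe_respond(key: bytes, challenge: bytes) -> bytes:
        # The probe proves it holds the key without ever sending the key itself.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def device_unlock_debug(challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)  # constant-time compare

    challenge = device_issue_challenge()
    assert device_unlock_debug(challenge, probe_respond(DEVICE_KEY, challenge))        # authorized probe
    assert not device_unlock_debug(challenge, probe_respond(b"\x00" * 32, challenge))  # impostor

The port stays physically present for field debug, but it does nothing until an authorized probe proves it holds the key.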

The other thing to keep in mind when validating your system is that an attacker may subject the system to all kinds of behaviors that don’t comport with the normal operating conditions. Think of the hacks that required strange power glitches. Those glitches disturb the state of the system without causing a clean restart, and in that nowhere-land state, the attackers can start probing further.

How do you prove your system can’t be hacked in this way? There’s really only one way: give it to a bunch of hackers and have them whack away at it. The problem with this approach? Cost and time. Of course.
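
Short of hiring that red team, you can at least automate a crude first pass in-house: fuzz the externally reachable interfaces with malformed input and see what falls over. A toy sketch; the deliberately fragile parser stands in for whatever your device actually exposes:

    import os
    import random

    def parse_packet(data: bytes) -> dict:
        """Stand-in parser: one length byte, then the payload. Deliberately fragile."""
        length = data[0]  # blows up on empty input
        return {"payload": data[1:1 + length]}

    random.seed(1)  # reproducible run
    failures = 0
    for _ in range(10_000):
        blob = os.urandom(random.randint(0, 64))
        try:
            parse_packet(blob)
        except Exception:
            failures += 1  # a crash here is a finding, not a test error
    print(f"{failures} of 10000 random inputs crashed the parser")

Fuzzing won’t reproduce a power glitch, but it does probe some of that nowhere-land between clean states.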

Ghost in the Machine

The less savory part of this discussion involves sabotage. This assumes that someone on the design team somewhere intentionally compromises the system. Requirements tracking can certainly help here, if done in detail and with good tests to prove that things work – including (or especially) corner cases.

But it’s still possible for more subtle subterfuge to make its way undetected. For example, in the implementation of some required function, a rogue designer could ensure that it was done in a way that maximizes electromagnetic radiation. This would then allow an attacker to monitor the internal workings by using an external antenna. The only way to catch this would be to have a strong requirement that there be no radiation signature – along with a good test to prove it.

So perhaps, with really thorough discipline, you can protect the parts of the design you create yourself. But, increasingly, we don’t do all of our designs: we rely on IP for lots of specialized functionality that we don’t want to reinvent. Just imagine if, say, your team had to learn PCI Express from the spec and write fresh code implementing it simply because you’re designing a board that needs to plug into a PCIe slot. Enormous waste of effort.

So you buy it from someone who has already done all that work. But how do you prove that the function you’re buying – software, HDL, or plug-in module – doesn’t have surprises lurking within? I suppose that, if you have access to source code, you could examine it, but that would be an extraordinary amount of extra work. You’d essentially need to read the PCIe spec, derive your own set of requirements, and then find the code that meets those requirements to make sure that there’s nothing suspicious.

Lots of work, and even then, it can work only if you’re literally taking their source code and working it into your build – so you control the path from source to object. If you receive a module with the object code already built in, then a code review rests on the assumption that the source you’re shown is exactly what was compiled into the module. That’s pretty much impossible to prove.
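
If the vendor’s build happens to be reproducible, there’s at least a path out: rebuild the module yourself from the audited source with the same pinned toolchain and compare digests. A sketch with hypothetical paths and build command (and a big “if”: few embedded toolchains guarantee bit-for-bit reproducibility):

    import hashlib
    import subprocess

    def sha256_of(path: str) -> str:
        return hashlib.sha256(open(path, "rb").read()).hexdigest()

    # Rebuild the module from the audited source tree (hypothetical build command).
    subprocess.run(["make", "-C", "audited-src", "module.bin"], check=True)

    rebuilt = sha256_of("audited-src/module.bin")
    delivered = sha256_of("vendor/module.bin")
    print("match" if rebuilt == delivered else "MISMATCH: delivered object is not the audited source")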

So how do you assure yourself that acquired modules are clean? Well, you can test them thoroughly, but if there really is some code lurking in there that wakes up only under very specific, non-obvious circumstances, then there’s no way that testing is going to uncover it. Remember, here we’re concerned with overt sabotage, not accidental oversights. So there will likely be efforts to conceal.

It’s this generally unsolvable problem that causes so much of the expense and effort for safety-critical systems. You have to go to extraordinary lengths to prove that all parts of the design are clean and that all tools used to create parts of the design are clean and that all acquired functionality is clean. And we know that such a process will never work for the IoT because it’s just too expensive.

In the end, it comes down to one thing: trust. Of the human kind, not the electronic kind. Trust is partly established through reputation, but a reputation has to be built, so someone had to be the first trusting guy.

And this is where you get into unwinnable arguments about how to ensure the government isn’t building in backdoors or that other countries aren’t gradually undermining our technology base. It’s the rare day when you can point conclusively to a smoking gun in its owner’s hand. Mostly we’re left with opinions – often strident – and not much more.

For the small design house, making its way by doing lots of subcontracting projects with nominal margins, bulking up the design process isn’t going to feel good. A conscientious engineering team might lose some sleep worrying whether some design flaw will bring down the IoT, but there are plenty of businessmen sleeping just fine knowing that, if something blows up, it’s someone else’s problem. (Or not? Wonder how VW is feeling about that right now?)

In the end, I’ve found no good answers. The IoT will involve uncountable numbers of different devices, each with different functionality and form factor. Innovation is running rampant, so new ideas are coming all the time. This means that no one-size-fits-all checklist can work. Each design and marketing and architecture team needs to thoroughly think through all of the possible vulnerabilities of their specific system and systematically track and plug those holes.

Accumulated knowledge as captured in lists like the Common Criteria can help us learn from the past, but such lists still must be adapted to each system. Getting certified against them is a great way to establish trust… but it also takes time and money.

And that’s about it. Our tour of security is complete. [chuckle… as if…] Go change the world with your designs. And don’t screw up. No pressure…

6 thoughts on “Is Your Design Process Secure?”

  1. Another reason for clear requirements is the verification part of test and verification. Verification translates roughly into “Does my completed design do what it is intended to do?” If you don’t have documented requirements, how can you answer that question?

  2. True, but with a focus on security, we have two angles: requirements as a way to list all the security features required (which would be a subset of the total requirements), and then the total set of requirements to help ensure there’s no rogue stuff.

  3. Wow, someone is way out of date with their ideas on what requirements management is. You make it sound like it is enforcing a waterfall process, but that could hardly be further from the truth. It is quite possible to be agile and still do requirements management: all you need to do is add the requirements (and traceability) as you develop them. I recommend reading the classic paper by David Parnas and Paul Clements, “A Rational Design Process: How and Why to Fake It.” This was written around three decades ago, so it isn’t news!

  4. I’ve been very vocal about just how big a mistake IoT security is likely to be.

    This week, just a very small part of that nightmare reared its head.

    http://www.theverge.com/2016/10/21/13362354/dyn-dns-ddos-attack-cause-outage-status-explained

    “Today’s DDoS attack can be linked to the Internet of Things”

    A Mirai botnet essentially takes advantage of the vulnerable security of Internet of Things devices, meaning any smart home gadget or connected device anywhere that has weak login credentials. Mirai, a piece of malware, works by scanning the internet for those devices that still have factory default or static username and password combinations. It then takes control of those devices, turning them into bots that can then be wielded as part of a kind of army to overload networks and servers with nonsense requests that slow speeds or even incite total shutdowns.

  5. Hopefully there will be some significant class-action lawsuits that put these idiots out of business and firmly plant a legal line in the sand: either get security right and stand behind it… or get the h*** out of town (this industry).
