“There is no such thing as perfect security, only varying levels of insecurity.” – Salman Rushdie
Spend any time talking with IoT developers and two subjects will inevitably come up. One, they’ll admit they’re worried about security. And two, they’ll talk about wanting to connect to the cloud.
Trouble is, most IoT developers don’t know $%@# about either topic. They know they want both – possibly because someone in authority told them so – but they rarely know where to begin. Not to over-generalize, but most IoT developers have no real experience with either security or cloud connectivity. Their expertise usually lies elsewhere, whether it’s in making elevators, or steam whistles, or industrial turbines. They’re engineering gods within their field, but now they need to graft on this “security” thing with little idea of how to go about it. Sound familiar?
There’s no point in arguing whether you need security features in your product. That’s a given. The question instead becomes, how will you implement it? And what constitutes good (or at least, adequate) security? How can you tell? What does good security look like? And where’s the additional budget for this, boss? And the extension to my schedule? And the additional headcount? Oh, right – none of the above. We’re supposed to add world-class security for free, in zero time, with no in-house talent. I’ll get right on that.
Good thing Inside Secure exists. ‘Cause that’s pretty much what they offer.
Like a lot of security-related companies sprouting up in the IoT weed patch, Inside Secure is actually a fairly old company that we just never paid attention to before. The 20-year-old firm is nominally French, but most of their engineers are in San Diego, with important development centers in Glasgow, Amsterdam, and Finland. So… everyone has a funny accent.
The group cut its proverbial teeth making security software for home/office routers (millions and millions sold) and TV set-top boxes (ditto). They’ve also done work for financial customers (JP Morgan, Visa, MasterCard), hardware companies (Cisco, Intel, Qualcomm), and automotive companies (who shall not be named).
And it’s those automotive jobs that led them to an interesting revelation. Carmakers want to build advanced infotainment systems into their cars, and they want them to run downloadable apps. Sometimes third-party apps. Sometimes factory apps (often developed by a third party). And sometimes the driver’s own smartphone apps via Apple CarPlay or Android Auto. In short, cars are becoming app platforms, with all that that implies.
It’s no secret that smartphone apps can be hacked. And hacked car apps raise the specter of all kinds of scary scenarios. Thankfully, most of those nightmares are overblown, says Asaf Ashkenazi, Inside Secure’s Chief Strategy Officer. Hackers usually want to make money, not kill people, so they’re not interested in disabling a car’s safety features or driving it into a tree.
More good news: Cars are difficult to hack. Carmakers are (so far) pretty good at protecting their vehicles from malware. Unlike with Windows, Android, or iOS, there aren’t a lot of established car-hacking tools circulating in the malware community. Hacking a car requires a dedicated effort; it’s not a job for script kiddies. And hacking one car doesn’t necessarily reveal vulnerabilities in other cars. At most, the bad guys might gain access to a production run of a specific make, model, and year, but even that is generally more trouble than it’s worth.
Now, for the bad news. Although a car’s factory software might be bulletproof, the third-party apps it runs are not. Android and iOS apps are hacked all the time, and vulnerabilities discovered from one hack are often applicable to others, compounding the payoff and encouraging further hacks. Hacked apps can be hard to detect, too, because they continue to operate as normal, while acting as a trusted “wrapper” around the malware payload. If the app is authorized to lock and unlock the car’s doors, for example, the malware will inherit that ability.
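To make that “wrapper” idea concrete, here’s a toy sketch. Everything in it is hypothetical – real attacks patch compiled Android or iOS binaries, not Python scripts – but the shape of the trick is the same: the hacked function keeps its name and its normal behavior, and the payload rides along on the app’s existing authorization.

```python
# Toy illustration of the "trusted wrapper" pattern. All names here are
# hypothetical; real malware patches compiled app binaries, not scripts.

def unlock_doors():
    """The app's legitimate, authorized action."""
    print("doors unlocked")

def exfiltrate(msg: str):
    """Stand-in for shipping data off to an attacker's server."""
    pass

_original_unlock = unlock_doors  # keep a reference to the real code

def unlock_doors():
    """The hacked version: same name, same visible behavior."""
    exfiltrate("vehicle unlocked at owner's location")  # hidden payload
    _original_unlock()  # the app still operates as normal

unlock_doors()  # the user (and the carmaker's server) sees nothing amiss
```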
Worse, said apps are often – indeed, almost always – developed by third parties that are beyond the control of the carmaker. Apple and Google might vet the apps they approve for distribution, but that obviously hasn’t solved the hacking problem. What we have here is a new and attractive attack surface.
Interestingly, car-control apps generally don’t communicate directly with the car itself. They communicate with a server operated by the carmaker. It’s the server, not the car, that authenticates the app and then sends commands and/or data back down to the car. If the malware can spoof the server, it’s inside the perimeter.
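Here’s a minimal sketch of that trust model, again entirely hypothetical – the token scheme and every name below are invented for illustration, not taken from any carmaker’s actual API:

```python
# Minimal sketch of the server-in-the-middle trust model described above.
# The shared secret, token scheme, and field names are all hypothetical.
import hashlib
import hmac

SHARED_SECRET = b"provisioned-when-app-is-installed"  # hypothetical

def app_request(command: str) -> dict:
    """The phone app never talks to the car; it sends a signed command
    to the carmaker's server."""
    token = hmac.new(SHARED_SECRET, command.encode(), hashlib.sha256).hexdigest()
    return {"command": command, "token": token}

def server_handle(req: dict) -> str:
    """The server authenticates the app, then relays the command down to
    the vehicle. Forge a valid token and you're inside the perimeter."""
    expected = hmac.new(SHARED_SECRET, req["command"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(req["token"], expected):
        return "rejected"
    return send_to_car(req["command"])  # e.g., over the car's cellular link

def send_to_car(command: str) -> str:
    return f"car executed: {command}"

print(server_handle(app_request("unlock_doors")))  # -> car executed: unlock_doors
```

The point of the sketch: the car trusts the server, and the server trusts whoever presents a valid credential. A hacked app that can extract or forge that credential never needs to touch the car’s own software.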
So, there’s your problem. What’s the solution? Write safer apps, obviously. But how, exactly? Third-party app developers span the entire spectrum from conscientious security experts to first-time coders. Some may try their best but come up short. Besides, app developers are generally under a lot more time and budget pressure than a carmaker’s official development team. The latter knows their reputation is on the line. The former just wants to publish and move on to the next release.
Inside Secure solves all these problems at once with the simplest possible solution: pushbutton security. Instead of building in security by learning the secrets of encryption, anti-malware measures, side-channel attacks, etc. (and still doing an amateur job), developers can simply use Inside Secure’s nameless security tool and – presto! – their app is made secure. It’s really that simple.
What does it actually do? The company is understandably reluctant to discuss the details, but it basically obfuscates object code. Inside Secure’s tool analyzes a ready-to-run app and tinkers with its control flow, muddying the app’s behavior while also rendering it tamper-resistant. If hackers can’t disassemble and reverse-engineer the code, they can’t search for vulnerabilities. And if they can’t attach their own code to the original app, the point is moot anyway. Inside Secure makes sure that the app you download from Google Play or the iTunes App Store stays the way the developer intended.
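Inside Secure won’t say exactly which transforms it applies, but control-flow flattening is a representative member of this family. A toy before-and-after in Python (the real tool rewrites compiled binaries, not source code):

```python
# Control-flow flattening, toy example. Both functions compute the same
# result; only the shape of the control flow differs.

def check_pin_clear(pin: str) -> bool:
    # Original: the logic is obvious to a reverse engineer.
    return len(pin) == 4 and pin.isdigit() and pin != "0000"

def check_pin_flattened(pin: str) -> bool:
    # Flattened: the same logic as a dispatcher over numbered states,
    # so a disassembler sees one opaque loop instead of readable branches.
    state, result = 7, False
    while state != 0:
        if state == 7:
            state = 3 if len(pin) == 4 else 0
        elif state == 3:
            state = 5 if pin.isdigit() else 0
        elif state == 5:
            result = pin != "0000"
            state = 0
    return result

# The two versions agree on every input.
for pin in ("1234", "0000", "12a4", "123"):
    assert check_pin_clear(pin) == check_pin_flattened(pin)
```

Scale that dispatcher up to thousands of states, mix in integrity checks, and static reverse-engineering becomes an expensive proposition.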
All of this happens well after the app is done and published. There’s no effect on code development, compiling, linking, or anything else to do with creating the app. It’s done retroactively, working on .apk (Android) or .ipa (iOS) binaries directly, not on source code. In fact, Inside Secure’s tool can be used by anyone, not just the app’s developers or publisher. It requires no knowledge of the app whatsoever, nor even any programming skills. It’s just an automated pushbutton tool. Working apps go in; security-enhanced apps come out.
That means anyone could “security-ize” anyone else’s app, which seems legally dubious. Do you want strangers modifying your app after it’s been published, even if it’s for a good cause? Turns out, it doesn’t matter: only the original publisher can update an app and submit it to the iTunes App Store or Google Play for approval, so nobody else’s hardened copy will ever reach users through official channels.
Because Inside Secure adds code to existing apps, they do get bigger. The company estimates that its tool adds between 10% and 30% to an app’s download footprint, which isn’t a big deal to most users. (How many even know or care how big an app is?) And all that extra code must be doing something, so what happens to performance?
Almost nothing, says Ashkenazi. The performance effects are virtually undetectable, at less than one percent.
The commercial terms are flexible, but Ashkenazi would say only that large companies protecting large numbers of apps will pay more than independent coders, who might even get to use the tool for free. “Most small app developers don’t make a profit, so why bother trying to take money from them?” he says. In between those two extremes, pricing will be “affordable.”
Inside Secure’s goal is to make app security a no-brainer. The tool is automated; you don’t need to know anything about the app you’re protecting; you can apply the tweaks after the fact; it has negligible effect on performance and memory footprint; and it might even be free. What excuse is there not to use it?