
Heartbleed: Serious Security Vulnerability

Serious Wake-up Call

Imagine if you woke up one morning, and found out that Walmart was now selling a device for $5 that could easily and instantly open almost any deadbolt lock. That’s right – the kind of lock that is supposed to give “extra protection” to just about every door on earth. That’s the magnitude of security problem posed by the Heartbleed Bug.

Contributing columnist Bruce Kleinman wrote the first half of this article and posted it to his “From Silicon Valley” blog on April 6, 2014.  The timing of the post was a remarkable coincidence: just 36 hours before the Heartbleed Bug started making headlines.

As the creators of technology, we engineers need to re-think our commitment to security and safety. The systems we design don’t just earn us money – they are often trusted to protect people’s lives, privacy, and assets. This is a solemn responsibility that is all too often overlooked or given short shrift in our ongoing race to get timing closure, first silicon, working prototypes, and volume shipments.

-Kevin Morris

———————————————————-

Serious Security, Serious Implementation

I’ve touched on the lamentable state of security on my “From Silicon Valley” blog and provided some real-world illustrations of security done right. This article picks up the thread and offers an extremely strong security solution that can be readily implemented today. Sand Hill Road venture capitalists, my offer stands; let’s discuss over lunch at Madera.

The solution I have in mind is a personal security token that turns conventional two-factor authentication on its head. Two-factor is a HUGE improvement over password-only authentication. There are three methods in broad use today:

  • A physical token (small enough to be attached to a keychain) that displays a six-digit code that changes every minute or so
  • An Android or iOS app that performs the same function as the physical token
  • SMS or voice callback: a six-digit code is texted to your mobile or a computer rings you up and reads the code aloud
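
(For the technically curious: the rotating code in the first two methods is typically generated with the time-based one-time password algorithm, TOTP, standardized in RFC 6238. A minimal sketch in Python, using only the standard library; the shared secret shown is a placeholder established at enrollment.)

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238), the code a token displays."""
    counter = int(time.time() // step)                     # changes every 'step' seconds
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Token and server share 'secret' at enrollment; both sides compute the same code.
print(totp(b"placeholder-enrollment-secret"))
```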

In all three methods, the six-digit (could be shorter or longer) code is required IN ADDITION TO your username and password to authenticate and log in. This is fairly secure and yet not widely adopted. True and recent story: I was struck that my bank did not offer two-factor authentication. Wondering if perhaps they had added it while I wasn’t looking, I poked around their website and could find no indication that two-factor authentication was available. Oddly enough, I discovered that it WAS available … thanks to a Google search. Go figure.

Most computer-savvy Silicon Valley types who value security find the modest overhead of two-factor authentication a perfectly reasonable trade-off for greater security. I suspect adoption of two-factor authentication is inversely proportional to distance from Silicon Valley, though that is just a hunch. We need a security solution that is seamless to use AND stronger than conventional two-factor authentication.

Enter my latest idea: the personal security token (PST). The acronym ‘PST’ is clearly unacceptable, what with Pacific Standard Time having laid claim to it quite some time (pun unintended) back; then again, standard time has shrunk to what seems like a ten-week period, so PST could be available if we end up on permanent daylight saving time. But I digress.

The PST is a tiny device WITH NO DISPLAY that attaches to your keychain. It has a low-power radio (Bluetooth-LE for conversation’s sake) and runs for roughly five years on its coin cell battery. It arrives with a pre-assigned, random six-digit PIN; pre-assigned to ensure that you do not use your anniversary, birthday or 123456. I recognize that forcing you to memorize a random PIN is not user-friendly, but honestly, I’m protecting your best interests. Here’s the good news: those six digits are the only password you need, period. For everything: websites, banking … even Starbucks.

Before we peek inside the PST, let me describe the entire user experience:

The PST has one button and one multi-color LED. It communicates with your mobile phone/tablet, PC, bank, and any point-of-sale terminal via RF (Bluetooth-LE for conversation’s sake).

Setting up a new ‘account’ on the PST

(I am assuming that we are in the transition phase from “old authentication” to “PST authentication.”) Use your existing credentials—username, bank card, credit card … Starbucks card—and password/PIN to authenticate. Click the “please set up my PST” button on the website/keypad and enter your new six-digit PIN. The LED on your PST turns yellow to indicate that a new account is about to be set up; click the button on the PST to confirm. The LED blinks three times. That’s it, done. From this point forward, you’ll use your PST to authenticate.

Using the PST to visit a website

Click the “login with my PST” button on the website. Enter your six-digit PIN on the website. The LED on your PST turns green to indicate that you are about to log in; click the button to confirm. That’s it. No username, no unique website-specific password to remember.

Using the PST to perform a financial transaction in person

Walk up to an ATM or point-of-sale terminal. Touch the “login with my PST” button and enter your six-digit PIN. The LED on your PST turns green to indicate that you are about to log in or purchase something; click the button to confirm. That’s it. No wallet full of ATM and credit cards.

On the face of it, this seems criminally INSECURE. A six-digit PIN and “Open Sesame” to your entire electronic wallet? There is QUITE a bit more taking place: “the rest of the story” happens completely behind the scenes, hiding an enormous amount of complexity and an EXTRAORDINARY degree of security. Let’s go inside the PST.

The personal security token consists of a single mixed-signal IC, a tiny RF antenna, and a coin cell battery. The case of the token might be cracked open, the IC can be de-layered – and the security will not be compromised. At the heart of the lone IC is—you guessed it—a physically unclonable function (PUF) with 1 Mbit or more of statistically provable entropy.

A VERY brief summary of PUF fundamentals (see “From Silicon Valley” for more detail):

It boils down to identity; each ‘thing’ must have a digital signature with all of the following guarantees: [a] every signature is completely unique, [b] signatures must be stable over time, and [c] there is no possibility of falsifying signatures.

So your PST has a very long and completely random UNIQUE string of bits; it is stable, and it cannot be observed/read … much less cloned. The lone IC also includes a modest amount of embedded flash that might be compromised by a motivated thief, so it is used in a manner consistent with its potential vulnerability. Rounding out the IC: a cryptography block (256-bit AES plus a public-key encryption scheme) and a complete radio subsystem. On a 40nm process, the chip cost is less than two dollars; BOM cost of the entire personal security token is around three dollars.

The PST assigns chunks of the PUF string to each ‘account.’ For illustrative purposes only, imagine that each ‘account’ is assigned a 1 Kbit chunk of PUF: 512 bits of ‘userid’ and 512 bits of ‘password.’ So with even a modest 1 Mbit PUF, the PST can hold a thousand accounts, each with its own userid and password. This is probably massive overkill, but I did promise “an extraordinary degree of security” some paragraphs back. And we’re just getting rolling …

The aforementioned modest embedded flash is simply a directory: an identifier for the ‘account’ (website, bank, credit card) and a pointer to grab the chunk of PUF assigned to that account. A standalone app can manage these pointers – for example, deleting an account and freeing up the associated block of PUF for re-use (or not, if you’re truly paranoid).
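
A minimal sketch of that directory, assuming the illustrative 1 Kbit-per-account layout described above (the names and structure are hypothetical, not a spec):

```python
from dataclasses import dataclass

PUF_BITS = 2 ** 20   # 1 Mbit PUF string
CHUNK_BITS = 1024    # per account: 512-bit 'userid' + 512-bit 'password'

@dataclass
class DirectoryEntry:
    account_id: str    # e.g. "mybank.example" or a card-issuer identifier
    chunk_index: int   # which 1 Kbit slice of the PUF this account owns

directory: list[DirectoryEntry] = []               # lives in embedded flash
free_chunks = set(range(PUF_BITS // CHUNK_BITS))   # 1,024 assignable chunks

def delete_account(entry: DirectoryEntry) -> None:
    """Free the chunk for re-use (or skip the free, if you're truly paranoid)."""
    directory.remove(entry)
    free_chunks.add(entry.chunk_index)
```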

Setting up a new ‘account’ on the PST

A website (via your mobile phone/tablet or PC) or a point-of-sale terminal asks the PST to set up a new account. After checking the embedded flash directory to ensure that an account hasn’t already been set up, a chunk of PUF is assigned and the directory is updated. The cryptography block uses your six-digit PIN plus the 512-bit ‘password’ from the PUF to generate a private/public key pair. The 512-bit userid and the PUBLIC key are sent from the PST back to the phone/tablet or PC or point-of-sale terminal, from which they are saved in the cloud. The userid and public key can be sniffed/copied with ZERO loss of integrity. How? The PRIVATE KEY NEVER LEAVES the IC; it is not even saved; rather, it is re-generated each time the account is authenticated.
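
As a sketch of that key-generation step, with two loudly labeled assumptions: Ed25519 stands in for whatever public-key scheme the crypto block actually implements, and PBKDF2 stands in for the on-chip key stretching (this uses the pyca/cryptography package):

```python
import hashlib
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def enroll_account(pin: str, puf_password: bytes) -> bytes:
    """Derive an account key pair from the six-digit PIN plus the 512-bit PUF
    'password'; return ONLY the public key. The private key object never
    escapes this function, mirroring 'the private key never leaves the IC.'"""
    # Stretch PIN + PUF bits into a 32-byte seed (illustrative KDF parameters).
    seed = hashlib.pbkdf2_hmac("sha512", pin.encode(), puf_password, 100_000)[:32]
    private_key = Ed25519PrivateKey.from_private_bytes(seed)
    return private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )

# 64 random bytes stand in for the 512-bit PUF 'password' chunk.
print(enroll_account("493817", os.urandom(64)).hex())
```

Because the derivation is deterministic, the same PIN plus the same PUF chunk re-creates the same private key on every authentication; nothing secret ever needs to be written to flash.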

Using the PST

A website (via your mobile …) or point-of-sale terminal asks the PST for authentication. Your PST looks up the account in the directory and replies with the 512-bit userid. This long random string is your username; you do NOT need to remember the account’s username, bank account number, or credit card number. Using your six-digit PIN, the public key (unique to each of your accounts) created and stored in the cloud earlier, and the PRIVATE key of the “place of business,” the server generates a unique-to-this-transaction challenge/response pair. The challenge is sent to your PST, which generates its response using the PRIVATE key (unique to each of your accounts) plus the public key of the place of business. The response is sent to the host where—thanks to the magic of public key encryption—both sides are authenticated. The challenge/response traffic can be sniffed/copied, yet thanks again to public key encryption, this does NOTHING to reveal the private keys.
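
A bare-bones sketch of one direction of that exchange (the token proving itself to the server); the mirror-image check against the place of business’s key pair works the same way in reverse. Ed25519 is again an illustrative stand-in:

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-ins: in the real scheme the private key exists only inside the PST's IC
# (re-derived per transaction) and the public key was enrolled in the cloud.
token_private = Ed25519PrivateKey.generate()
cloud_public = token_private.public_key()

challenge = os.urandom(32)                 # server: unique to this transaction
response = token_private.sign(challenge)   # token: signs inside the IC

try:                                       # server: check against enrolled key
    cloud_public.verify(response, challenge)
    print("token authenticated; the private key never crossed the wire")
except InvalidSignature:
    print("authentication failed")
```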

What happens if you lose your PST? First of all, it is literally on your keychain, so I am hoping that is a remote event. A mistyped six-digit PIN is met with a mandatory 10-second delay before retry, so it will take some time to brute-force guess your PIN (thank goodness I didn’t let you pick your anniversary, birthday or 123456). Once you realize that you’ve truly lost your keys, you remote-kill the PST … long before your PIN is cracked.
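
The arithmetic behind that confidence, as a quick sanity check:

```python
# One guess per mandatory 10-second delay, across all 10**6 six-digit PINs.
pin_space, delay_s = 10 ** 6, 10
worst_days = pin_space * delay_s / 86_400   # 86,400 seconds per day
print(f"worst case ~{worst_days:.0f} days, expected ~{worst_days / 2:.0f} days")
# -> worst case ~116 days, expected ~58 days: ample time to remote-kill the PST
```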

“Hold on,” I hear you thinking, “how do you perform this remote kill? And does that leave me SOL with every one of my websites … AND STARBUCKS?!?! Oh yeah, and what happens when the battery finally dies?” Those are really good questions, and I have a REALLY elegant solution that I’ll happily walk you through AFTER an NDA … and lunch at Madera.

———————————————————-

The second half of this article was written one week later, after the Heartbleed Bug stormed the headlines.

Kevin Morris

———————————————————-

Serious Security Vulnerability: Heartbleed

Let’s not mince words: Heartbleed is a catastrophic security hole, with BOTH terrible impact AND terrible breadth.

IMPACT. That comforting little “padlock” icon in your browser when you are entering financial information?  There is a better than even chance that what you thought was a secure HTTPS connection could have been easily compromised, enabling an attacker to see your username, password and the traffic between you and the server.  For the past two years. And the vulnerabilities do not stop with your web browser:

  • Secure email (using TLS) was open to the vulnerability, enabling an attacker to read your email as it came off the server on its way to your email client.
  • SSL-based VPNs that gave you and your IT department a sense of, well, privacy? That may have been compromised as well, enabling an attacker to see the traffic on the VPN.
  • Yet-to-be-named hardware was open to the vulnerability. Some gear from Cisco and Juniper has already been identified; it is VERY prudent to assume that LOTS of home Wi-Fi routers use OpenSSL and will prove to be vulnerable.

BREADTH. At least 500,000 sites had/have the Heartbleed Bug. Half. A. Million. And that last bullet on impacted hardware is yet to be scoped and will present a grand challenge to address. (How many people EVER update their Wi-Fi router firmware? My utterly non-scientific guess is 10% at best.)

Good news: PRESENCE of the Heartbleed vulnerability does not automatically mean it was EXPLOITED on the site.

Bad news: if the vulnerability WAS exploited, the nature of the bug makes it a cinch for the attacker to COMPLETELY cover his/her tracks. So if the site had the vulnerability (for the past two years), there is simply no way to know if the website was compromised in general and if your data was compromised specifically.

Paraphrasing “This is Spinal Tap”: How much more bad could it be? None. None more bad.

I am STRONGLY anti-alarmism, but the tone is warranted in this case.  If you use the web, you ABSOLUTELY need to take personal responsibility for your security and take action:

  1. Educate yourself. Start with this stick figure overview (I’m serious) and then the Heartbleed Bug website.
  2. Consult this list of the Top 100 websites and their vulnerability status.
  3. Use this tool to check any website—including intranet servers—for the vulnerability.
  4. Pay special attention to your email provider, as the LAST thing you need is compromised email.
  5. Once you’ve confirmed that a site HAS PATCHED THE HOLE, change your password on the site. Re-read that: it does NO good to change your password until AFTER the Heartbleed Bug has been patched. And use this opportunity to convert to different, strong passwords on all finance and commerce websites.
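
One practical aid for step 5: a properly remediated site should also have re-issued its SSL certificate, since its old private key may have leaked. A quick way to eyeball a certificate’s issue date with Python’s standard library (the hostname is a placeholder); treat a pre-April-2014 notBefore date as a caution flag, not proof either way:

```python
import socket
import ssl

def cert_issue_date(host: str, port: int = 443) -> str:
    """Return the server certificate's notBefore field; a certificate
    (re)issued after 2014-04-07 is one hint that keys were rotated."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notBefore"]

print(cert_issue_date("example.com"))
```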

Question: I use two-factor authentication on my financial websites. I’ve got nothing to worry about, right?

Answer: kudos for taking security seriously! You still have plenty to worry about. For starters, what makes two-factor authentication effective is—how can I put this?—having two authentication factors. Heartbleed potentially exposed one of those two factors (your password) to an attacker.  So you are better off than someone using only a password, but you must assume that Heartbleed reduced your two-factor authentication down to one-factor authentication. It is every bit as important, therefore, that you follow the five steps above.

The Heartbleed Bug opens a hole in one of the foundational elements of web security: Secure Sockets Layer (SSL). While I was completely serious earlier when I declared that you need to educate yourself and take appropriate action, that does not include diving into the workings of SSL. If you are so inclined, however, Wikipedia has a great article (shocking surprise, that) HERE.

A lot of really, really smart people developed SSL. It is based on public key encryption, and we all know how much I admire public key encryption (and not just for the great terminology like “elliptic curves”). Some important elements of SSL:

  • The server MUST have a private/public key pair.
  • SSL makes provisions for a client private/public key pair; my utterly non-scientific guess is that a client key pair is used less than 1% of the time. That is really too bad, but guess what? Implementing both server and client key pairs would NOT have mitigated the Heartbleed vulnerability.
  • WTF you say? After all, I extolled the “extraordinary” security of server + client key pairs in my last post. SSL uses public key (very strong, asymmetrical) cryptography as a “wrapper” to set up simple shared-secret (less strong, symmetrical) cryptography for the entire session.
  • WTF you say? If SSL goes to the trouble of implementing asymmetrical cryptography, why not stick with it for the entire session? Because symmetrical cryptography is far less computationally intense and we all like snappy websites. SSL development started 18 years ago and was ratified 15 years ago. No exaggeration: your contemporary mobile phone has roughly an order of magnitude more compute horsepower than a late 1990s personal computer. What was perfectly sound rationale for using asymmetrical cryptography ONLY to set up the shared-secret symmetrical cryptography, to be blunt, doesn’t sound so rational in 2014.
  • Lest you think I am letting my bias for asymmetrical cryptography color my judgment – well, the simpler symmetrical cryptography SSL uses for the entire session is an essential element to what makes Heartbleed so catastrophic. The bug enables an attacker to read memory on the server – memory that holds the aforementioned shared-secret. That wonderfully secure public key cryptography “wrapper” is rendered WORTHLESS if an attacker can read the ensuing master shared secret. You might as well have skipped the wrapper entirely and transmitted the shared-secret in plaintext.

It is all right there in the cited Wikipedia article under Description step #7:

Both the client and the server use the master secret to generate the session keys, which are symmetric keys used to encrypt and decrypt information exchanged during the SSL session.
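
To make the implication concrete: the session keys are a deterministic function of the master secret plus handshake values that travel in the clear. A deliberately simplified sketch (NOT the real TLS PRF) of why a leaked master secret is game over:

```python
import hashlib
import hmac

def session_keys(master_secret: bytes, client_random: bytes,
                 server_random: bytes) -> bytes:
    """Grossly simplified stand-in for the SSL/TLS key-expansion PRF."""
    return hmac.new(master_secret,
                    b"key expansion" + server_random + client_random,
                    hashlib.sha256).digest()

# Both randoms crossed the wire in plaintext during the handshake; an attacker
# who reads the master secret out of server memory (hello, Heartbleed) derives
# exactly the same symmetric keys and decrypts the entire session.
```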

Let me set one thing perfectly straight: SSL itself is secure. Heartbleed is a BUG in the OpenSSL stack used by those half a million (and counting) websites.  With that said … s**t happens, and robust systems should be designed so that when something like the Heartbleed Bug hits the fan, the results are not catastrophic.

I am not only strongly anti-alarmist, I am strongly anti-take-advantage-of-warranted-alarmism. So it is with some trepidation that I connect the dots from the first half of this article to Heartbleed:

  • My proposed scheme not only uses asymmetrical cryptography for authentication, it ENFORCES server AND client private/public key pairs.
  • As envisioned, my protocol encrypts your 512-bit ‘username’ with the server’s public key. It is decrypted using the server’s private key and stored in memory. A Heartbleed attack—or any attack permitting access to server memory, up to physically hanging a logic analyzer off the memory bus—will be able to read your username in plaintext.
  • HOWEVER, as I highlighted, your 512-bit private key NEVER leaves the token. That VERY long private key, you may recall, is your password. Ergo, your password is NEVER on the server. Go ahead, gain physical access to the server and hang a logic analyzer off the memory bus: your password IS NEVER THERE. A gaping hole on the server side CANNOT compromise the integrity of your private key.
  • But why leave the server’s private key vulnerable? If we are going to mass produce “personal security tokens” that provide an EXCEPTIONAL degree of security, then, seriously, we need to apply the same level of security on the server side. Build a secure cryptographic engine around a PUF inside the server, and its private key NEVER LEAVES that engine. Go ahead, examine the server memory byte-by-byte … the server’s private key will never be there to observe.
  • Although I didn’t specify this in v1.0 of the MRD, it goes without saying that my proposed protocol uses strong asymmetrical cryptography for the ENTIRE session.
  • One of many important lessons from Heartbleed: security MUST be implemented on the client AND the server in CONCERT. S**t happens on servers located all over the world: patches may or may not be applied; datacenter cages may or may not be properly secured; employees may or may not be trustworthy.  The system outlined above mitigates such serious security breaches and contains the damage.
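
Pulling those bullets together: under this scheme, the entire per-account state a compromised server could leak looks roughly like the following (field names are illustrative), and none of it lets an attacker impersonate you:

```python
from dataclasses import dataclass

@dataclass
class ServerAccountRecord:
    userid: bytes      # the 512-bit random string; public by design
    public_key: bytes  # verification key only; useless without the private key

# A Heartbleed-style memory dump of these records yields usernames and public
# keys, nothing more: the private keys live only inside each customer's PST.
```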

It is high time that we get serious about security. 2014 is only 14 weeks old, and we’ve seen MORE THAN ENOUGH MOTIVATION: the Target credit card breach, the NSA whatever-the-heck-you-make-of-it, the Heartbleed Bug – just to name the headline makers.

We were given Web 2.0; now it is time for SECURITY 2.0: extraordinary security built upon a foundational root of trust for a truly robust trusted platform.

If someone decided that a popular instant messaging application is worth 14 billion dollars (4 billion if you’re a hard-ass and only count the cash), then a whole TON of people ought to consider what the foundational integrity of the internet is worth.  Security 2.0 needs to happen – right now, right here.

About the Author: 

Bruce Kleinman is a senior technology/business executive and principal at FSVadvisors; he blogs at fromsiliconvalley.com.

11 thoughts on “Heartbleed: Serious Security Vulnerability”

  1. The alarms have been steadily going off for nearly two decades, yet each new breach is accepted as fixed when a patch/upgrade is released for a particular flaw. The design of SSL, and the widespread dependence on it, has always bothered me … I wrote aggressively against it prior to its adoption, with industry responses that it was “provably secure,” which has been debatable since inception.

    http://blog.cryptographyengineering.com/2012/09/on-provable-securit

    And like most crypto from two decades ago that was declared safe and secure, it has been repeatedly compromised (a fun read is the history of both SSL and wireless security from WEP to WPA), yet repeatedly declared safe after each new version.

    The problem with electronic data security is the same as with physical security. Thieves will always attack the weakest link. When network security is the strongest, thieves will attack either end of the network. That’s what we have seen with recent major credit card breaches, where they attack both the card terminal and the store’s internal network and data centers.

    Going back to your deadbolt example … a strong door with an excellent “tamper proof” deadbolt just means the thieves choose a window instead, or just go right through a wall in a few seconds with a battery powered saw.

    In high school I was lucky enough to work as a locksmith for a while, and learned to pick locks and make keys for locks without a key. While I could open most locks in a few minutes, I was always impressed by master locksmiths and thieves that could do the same locks in less than a minute. I rented a house in Eagle Rock for a while, and when it was sold the new owner (a law enforcement guy) had a locksmith install the best deadbolts available at the time as we were moving out. As I left with the last load, I picked the deadbolt lock in about 30 seconds to lock it on my way out. The new owner was devastated, and aggressively angry to the point that he showed up at my new home demanding the keys I stole from the locksmith, refusing to believe his $300 locks could be easily picked. It only took a few minutes’ drive to prove my point.

    I spent the mid-1970s doing operating system security research sponsored by college faculty, in an open, above-the-table exercise against several non-sponsored underground teams. Lacking source, we disassembled and reverse-compiled the operating system, libraries and core utilities over the course of two years. And from that we found a number of kernel-level exploits and inserted machine-level exploits into the OS.

    40 years later the code base is several orders of magnitude larger, and with it the errors waiting to be exploited are also several orders of magnitude more numerous. The history of Microsoft security patches is an EXCELLENT example.

    So the first flaw in believing on-line security is possible starts with the mistaken belief that the next protocol will be the last one ever broken. And if by some miracle it isn’t broken, it’s still a mistake to believe that there is absolutely no way around an unbreakable cipher, by attacking the “secure” data at either end while it is in plaintext form. Or that there will not be a Snowden-style breach, where some trusted, or not so trusted, person just walks out the front door with the data.

    Being sane is assuming your data will be compromised again, and having a good plan to mitigate the total cost of the breach, each and every time. Or not using the technology at all, if that is cheaper and safer.

  2. Your point is a good one–and a good reminder–that you cannot have perfect security on the Internet. We can have dramatically BETTER security on the Internet and NOT at the expense of ease-of-use.

    Improvement vector #1: use Physically Unclonable Function (PUF) technology to store all private keys. One of the fundamental rules of military-grade security is “NEVER store a red [plaintext] key in memory.” That rule is not followed in SSL and can be exploited by the Heartbleed bug in OpenSSL. PUFs are a truly robust method for storing red data with an extraordinary level of security.

    Improvement vector #2: add client-side private keys. Most all contemporary systems use only server-side private keys. Badly extending the deadbolt analogy: it is inherently less secure to rely on a single deadbolt, especially when you do not ‘own’ the key. It is inherently more secure to have two deadbolts, where you own and secure one of the keys.

    A system that combines server + client private/public key pairs–with all private keys seriously secured using PUF technology–would be a DRAMATIC improvement over contemporary security. Look no farther than the government agencies that place tremendous emphasis on security and you’ll find precisely such systems in use today.

  3. As Mr Kleinman points out, crypto card devices which authenticate both parties **may** provide a safer solution in a properly engineered, security-oriented device.

    That would, however, be a PUF device embedded into an exceptionally secure CPU’s silicon with all the authentication algorithms in internal microcode, and none of the process visible on external pins or with any form of debug tracing/monitoring.

    Otherwise any virus/trojan/malware can simply read the red [plaintext] key out of the PUF device, and share or exploit it.

    Out of the millions of consumer devices that access the internet today, few, if any, have secure crypto CPUs that are immune to a virus/trojan/malware attack on the private key.

    As implemented by government agencies, there are significant other safeguards and policies to protect the integrity of the physical machine accessing a PUF; these simply do not exist for consumer devices today, nor would they allow the highly convenient ease-of-use consumers are expecting.

    If we started today, it would take one to two decades to implement Mr Kleinman’s vision in every device that accesses the network securely, and obsolete everything built before that system was widely available.

    And in the end, what if that implementation was found to have a flaw in 10 years, as many crypto systems have in the last 10 years. What do you do then, refine the solution, and obsolete everything again?

    When that is implemented, what other form of access to the protected data will thieves find? After all, the data that is being protected across the internet will still be plaintext in these systems with secure PUF keys and secure CPU crypto microcode algorithms, waiting to be attacked with the same virus, trojan, and malware attack vectors. It’s simply the case of a very strong door, with a very strong lock, for storing all your valuables in an empty, unmonitored house with tissue-paper exterior walls in a high-crime ghetto.

    The game here is to protect the data, and never leave it unprotected … unfortunately too many professionals believe the game is the most technically interesting, complex, unbreakable cipher system, and completely forget to secure the rest of the system around the data that MUST be protected.

    Otherwise the computer system protecting your data is roughly as secure as posting your valuable data on every wall in a high crime ghetto (our current internet full of virus/trojan/malware attack vectors from all over the globe).

  4. Agreed, any sweeping change to internet security will undergo a lengthy transition period. During the transition, though, the ‘old’ and ‘improved’ systems can coexist.

    Today, for example, many financial institutions provide a physical two-factor authentication token … though most customers do not have/use one.

    An improved system could be fully spec’d in a manner consistent with the realities of rolling implementation: my example may have mistakenly implied a “flip the switch” cutover; my intention was that PUF-based security would be rolled out for clients and servers over time.

    2014’s many well-publicized exploits have sensitized users to the issue of internet security. Once an improved internet security framework is in place, capitalism can do its thing: if customers value security they will ‘vote’ with their business.

    One comment that I do take issue with:
    =====
    crypto card devices which authenticate both parties, **may** provide a safer solution
    =====
    There is no doubt that any ‘enhanced’ authentication method is more secure. Not perfect, mind you, but absolutely safer.

    One last comment: my vision is that the cryptographic system is a separate chip, not part of the main CPU (though the latter would be delightful). There is existing PUF technology that has super-strong anti-tamper inherently ‘woven’ into the PUF that protects the ENTIRE chip. This has been demonstrated to hold up against direct, side-channel and physical attacks to the satisfaction of VERY demanding customers.

    In summary, the technology exists to create an internet security system that is vastly improved from what we have today. The real question is does the WILL exist?

  5. Mr Kleinman is right that PUF devices at both the client and the server will provide stronger crypto to prove identities of the client and server, using hardware passwords/certificates/challenges. This partially solves the stolen password problem, if only a single device is allowed to exist per individual.

    If multiple PUFs are allowed for individuals, then the same social engineering attacks are open for third parties to register an attacker’s PUF in the name of the targeted party.

    A stolen PUF is the same as a stolen debit card: the attacker has a brief window to attack everything the card protects, and to make changes locking the PUF’s owner out to lengthen the attack period available.

    They do not, however, make the encryption any stronger, or protect the sensitive data any better.

    If the crypto algorithm has a flaw, the data can still be compromised even with PUFs at both ends.

    If the data can be extracted by virus/trojan/malware attacks on either the client or server end, the data is still compromised.

    The key point is that a very strong door and lock does not protect a weak security environment with easy access around the door.

    The key point is that strong crypto simply moves the desired attack point to either end of the crypto … which is routinely available today with virus/trojan/malware attacks on consumers and servers.

    The virus/trojan/malware can perform ANY authentication with a PUF, just as the normal authorized code can using the same PUF device.

    Lastly, the very concept that the PUF cannot be duplicated, I personally believe is flawed. Crypto expert after crypto expert has declared their ciphers unbreakable … nearly all have been wrong. Why trust their claims that a magic PUF card is truly magic and unbreakable/unduplicable?

    What if they are WRONG AGAIN? Not that it matters, as there still exist easy attacks using virus/trojan/malware attack vectors around the magic PUF devices.

    For example, there are two easy attack vectors for obtaining duplicate PUFs.

    The first is that the NSA or organized crime make a back-door deal to manufacture PUFs with duplicates: one copy is sent to the victim, the other copies to one or more attackers.

    The second is that the algorithm used to assign identities to PUF cards is used by the NSA or organized crime to generate an archive of all assigned PUF keys or serial numbers, and rainbow tables are used to look up the key value when it’s encountered in use, at which point the PUF’s data is regenerated for the attacker’s use, using the original algorithms.

    Trust begins with a process that cannot be tampered with physically, or subverted with social engineering attacks on the source material by any organization. PUFs require a HUGE chain of trust … one that can easily be violated by governments and organized crime.

  6. Kudos (again) for raising good points; I am not sure I understand your line of thought, however. Yes, there are always vectors of attack (malware, for example) … are you suggesting that, because there is no such thing as perfect security, we should not strive to improve the current state?

    “If the crypto algorithm is flawed” … true statement, yet 256-bit AES is considered very strong for demanding commercial use; PGP stronger still.

    “If the data can be extracted by virus/trojan/malware attacks …” true again, yet prudent common sense (don’t click on links in suspect emails) plus a well-vetted security suite provide excellent protection. Most aware & responsible users have never fallen victim to these attacks.

    “The virus/trojan/malware can perform ANY authentication with a PUF …” is NOT true. I suggested a physical button on the token to ‘confirm’ all transactions; there must be a human in the loop.

    “Lastly, the very concept that the PUF can not be duplicated, I personally believe is flawed.” You are entitled to your opinion, but the MOST demanding and technically astute cryptologists have performed due diligence such that they have confidence in contemporary PUF technology. There is ample published data, so folks can reach their own conclusion.

    I certainly do not think I have all the answers and I sincerely do not want to sound critical–as I’ve mentioned, most of your points are quite good–so let me ask a most constructive question: how would you improve internet security?

  7. The answer to “how would you improve internet security” … don’t use it, or any computer connected to the internet, to process or store data that you cannot risk being compromised.

    I have held DOD security clearances, and managed secure data centers processing classified DOD data … we severed all outside links to the machine and wiped it before bringing secure data into the machine, and wiped the machine again when done.

    Many people work in screened-room offices and data centers today … for the very reason that it is the most secure way to protect classified data, and avoid many attacks by insider agents.

    Otherwise, we take risks, hopefully informed risks, that our data *may* be compromised at any time, and used in ways that may have high costs.

    Do not believe the snake oil salesmen, that say, trust us, it’s safe.

    And you are VERY WRONG above when you say “‘The virus/trojan/malware can perform ANY authentication with a PUF …’ is NOT true. I suggested a physical button on the token to ‘confirm’ all transactions; there must be a human in the loop.”

    The virus/trojan/malware simply needs either to allow the authorized application to obtain session keys when the user pushes the button, and then preempt the authorized code and perform all attack vectors with those session keys.

    Or the virus/trojan/malware simply replaces the authorized code, presenting to the user precisely the same UI while requesting the user to push the button, and then uses the same algorithm to complete the authentication.

    Done well, the user sees exactly the same UI sequences, as a normal transaction … while the attack vector is exploited in the background with the virus/trojan/malware hiding the additional unauthorized transactions from the user. Your bank account is still emptied, your email still searched and forwarded, and your other sensitive information compromised.

    What is wrong with this whole approach is telling the user/public that it was safe, secure, and can not be compromised, when ANY GOOD SECURITY EXPERT knows there exist at least a dozen clear attack vectors.

    And should quantum computing become a reality in a few years … everything protected today by secure many-bit keys becomes easily broken in a few seconds/hours. I’m pretty sure that once several governments and organized crime have built multiple quantum computers, it will be years before the public knows. None of them are going to tip their hand to the public while they are bypassing every crypto system on the planet.

  8. John–

    This has been a good & healthy bit of “argy bargy”, thank you!

    You’ve pointed out some good items and raised awareness that users need to take greater responsibility [a] for client-side security and [b] to ‘vote’ with their business for greater server-side security.

    “Do not believe the snake oil salesmen, that say, trust us, it’s safe.” Kudos, a good summary: users must be aware and accept responsibility to act accordingly.

    Obviously the public internet will NEVER be as secure as a DoD-level isolated network with full TEMPEST security measures.

    Today’s internet security CAN be dramatically improved. My idea is just one illustration of how to do so. I sincerely hope to see greater attention paid and actions taken toward “Security 2.0” on the internet.

    Thanks again for the lively interaction,
    Bruce.

  9. Bruce … I agree that authentication can be dramatically improved … however, that is less than 1% of the real problem when the base computing platform and environment are not secure.

    Securing that is neither easy nor likely, so normal everyday users should not expect better security.

    It’s very much like saying that a little tiny screw holding on the nameplate of your new car is 100% reliable, and that because of that, the new car you are purchasing must be 100% reliable.

    It’s flawed logic … that snake oil salesmen would be happy to exploit to a customer that doesn’t know any better.

    It’s flawed logic to assert that a new whizbang secure authentication practice will fix everything that is wrong with protecting sensitive data on a completely unsecured computing platform and internet.

  10. Now if you would like a healthy chill in your spine about client and server side security, consider the following attack that is likely to show up in the wild soon, simply because of how easy/effective it is.

    Since IT folks and customers have become VM crazy, consider what happens when a virus/trojan/malware installs itself as a hypervisor, pushes the existing system into a virtual machine, and takes control of the base hardware system.

    For many VM aware hardware architectures with full hardware assist, we now have a subverted system where all the previously effective attack vector detection systems are completely blind.

    And on architectures with hardware debug break points, the malware VM can snoop, and modify code, data, and algorithms on the fly, completely transparent to the still functioning (but completely blind) anti-virus and anti-malware protection tools.

    No longer does the rootkit have to modify files in the target filesystem … it simply patches the data in memory on the fly with a completely trusting and oblivious compromised system.

    Security is seldom a critical “must have” feature in today’s marketplace. Too often engineers are concerned with providing functionality, and never consider how that functionality can be subverted or abused.

    With tens of millions of machines infected with various botnets, and many more that are simply infected with malware, this is not a trivial problem to be ignored. Much bigger than Heartbleed.

    I believe strongly in Kevin’s dream:
    —————————————————–

    As the creators of technology, we engineers need to re-think our commitment to security and safety. The systems we design don’t just earn us money – they are often trusted to protect people’s lives, privacy, and assets. This is a solemn responsibility that is all too often overlooked or given short shrift in our ongoing race to get timing closure, first silicon, working prototypes, and volume shipments.

    Kevin Morris
