I did a lot of typing in college. Even as an engineering student, I still had to take humanities courses, which meant writing papers. And, as cash was in short supply, I found that I could leverage this skill by typing papers for others.
I had an electric typewriter (no, I’m not that old) with special keys for Spanish, French, and German, which made me versatile. And so I got pretty good at blind touch-typing. In other words, I would simply look at the paper to be typed and proceed without glancing back at the results. In fact, I found that I could get into a zone where my fingers would be blazing away with no mistakes – that is, until I realized that and started thinking, “Wow, this is going really well,” at which point it started falling apart as my brain started to try to micromanage my fingers (like the centipede trying to think through how it can walk).
How could I be sure I was typing the right things? A big part of it was practice and confidence: I’d look back and (hopefully) see that it was correct. (The consequences of failure back then involved white-out at best – or a new sheet of paper at worst.) But there were other elements that played a part. I could feel whether my fingers were in the wrong place. And every time a keystroke happened, there was an audible click that you could feel in your fingers.
And, as I’ve oozed before, the gold standard for me was the Selectric, which had this amazing feel – a light but confident touch that would make me even more of a typing boss.
Today’s keyboards have some, but not all, of these characteristics. I can feel the little ribs that tell me my hands are positioned correctly. There’s definitely a sound when I hit a key, although it’s not as reliable. On a typewriter, you can’t accidentally brush a neighboring key and not notice – heck, the keys might even collide and get stuck. With today’s keyboards, all you’re hearing and feeling is the fact that you’ve pushed something down. Unlike with a typewriter, that doesn’t tell you that your keystroke has actually registered. It may or may not have, and it may have been unintentionally accompanied by neighboring keys.
So it’s much more important to go back and review what’s been typed. The good news is that editing is trivial.
What’s changed here is the nature of the feedback that the machine gives us; this is the realm of haptics. Haptics is (are?) particularly important for touch screens and tiny keyboards. If you’re using a virtual keyboard on a touchscreen, you have no clues regarding where your fingers are. You can’t type blind because if your hands drift by a quarter inch, you’ll be typing a whole new set of keys.
Even tiny physical keyboards, which have some tactile characteristics, are so small that you have to look at the keys as you type and then constantly look back at what you’ve typed to make sure your fat thumb (speaking for myself, anyway) didn’t accidentally squash the wrong key. All of this slows typing dramatically.
Of course, as someone who does tons of writing each day, I’m more productive when I can type quickly and efficiently. That matters less, for sure, for Angry Birds or tweeting. Nonetheless, even with non-text input, it’s really nice to get some feedback that what you did was correctly received. “Echo your input” and all that.
When selecting an item to move in Android, for instance, you hold your finger down for a couple of seconds (one of those non-intuitive things I discovered by accident), and then the phone vibrates to let you know that it got the message and has selected the item. But, other than that, there’s not much on phones or tablets. Apparently Android does have some additional haptic capabilities, but the feedback comes with too much latency, so people get annoyed and turn it off.
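For the curious, that kind of confirmation buzz is something an app can request explicitly. Here’s a minimal sketch using Android’s standard View APIs in Kotlin; the names itemView and selectItem are placeholders of my own, not anything from a particular app:

```kotlin
import android.view.HapticFeedbackConstants
import android.view.View

// Rough sketch: when the user long-presses an item, select it and give a
// short "got it" buzz. itemView and selectItem are placeholders for
// whatever view and selection logic an app actually uses.
fun enableLongPressSelection(itemView: View, selectItem: (View) -> Unit) {
    itemView.setOnLongClickListener { view ->
        selectItem(view)                   // mark the item as selected
        view.performHapticFeedback(        // vibrate to confirm receipt
            HapticFeedbackConstants.LONG_PRESS
        )
        true                               // consume the long-press event
    }
}
```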
There’s one other gotcha to be worked around for keyboards, and it helps to spell this one out. What you want is to hit a key, have that keystroke registered by whatever application you’re using, and then have that application send back a signal your finger can feel to indicate that it got the key. The problem is the sequence: you press the key down (nothing happens yet), and then you release it; only on key-up, as your finger loses contact with the key, does the event get transmitted. The application can now respond – which you may not feel, because you’ve already released the key.
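The conceptual workaround is to hang the feedback off the key-down event instead of waiting for key-up. Here’s a hedged sketch in Android terms – the listener and constants are standard Android APIs, but the wiring (attachKeyFeedback, keyView) is purely illustrative and not any particular keyboard’s implementation (soft keyboards, in particular, deliver their input differently):

```kotlin
import android.view.HapticFeedbackConstants
import android.view.KeyEvent
import android.view.View

// Sketch of the timing gotcha: if the cue waits for ACTION_UP, the finger
// has already left the key and may never feel it. Firing on ACTION_DOWN
// delivers the cue while the finger is still in contact. keyView is a
// placeholder for whatever view receives the key events.
fun attachKeyFeedback(keyView: View) {
    keyView.setOnKeyListener { view, _, event ->
        when (event.action) {
            KeyEvent.ACTION_DOWN -> {
                // Fire the haptic cue immediately, while the finger is down.
                view.performHapticFeedback(HapticFeedbackConstants.KEYBOARD_TAP)
            }
            KeyEvent.ACTION_UP -> {
                // By the time this arrives, a late cue may go unnoticed.
            }
        }
        false // don't consume the event; let normal key handling proceed
    }
}
```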
These little glitches are minor stuff, however, as compared to the technologies being worked on for more serious haptic feedback. The Interactive Technology Summit featured a couple of presentations on haptics, and there are a surprising number of technological approaches under development (or at least research).
Tactus, for example, uses microfluidics to inflate or deflate little bubbles over a screen. The two presenters, ViviTouch and Redux Labs, use two other radically different techniques that we’ll get to in a moment.
But Redux also provided some interesting analysis of what haptics is about. They divided its function into two: providing clues as to where to locate your fingers, and what they call “affirmation” – confirming that your stroke was duly noted. Different technologies handle these differently, with either passive (meaning some built-in static feature) or active (generated in response to the user) contributions to location and affirmation.
For example, they put mechanical keyboards into the “passive location, active affirmation” category because the little ribs/dots indicating key position are fixed, and the sound and feel happen only when you type a key (albeit with poor accuracy for non-typewriter keyboards). They list the Android haptics mentioned above as “no location, active affirmation” because there’s nothing on the screen to help you to align your touch properly. Tactus’s bubbles provide passive location and passive affirmation. (The bubbles can be eliminated or raised elsewhere, but that’s typically done in the context of creating a new “form” – once that form is up, they don’t change based on key clicks.) Redux’s own technology is listed as active location and affirmation.
They list eight specific technologies, four of which provide location only:
- Programmable friction using ultrasonics (sounds crazy cool; I will have to come back to it some other time if it comes out of the research phase)
- Programmable shear force – the ShiverPad (which additionally makes use of programmable friction)
- Electrostatic “vibration” – electrostatics used to attract the surface of the skin and simulate a variety of sensations, including vibration
- Microfluidics (Tactus)
The other four provide location and affirmation:
- Whole-panel movement in the Z direction (perpendicular to the surface)
- In-plane whole-panel movement (ViviTouch applies here; more below)
- Bending waves (Redux Labs uses this; more below)
- Pixelated (i.e., localizable, addressable, not whole-panel) z-axis motion – uses a semi-soft panel.
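If it helps to see Redux’s two-axis classification in one place, here’s a tiny sketch that simply restates the examples above as data; the Kotlin names (Mode, HapticProfile) are my own shorthand, not Redux’s terminology:

```kotlin
// A minimal restatement of Redux's location/affirmation taxonomy as data.
// Mode and HapticProfile are my own names, not Redux's terminology.
enum class Mode { NONE, PASSIVE, ACTIVE }

data class HapticProfile(val location: Mode, val affirmation: Mode)

val examples = mapOf(
    "Mechanical keyboard"      to HapticProfile(Mode.PASSIVE, Mode.ACTIVE),
    "Android vibrate-on-touch" to HapticProfile(Mode.NONE, Mode.ACTIVE),
    "Tactus microfluidics"     to HapticProfile(Mode.PASSIVE, Mode.PASSIVE),
    "Redux bending waves"      to HapticProfile(Mode.ACTIVE, Mode.ACTIVE)
)
```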
I’d like to add one further contradictory clarification (if that makes any sense). The whole notion of “tactile feedback” or “sending information to our fingers” may sound complex and futuristic, but, for the most part, it’s not. It’s nothing more than a click or a vibration or some other simple event that provides a non-visual cue. We rely on such cues all the time in our daily lives; we just don’t think about them.
Having said that, ViviTouch also noted that humans have been shown to be able to differentiate up to 85 different haptic signals, suggesting that we could actually communicate in a much more sophisticated manner by touch. That has obvious promise for the sight-impaired, although I honestly wonder how many regular folks would bother to study the haptic “language” – just like few bother to learn Braille unless they need it.
So, with the basics in place, let’s take a closer look at ViviTouch and Redux.
ViviTouch (a division of Bayer) showed a structure that I can only describe as a miniature Oreo cookie. The cookie shell parts are stretchable electrodes; the cream center is an incompressible dielectric polymer. The “incompressible” bit is a little confusing to a non-specialist: it doesn’t mean rigid. You can’t squeeze the volume any smaller, but the material is still squishable – you just can’t compress it vertically and have the sides stay put. Stated differently, if you squish the cookie, then the cream must ooze out the sides.
So when the cookie is electrically compressed, the dielectric flattens and gets wider (which the electrodes can accommodate since they can stretch). This unit can be used at low frequencies to generate vibrations and other effects, and at higher frequencies to generate sound. Being programmable, you can create effects like heartbeats that can be both felt and heard. So it can go well beyond simple haptics and contribute to the overall ambience of a game, for example.
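If the “ooze” argument feels hand-wavy, it’s really just volume conservation: area times thickness stays fixed, so thinner means wider. A back-of-the-envelope sketch with made-up numbers:

```kotlin
// Back-of-the-envelope volume conservation for an incompressible layer:
// area * thickness stays constant, so squeezing the thickness down forces
// the layer to spread sideways. All numbers are made up, and the units are
// arbitrary, since only the ratios matter.
fun main() {
    val thickness0 = 1.0
    val area0 = 1.0
    val volume = area0 * thickness0  // fixed: the polymer can't lose volume

    val thickness1 = 0.9             // electrodes squeeze it 10% thinner
    val area1 = volume / thickness1  // area must grow to compensate

    println("Area grows from $area0 to ${"%.3f".format(area1)}")
    // prints ~1.111: a 10% squeeze in thickness means about 11% more area
}
```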
These actuators can be made quite small, and ViviTouch is looking at various wearable installations as well as smartphones and headphones.
Redux, by contrast, relies on “bending waves.” Which, to me, sounds like a superpower for some Magic: The Gathering character. But no, it’s far less bizarre than you might think: it’s a variant of sound waves, kind of.
We hear sound when longitudinal waves (aka pressure waves) in the air stimulate our eardrums. When those waves impinge on a flat surface, the vibrations travel into the solid as a pressure wave. But they also generate waves along the surface, and, unlike “normal” sound waves, these aren’t longitudinal; they’re basically ripples in the surface, and they contribute to the sound (but we don’t hear them directly; their effect is still delivered through air to our ears using pressure waves). These ripples are referred to as “bending waves” (I guess to contrast them with “squishing waves”).
The thing about panels like touchscreens is that, just like strings and drumheads, they vibrate in modes. A Chladni plate, consisting of a metal plate with a rod that allows you to affix it in a vise, shows these modes dramatically if you sprinkle it with salt or sugar and then stroke it with a violin bow or otherwise get it vibrating. The vibration patterns in the surface can be very complex – and not particularly helpful as feedback because multiple points will have the same vibration.
Redux gets around this issue of modes by using multiple actuators to focus these waves onto a specific spot on the touchscreen. And again, this can be used for haptic feedback or literally for turning your screen into a speaker. They recently announced the use of their technology on smartphones.
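Redux didn’t spell out exactly how the actuators are driven, but the textbook way to focus waves from several sources onto one spot is delay-and-sum: stagger each actuator so that all of the wavefronts arrive at the target at the same instant. Here’s a rough sketch under the simplifying assumption of a single constant wave speed (real bending waves are dispersive, so take this as conceptual only); the positions, speed, and function names are all made up:

```kotlin
import kotlin.math.hypot

// Generic delay-and-sum focusing: given actuator positions on a panel and a
// single assumed wave speed, compute the relative delay each actuator should
// apply so that all wavefronts arrive at the touch point together. This is a
// textbook technique, not a description of Redux's actual algorithm, and all
// of the numbers below are made up.
data class Point(val x: Double, val y: Double)

fun focusDelays(actuators: List<Point>, target: Point, waveSpeed: Double): List<Double> {
    // Time for each actuator's wave to reach the target point.
    val travelTimes = actuators.map { a ->
        hypot(target.x - a.x, target.y - a.y) / waveSpeed
    }
    val latest = travelTimes.maxOrNull() ?: return emptyList()
    // The farthest actuator fires first (zero delay); nearer ones wait.
    return travelTimes.map { latest - it }
}

fun main() {
    // Four actuators at the corners of a 15 cm x 10 cm panel (meters).
    val corners = listOf(
        Point(0.0, 0.0), Point(0.15, 0.0), Point(0.0, 0.10), Point(0.15, 0.10)
    )
    val touch = Point(0.05, 0.03)  // where the finger landed
    val delays = focusDelays(corners, touch, waveSpeed = 100.0)  // m/s, illustrative
    delays.forEachIndexed { i, d -> println("Actuator $i delay: %.6f s".format(d)) }
}
```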
It feels like much of the aura surrounding these technologies is about creating a more immersive experience for gaming or virtual reality. Both sound and tactile vibration play well in that environment. And it’s cool, and it animates well, and it can get the adrenaline and testosterone flowing. But this stuff can have prosaic benefits if worked into our devices to provide simple, low-power feedback that might even tell you, for instance, that you’ve typed two keys instead of one.
If they can use this to synthesize a Selectric effect, I’ll be a particularly happy dude.*
*What do you mean, “I need a life”?
How do you see haptics working into systems you design?