We’re used to touch being about locating one or more fingers or objects on a surface. This is inherently a 2D process. Although much more richness is being explored for the long term, one third dimension that seems closer at hand is pressure: how hard are we pushing down, and can we use that to, for instance, grab an object for dragging?
At the 2011 Touch Gesture Motion conference, one company that got a fair bit of attention was Flatfrog, which uses a light-based approach, with LEDs and sensors around the edge of the screen to triangulate touch positions. At the 2012 Touch Gesture Motion conference, when 2D seemed so 2011, pressure was a more frequent topic of conversation. But on the face of it, an optical technology like Flatfrog’s wouldn’t be amenable to measuring pressure, since nothing in the light path actually senses force.
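To make the optical idea concrete, here’s a toy sketch of the simplest form of light-based touch location. This is my own illustration, not Flatfrog’s actual algorithm: imagine beams crossing the screen in x and y, with a touch shadowing a few of each, so the midpoints of the shadowed beam ranges give a position. The function name, the beam pitch, and the grid-occlusion model itself are all assumptions for the sake of the example.

```python
# Toy grid-occlusion model of optical touch location (an illustration,
# not Flatfrog's actual algorithm). Beams run across the screen in x
# and y; a touch shadows a few of each, and the midpoints of the
# shadowed beam indices give an (x, y) position.

def locate(blocked_x, blocked_y, beam_pitch_mm=5.0):
    """blocked_x / blocked_y: indices of the beams a touch occludes."""
    if not blocked_x or not blocked_y:
        return None  # nothing touching
    x = (min(blocked_x) + max(blocked_x)) / 2 * beam_pitch_mm
    y = (min(blocked_y) + max(blocked_y)) / 2 * beam_pitch_mm
    return (x, y)

print(locate([11, 12, 13], [40, 41]))  # (60.0, 202.5)
```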
Unless…
If you have a squishy object like a finger, then you can use what I’ll call the squish factor to infer pressure. This is what Flatfrog does: when a finger (for example) touches down, they normalize the width of the contact, and then track how that width grows as the finger (or whatever) squishes against the glass. Which means this only works with materials that squish. Metal? Not so much.
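Here’s a minimal sketch of that squish factor, assuming the sensor reports a contact width each frame (the class name, parameter names, and sample numbers are all invented): normalize the width at touch-down, then read relative pressure off how much the contact has widened since.

```python
# A minimal sketch of the "squish factor", assuming the sensor reports
# a contact width every frame (names and numbers are invented):
# normalize the width at touch-down, then read relative pressure off
# how much the contact has widened since.

class SquishTracker:
    def __init__(self, touchdown_width_mm):
        self.w0 = touchdown_width_mm  # baseline width at first contact

    def relative_pressure(self, width_mm):
        # 0.0 at touch-down; grows as the finger flattens against the glass
        return max(0.0, width_mm / self.w0 - 1.0)

tracker = SquishTracker(touchdown_width_mm=8.0)
for w in (8.0, 8.8, 10.0):  # the finger pressing progressively harder
    print(round(tracker.relative_pressure(w), 2))  # 0.0, 0.1, 0.25
```

A UI could then treat anything above some threshold of that ratio as a press that grabs the object under the finger, which is exactly the dragging scenario above.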
You might wonder how they can resolve such small changes in width using an array of LEDs that are millimeters apart. For a single LED and an array of sensors, the resolution might indeed be insufficient. But because they have so many LEDs, the combined measurements from all of them let them resolve structure much finer than the LED spacing.
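This is the same trick that lets a centroid locate a blurry spot to a small fraction of a pixel. Here’s a toy demonstration of the principle (again my own illustration with invented numbers, not Flatfrog’s reconstruction): each sensor, spaced one unit apart, sees a broad, smooth response to a touch, and a weighted centroid over all the readings pins the touch down far more finely than the spacing.

```python
# Toy demonstration of sub-spacing resolution from many overlapping
# measurements (an illustration, not Flatfrog's reconstruction).

import math

def readings(true_pos, n_sensors=16, spread=2.0):
    # simulated per-sensor signal for a touch at true_pos
    return [math.exp(-((i - true_pos) / spread) ** 2) for i in range(n_sensors)]

def centroid(values):
    # weighted average of sensor indices, weighted by signal strength
    return sum(i * v for i, v in enumerate(values)) / sum(values)

print(round(centroid(readings(true_pos=7.32)), 2))  # ~7.32, despite 1-unit spacing
```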
There is a cost to this, of course: pressure tracking adds about 100 million instructions per second of processing. “Ouch!” you say? Actually, it’s not that bad: their baseline budget without pressure is about 2 billion instructions per second, so this is about a 5% adder.
More information at their website…