These days we talk about touch technology as if it’s a new thing. Actually, we’ve been using touch for years (since 1994, to be specific) on our laptops. (OK, I know, a lot of us don’t like the touchpad, but it certainly does predate the touchscreen.)
I talked with Fred Caldwell of Synaptics, one of the oldest names in the field, at the Touch Gesture Motion conference last month. They used to make touchpads themselves; you may remember their name from a laptop you owned long ago. They actually stopped making touchpads when they realized that their field applications engineers (FAEs) were spending too much time solving their customers’ manufacturing yield problems.
So they got out of the hardware business and focused on software. They can now work with any touch sensor, clear or opaque.
It’s actually a bit misleading to say they do only software. It’s more accurate to say that they do the calibration and signal conditioning, and they have their own ASIC for the purpose: a custom architecture with a dedicated instruction set. They need it because there’s simply far too much raw data to send to a host if the host were to do the work. The incremental hardware cost is minimal, and doing the work on the ASIC is far more power-efficient than burdening the host.
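Synaptics isn’t saying exactly what that ASIC does, but the bandwidth argument is easy to sanity-check. Here’s a back-of-the-envelope sketch in C; the sensor geometry, scan rate, and report format are all invented for illustration, not Synaptics’ actual numbers:

```c
#include <stdio.h>

/* Hypothetical sensor parameters -- invented for illustration. */
#define ROWS         30     /* sense rows                     */
#define COLS         50     /* sense columns                  */
#define FRAME_HZ     120    /* scan rate, frames per second   */
#define SAMPLE_BYTES 2      /* one 16-bit capacitance sample  */

int main(void)
{
    /* Raw path: ship every capacitance sample to the host. */
    unsigned raw_bps = ROWS * COLS * SAMPLE_BYTES * FRAME_HZ * 8;

    /* Processed path: the controller boils each frame down to a few
     * touch reports -- say, up to 10 fingers at 6 bytes (x, y, z) each. */
    unsigned report_bps = 10 * 6 * FRAME_HZ * 8;

    printf("raw samples:   %u bits/s\n", raw_bps);    /* ~2.9 Mbit/s  */
    printf("touch reports: %u bits/s\n", report_bps); /*  57.6 kbit/s */
    return 0;
}
```

Even with these modest made-up numbers, processing on the controller cuts the interface traffic by a factor of 50, and bigger panels or faster scan rates only widen the gap. That’s where the power savings come from, too: fewer bits moved across the interface, and a host that can stay asleep.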
When dealing with a new display, they have a calibration wizard that maps out the sensor and establishes a rough baseline. Beyond that, their system auto-calibrates in real time, once per frame, to account for changing conditions (location, grounding, etc.).
“How do they do that?” you wonder. Ah… that would be a secret. They’re keeping mum about that.
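Still, a garden-variety baseline tracker gives a feel for what a once-per-frame auto-calibration loop has to accomplish. The sketch below is generic, not Synaptics’ secret sauce: the cell count, threshold, and drift rate are all made up, and the real algorithm is surely cleverer about telling environmental drift from an actual touch.

```c
#include <stdint.h>
#include <stdlib.h>

#define CELLS        1500   /* e.g. a 30 x 50 grid of sensing nodes     */
#define TOUCH_THRESH 40     /* counts above baseline that mean "finger" */
#define ALPHA_DIV    64     /* baseline closes 1/64 of the gap per frame */

typedef struct {
    int32_t baseline[CELLS];   /* slowly tracked "no touch" level */
} tracker_t;

/* One-time setup from a quiet frame -- the "wizard" step. */
void tracker_init(tracker_t *t, const uint16_t raw[CELLS])
{
    for (int i = 0; i < CELLS; i++)
        t->baseline[i] = raw[i];
}

/* Called once per scan frame. Emits per-cell deltas for the touch
 * detector, and drifts the baseline toward the raw reading on cells
 * that are NOT being touched, so slow environmental shifts get
 * absorbed without eating a finger that is actually present. */
void tracker_update(tracker_t *t, const uint16_t raw[CELLS],
                    int16_t delta_out[CELLS])
{
    for (int i = 0; i < CELLS; i++) {
        int32_t delta = (int32_t)raw[i] - t->baseline[i];
        delta_out[i] = (int16_t)delta;

        if (abs(delta) < TOUCH_THRESH) {
            int32_t step = delta / ALPHA_DIV;
            if (step == 0 && delta != 0)      /* keep tiny drift moving */
                step = (delta > 0) ? 1 : -1;
            t->baseline[i] += step;
        }
    }
}
```

The tension the code illustrates is the heart of the problem: the baseline must drift fast enough to absorb temperature and grounding changes, yet never so fast that it “calibrates away” a finger resting on the sensor. Whatever they’re doing per frame, it has to resolve that trade-off.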