This year’s Interactive Technology Summit took the place of the Touch Gesture Motion conference of the last two years. It was expanded from two days to four, and it moved to downtown San Jose from last year’s Austin retreat (a posh golf resort a $30 cab ride from anything except golf). The content also expanded to take on displays in general, a topic to which the last day was dedicated.
So, with a broader brief, it braved the vagaries of being located where all the engineers are. In other words, engineers can easily attend, but on their way to the conference, they can quickly stop off at the office to take care of one little thing, and then this call comes in, and then that email arrives, and… well… maybe they won’t make it after all. Whatever the reason (and I’m only speculating), attendance did seem a bit sparse.
I had to pick and choose what I checked out, given that the TSensors conference was happening in parallel with it (I don’t seem to make a good parallel processor). My focus has generally been on the “smart” aspects of this technology, namely touch, gestures, and motion. There wasn’t much new in the motion category; we’ll look at touch today and then update gestures shortly.
The last two years seemed to be all about the zillion different ways of recording touches. This year there was less of that, but Ricoh’s John Barrus dug deeper into the whole pen/stylus situation. For so long, we’ve been wowed by touch technology that enables more or less one thing: point and click. (OK, and swipe and pinch.) We’ve been all thumbs as we mash our meaty digits into various types of touch-sensitive material (mostly ITO).
The problem is, our fingers are too big to do anything delicate (well, speaking for myself anyway, as exemplified by my abortive attempt to play a word-scramble game on an airplane seat-back screen). And they cover up the very thing we’re trying to touch. Which is largely why I’ve resisted the wholesale switch from things like keyboards and mice to everything-index-finger. (Yeah, some teenagers think they can two-finger type quickly… perhaps they can, for two fingers, but I’ll blow them away any day with a real keyboard – which matters when what you do is write for a living…)
So I was intrigued to see this presentation, which took a look at a wide variety of stylus and pen approaches, for both personal appliances and large-format items like whiteboards. Two clear applications are note-taking and form-filling. I needed a new laptop recently, and I researched whether stylus/touchscreen technology was fast and fine enough for me to get rid of my paper notebooks and take notes onscreen. (I don’t like to type in meetings – it’s noisy, and I somehow feel it’s rude.) The conclusion I came to was that no, it’s not ready for this yet. So it remains an open opportunity.
He also noted that tablets with forms on them were easily filled out in medical offices by people who had no computer experience; just give them the tablet and a stylus, and it was a very comfortable experience. (Now that’s intuitive.) (And if you wonder why this works for forms but not for notes, well, you haven’t seen my handwriting…)
The technologies he listed as available are:
- Projected capacitance (“pro-cap”) using electromagnetic resonance with a passive pen (i.e., no powered electronics in the pen)
  - The pen needs no battery – the sensor grid energizes a resonant circuit in the pen inductively
  - Good palm rejection – ignores a hand resting on the surface while the pen is in use
  - Good resolution
  - Pressure-sensitive
  - But higher cost
  - Samsung has tablets using this (Galaxy Note)
- Pro-cap with an active pen
  - Can be done in a large format (Perceptive Pixel has done up to 82”)
  - Similar benefits to the prior approach
  - But again, higher cost, plus the pen needs a battery
- “Force-sensitive resistance”
  - Grid of force-sensitive resistors
  - Pressure-sensitive
  - Multi-touch (the first sketch after this list shows how individual touches might be pulled out of the pressure grid)
  - Scales up well
  - But it’s not completely transparent
  - Tactonic Technologies uses this
- There’s an interesting “time-of-flight” approach where the pen sends a simultaneous IR blip and ultrasonic chirp; since the light arrives essentially instantly, the ultrasound delays give the pen’s distance from each receiver, and those distances are used to triangulate the pen’s position (the second sketch after this list works through the math)
  - Lower cost
  - Multiple pens can be tracked independently (say, for whiteboards with more than one person)
  - But it’s not really a touch technology; it’s pen-only
  - The Luidia eBeam Edge and MimioTeach use this
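Since a force-sensitive grid yields a whole pressure map rather than discrete points, turning the readings into touches is essentially a blob-detection problem. Here’s a minimal Python sketch of that step – purely my own illustration; the grid size, threshold, and flood-fill approach are assumptions, not details from the talk:

```python
# Sketch: extracting multi-touch points from a force-sensitive resistor
# (FSR) grid. Assumed, not from the talk: each cell holds an ADC pressure
# reading, and a touch is a contiguous cluster above a noise threshold.

THRESHOLD = 40  # assumed ADC noise floor

def find_touches(grid):
    """Return (row, col, total_pressure) centroids, one per cluster of
    cells whose reading exceeds THRESHOLD."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    touches = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > THRESHOLD and (r, c) not in seen:
                # Flood-fill the cluster of pressed cells.
                stack, cluster = [(r, c)], []
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if grid[y][x] <= THRESHOLD:
                        continue
                    seen.add((y, x))
                    cluster.append((y, x, grid[y][x]))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                # Pressure-weighted centroid gives sub-cell resolution.
                total = sum(p for _, _, p in cluster)
                cy = sum(y * p for y, _, p in cluster) / total
                cx = sum(x * p for _, x, p in cluster) / total
                touches.append((cy, cx, total))
    return touches

# Two simulated fingers pressing a 6x8 grid:
grid = [[0] * 8 for _ in range(6)]
grid[1][1], grid[1][2], grid[2][1] = 120, 80, 90   # finger 1
grid[4][6], grid[4][5] = 200, 60                   # finger 2
print(find_touches(grid))  # two centroids, one per finger
```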
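And here’s how the IR-plus-ultrasound ranging can resolve a pen position. The receiver layout (two ultrasound receivers along the top edge of a 1.2 m board) and the constants are my own assumptions for illustration; the actual products may differ. The IR pulse arrives effectively instantly, so the lag until the chirp arrives is a direct distance measurement, and two distances pin the pen down by intersecting circles:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature; the IR blip is treated
                        # as instantaneous, so each delay maps to distance.

RX_A = (0.0, 0.0)   # assumed: two ultrasound receivers mounted at the
RX_B = (1.2, 0.0)   # top corners of a 1.2 m whiteboard

def pen_position(delay_a, delay_b):
    """Locate the pen from the IR-to-chirp delays (in seconds) at each
    receiver by intersecting the two distance circles."""
    r_a = delay_a * SPEED_OF_SOUND
    r_b = delay_b * SPEED_OF_SOUND
    d = RX_B[0] - RX_A[0]                        # baseline between receivers
    x = (r_a**2 - r_b**2 + d**2) / (2 * d)       # circle-intersection x
    y_sq = r_a**2 - x**2
    if y_sq < 0:
        raise ValueError("inconsistent delays (noise or out of range)")
    return (x, math.sqrt(y_sq))                  # take the on-board solution

# Simulate a pen 0.5 m right and 0.4 m below receiver A:
true_pos = (0.5, 0.4)
d_a = math.hypot(true_pos[0] - RX_A[0], true_pos[1] - RX_A[1]) / SPEED_OF_SOUND
d_b = math.hypot(true_pos[0] - RX_B[0], true_pos[1] - RX_B[1]) / SPEED_OF_SOUND
print(pen_position(d_a, d_b))  # ≈ (0.5, 0.4)
```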
Then there are a bunch of light-based approaches, some of which we’ve seen before. But, unlike the screens that carry light through the glass, most of these project and detect light above the surface. One simple approach is to shine light across the surface and detect the shadows that a finger (or pen) casts; Smart Technologies and Baanto both use this.
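As a rough illustration of the shadow idea (my own geometry, not any vendor’s actual implementation), imagine a sensor in each top corner that reports the angle at which the shadow falls; the touch is where the two shadow rays cross:

```python
import math

# Assumed layout: sensors at the two top corners of a screen of
# normalized width 1.0; angles measured from the top edge, opening
# downward into the screen.
WIDTH = 1.0

def touch_point(theta_left, theta_right):
    """Intersect the two shadow rays (angles in radians)."""
    # Ray from left corner:  y = x * tan(theta_left)
    # Ray from right corner: y = (WIDTH - x) * tan(theta_right)
    tl, tr = math.tan(theta_left), math.tan(theta_right)
    x = WIDTH * tr / (tl + tr)
    y = x * tl
    return (x, y)

# Both shadows at 45 degrees put the finger at the screen's midline:
print(touch_point(math.radians(45), math.radians(45)))  # (0.5, 0.5)
```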
Other large-format (i.e., whiteboard) installations rely on a camera and some kind of light source.
- In some cases, the source is the pen itself, which emits an IR signal (Epson BrightLink).
- In another, a projector casts a high-speed structured pattern onto the surface that’s sensed by the pen (TI’s DLP technology in Dell’s S320wi; the sketch after this list shows the general position-coding trick).
- Another version sends an IR “light curtain” down over the surface; a camera measures reflections as fingers (or whatever) break that curtain (Smart Technologies LightRaise).
- There’s even a whiteboard with a finely-printed pattern and a pen with a camera that detects the position and communicates it via Bluetooth (PolyVision Eno).
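TI hasn’t published the details of its pattern scheme (as far as I know), but the general trick is easy to illustrate: flash a sequence of binary frames that, taken together, spell out a unique code at every position; the pen’s photosensor reads one bit per frame and decodes where it is. Here’s a toy Python version using a standard Gray code for one axis – my own stand-in, not TI’s actual encoding:

```python
# Toy temporal position coding: the projector flashes BITS binary frames;
# the bit sequence seen at any column is that column's Gray code, so the
# pen can recover its column index from what its photosensor observed.

def gray_encode(n):
    return n ^ (n >> 1)

def gray_decode(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

BITS = 10  # 2**10 = 1024 addressable columns (assumed resolution)

def frames_seen_at(column):
    """Bits the pen would observe over BITS frames at a given column."""
    g = gray_encode(column)
    return [(g >> b) & 1 for b in reversed(range(BITS))]

def decode(bits):
    """Reassemble the observed bits (MSB first) into a column index."""
    g = 0
    for bit in bits:
        g = (g << 1) | bit
    return gray_decode(g)

print(decode(frames_seen_at(618)))  # -> 618
```

(A Gray code is a natural choice here because adjacent columns differ in only one bit, so a pen sitting on a boundary can be off by at most one position; a second pass of row frames would give the other axis.)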
The general conclusion was that the various technology options work pretty well; it’s mostly cost that still needs to come down.
There are also some issues with collaboration for whiteboard and video conferencing applications, but we’ll cover those later.