I recently got a chance to talk with Hillcrest Labs, another big mover in the motion market. In fact, their pedigree sounds remarkably like that of Movea, whom we’ve looked at in the past: starting with interactive TV and transitioning to broader motion.
They’ve gone on to develop a gesture library (released but, as of this writing, not officially announced) that covers more than 50 gestures: the typical control gestures plus numerals, letters, cardinal directions, and rotations.
We discussed the usual sensor fusion stuff, which, of course, is now bread and butter for them. They do this in a sensor-agnostic way, writing drivers from the datasheets and using an internal lab to characterize the sensors for data the datasheets don’t provide.
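Hillcrest hasn’t published what their driver layer looks like, but the idea is common enough to sketch. Something like the following (all names here are invented for illustration) lets everything above the drivers program against one interface, with each part-specific driver doing the datasheet-derived scaling:

```c
#include <stdint.h>

/* Hypothetical sketch of a sensor-agnostic driver layer; not Hillcrest's
 * actual API. Each part-specific driver fills in init/read and converts
 * raw register values to SI units per its datasheet. */

typedef struct {
    float x, y, z;          /* sample in SI units after datasheet scaling */
    uint64_t timestamp_us;  /* capture time, microseconds                 */
} sensor_sample_t;

typedef struct sensor_driver {
    const char *part_name;  /* e.g. the vendor part number               */
    int  (*init)(struct sensor_driver *self);
    int  (*read)(struct sensor_driver *self, sensor_sample_t *out);
    void *hw_ctx;           /* bus handle, register map, etc.            */
} sensor_driver_t;
```

Swapping in a different gyro then means supplying a different init/read pair; nothing above the driver has to change.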
The drivers live at the lowest level of the software stack, shielding the upper layers from the specifics of different sensors; sensor fusion sits above that. But there’s actually a layer between the drivers and the fusion where they consider themselves particularly strong: calibration.
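Continuing the invented sketch above, the per-sample path through that stack would run driver, then calibration, then fusion; calibrate_sample() and fusion_ingest() here are hypothetical stand-ins for the two upper layers:

```c
/* Hypothetical stand-ins for the calibration and fusion layers. */
void calibrate_sample(sensor_sample_t *s);      /* corrects the data       */
void fusion_ingest(const sensor_sample_t *s);   /* consumes corrected data */

int process_sample(sensor_driver_t *drv)
{
    sensor_sample_t s;
    if (drv->read(drv, &s) != 0)
        return -1;            /* the driver hides part-specific details   */
    calibrate_sample(&s);     /* fix the data before anyone interprets it */
    fusion_ingest(&s);        /* fusion combines the corrected streams    */
    return 0;
}
```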
And this isn’t just individual sensor calibration; they do it across sensors, so that, for instance, a compass and a gyroscope can cross-calibrate each other. That might sound like fusion, but the goal is different: fusion uses multi-sensor data to derive some higher-order piece of information, while calibration, even when done across sensors, makes sure the data itself is accurate.
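As a simplified illustration of that distinction (my own toy example, not Hillcrest’s algorithm): a compass can correct a gyroscope by attributing any persistent drift between the gyro-integrated heading and the compass’s absolute heading to gyro bias. The output is better gyro data, not some new higher-order quantity:

```c
#include <math.h>

#define TWO_PI 6.28318530718f

/* Toy cross-calibration example: the compass's absolute heading is used
 * to estimate the gyro's z-axis bias. Invented for illustration. */
typedef struct {
    float gyro_bias_z;     /* current z-axis bias estimate, rad/s        */
    float integrated_hdg;  /* heading obtained by integrating the gyro   */
} gyro_cal_t;

/* Call once per sample with the raw gyro rate (rad/s), compass heading
 * (rad), and sample period dt (s). Returns the bias-corrected rate.     */
static float calibrated_rate(gyro_cal_t *cal, float gyro_z,
                             float compass_hdg, float dt)
{
    const float gain = 0.01f;                 /* slow bias adaptation     */
    float rate = gyro_z - cal->gyro_bias_z;   /* apply current correction */

    cal->integrated_hdg += rate * dt;

    /* Heading error, wrapped to [-pi, pi]. Persistent drift relative to
     * the compass's absolute heading is attributed to gyro bias.        */
    float err = remainderf(cal->integrated_hdg - compass_hdg, TWO_PI);

    cal->gyro_bias_z    += gain * err;        /* learn the bias          */
    cal->integrated_hdg -= gain * err;        /* pull heading back too   */

    return rate;
}
```

The same structure generalizes: any sensor with an absolute reference can discipline one that drifts, without producing anything you’d call fused output.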
A few months ago they announced their official entry into the mobile market; you can see the details of that in their release.