We’ve looked at QuickLogic’s sensor hub solution in some detail in the past. It’s programmable logic at its heart, but it’s sold as a function-specific part (in contrast to Lattice, which sells a general-purpose low-power part into similar applications). QuickLogic recently announced a wearables offering, which got me wondering how different it was from their prior sensor hub offering.
After all, it’s really kind of the same thing, only aimed at a very specific application: gadgets intended to be worn, which are battery-powered and require the utmost in power-miserliness to be successful.
You may recall that QuickLogic’s approach is an engine implemented in their programmable fabric. They’ve then put together both a library of pre-written algorithms and a C-like language that allows implementation of custom algorithms; in both cases, the algorithms run on that engine. So the question here is, did the engine change for the wearable market, or is it just a change in the algorithms?
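To make that a little more concrete, here’s a rough sketch, written in ordinary C rather than in QuickLogic’s actual algorithm language, of the general shape such a custom algorithm might take: a routine the engine calls on each batch of accelerometer samples, keeping a little state between batches and emitting a classification. The names, the batch size, and the callback structure are all my own invention for illustration; only the overall idea of offloading per-sample work to the engine comes from QuickLogic’s description.

/* Hypothetical sketch only -- not QuickLogic's actual API or algorithm
 * language. It simply illustrates the general shape of a custom sensor
 * algorithm an offload engine might run: per-batch processing over raw
 * accelerometer samples with a small amount of persistent state. */

#include <stdint.h>
#include <stdlib.h>

#define BATCH_LEN 32                  /* samples per batch (illustrative) */

typedef struct {
    int32_t smoothed_energy;          /* filtered motion-energy estimate  */
    uint8_t last_context;             /* most recent classification       */
} algo_state_t;

/* Called by the (hypothetical) engine once per accelerometer batch. */
uint8_t classify_motion(algo_state_t *st, const int16_t accel[][3])
{
    /* Crude motion energy: mean absolute value across the batch. */
    int32_t energy = 0;
    for (int i = 0; i < BATCH_LEN; i++) {
        energy += abs(accel[i][0]) + abs(accel[i][1]) + abs(accel[i][2]);
    }
    energy /= BATCH_LEN;

    /* Simple IIR smoothing so one noisy batch doesn't flip the result. */
    st->smoothed_energy = (st->smoothed_energy * 3 + energy) / 4;

    /* Placeholder threshold; a real context algorithm would be far richer. */
    st->last_context = (st->smoothed_energy > 5000) ? 1 /* moving */
                                                    : 0 /* still  */;
    return st->last_context;
}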
Image courtesy QuickLogic
I checked in, and they confirmed that the engine has not changed – it’s the same as for the general sensor hub. What they have done is focus the libraries on context and gesture algorithms most applicable to the wearables market.
Some time back, we looked at how different sensor fusion guys approach the problem of figuring out where your phone is on you. A similar situation exists for wearables, both in terms of classifying what the wearer is doing and of determining the gadget’s relationship to the wearer. QuickLogic’s approach supports six different states (or contexts): walking, running, cycling, in-vehicle, on-person, and not-on-person.
They’ve also added two wearable-specific gestures for waking the device up either by tapping it or by rotating the wrist.
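Pulling the contexts and gestures together, here’s a minimal host-side sketch of how an application processor might consume these events. The enum values, event structure, and handler are my own invention for illustration, not QuickLogic’s actual interface; the point is simply that the hub classifies context and spots wake gestures on its own, so the application processor wakes only when something interesting happens.

/* Hypothetical host-side sketch -- event names and API are illustrative,
 * not QuickLogic's actual interface. The sensor hub classifies context
 * and detects wake gestures autonomously; the application processor only
 * needs to respond when an event it cares about arrives. */

#include <stdint.h>
#include <stdio.h>

typedef enum {                 /* the six contexts described above */
    CTX_WALKING,
    CTX_RUNNING,
    CTX_CYCLING,
    CTX_IN_VEHICLE,
    CTX_ON_PERSON,
    CTX_NOT_ON_PERSON
} hub_context_t;

typedef enum {                 /* the two wearable wake gestures */
    GESTURE_TAP,
    GESTURE_WRIST_ROTATE
} hub_gesture_t;

typedef struct {
    uint8_t is_gesture;        /* 1 = gesture event, 0 = context change */
    union {
        hub_context_t context;
        hub_gesture_t gesture;
    } u;
} hub_event_t;

/* Invoked (e.g., from a hub interrupt) when the engine reports an event. */
void on_hub_event(const hub_event_t *ev)
{
    if (ev->is_gesture) {
        /* Either wake gesture turns the display/application back on. */
        printf("wake gesture %d: light up the display\n", ev->u.gesture);
    } else {
        /* Context changes can be logged without waking the user-facing UI. */
        printf("context changed to %d\n", ev->u.context);
    }
}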
Critically, the hub does all of this, context classification and gesture detection alike, drawing under 250 µW when active.
You can read more in their announcement.