Sensors are literally taking over the world. Projections vary as to actual numbers – some say we will reach a trillion – but it is safe to say that we are in the middle of an exponential explosion in the number of sensors deployed in the world. Beyond the obvious gajillions in smartphones, sensors are being designed into just about every kind of embedded system you can imagine. All those sensors promise a revolution in the real-world intelligence of the systems we all design.
One of the biggest problems with putting sensors into our systems is – they’re kinda high-maintenance from a processing point of view. All day long, like cranky little toddlers, they keep dribbling out data and wanting us to deal with it. If you get very many sensors in your system, you really need a nanny to take care of them all, so you can get back to the important business of letting your embedded computer do some embedded computing.
That nanny is called a sensor hub, and that babysitting task is known as “sensor fusion” – yeah, sounds sexier than babysitting, huh? To accomplish sensor fusion, you need some hardware that can sit and constantly monitor your sensors, extracting and analyzing the information when needed. You don’t want to waste the time (and, more importantly, the power) of your big ol’ applications processor looping through boring data collection or handling interrupts from pesky, needy sensors. Most people use something like a microcontroller for this babysitting job. That’s cool if you can live with the amount of energy a microcontroller burns for long stints of sensor monitoring, and if the microcontroller can do enough computing to meet your sensor data analysis needs.
Unfortunately, that is often not the case.
What if you want to automatically figure out whether the dude wearing your sensors is standing or sitting; walking, doing pushups, or riding a bike; climbing a mountain or riding in a car? This is called “determining context,” and, as you can imagine, it takes quite a bit of fancy computation on data from a number of different sensors.
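For a taste of what that fancy computation involves, here is a minimal sketch of one tiny ingredient: classifying a window of accelerometer samples as still, walking, or running based on the variance of the acceleration magnitude. The thresholds and the three-way split are made up for illustration – this is not QuickLogic’s or anyone’s real algorithm, and real context engines fuse many sensors and far richer features.

```c
/* Toy context classifier: variance of accelerometer magnitude over a
 * one-second window. Thresholds are illustrative assumptions only. */
#include <math.h>
#include <stdio.h>

#define WINDOW 50  /* samples, e.g. one second at 50 Hz */

typedef enum { CTX_STILL, CTX_WALKING, CTX_RUNNING } context_t;

static context_t classify(const float ax[], const float ay[], const float az[])
{
    float mag[WINDOW], mean = 0.0f, var = 0.0f;

    for (int i = 0; i < WINDOW; i++) {
        mag[i] = sqrtf(ax[i] * ax[i] + ay[i] * ay[i] + az[i] * az[i]);
        mean += mag[i];
    }
    mean /= WINDOW;

    for (int i = 0; i < WINDOW; i++)
        var += (mag[i] - mean) * (mag[i] - mean);
    var /= WINDOW;

    /* Hypothetical thresholds in (m/s^2)^2 -- tune against real data. */
    if (var < 0.05f) return CTX_STILL;
    if (var < 2.0f)  return CTX_WALKING;
    return CTX_RUNNING;
}

int main(void)
{
    float ax[WINDOW] = {0}, ay[WINDOW] = {0}, az[WINDOW];
    for (int i = 0; i < WINDOW; i++)
        az[i] = 9.81f;  /* device lying flat and motionless */
    printf("context = %d\n", classify(ax, ay, az)); /* 0 = CTX_STILL */
    return 0;
}
```

And that is just one feature on one sensor. Multiply by nine axes, add filtering, feature extraction, and a real classifier, and you can see why this takes serious compute.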
Uh-oh – we have now entered a domain where we want an ultra-low-power processor that is almost always on but can do fancy computations to determine context. And when we want to accelerate computation while maintaining low power consumption, we have entered the domain of FPGAs. As we have often discussed here, FPGAs can implement costly computations in hardware, saving enormous amounts of power while crunching numbers at incredible speed.
As we have also discussed here, one doesn’t find many FPGAs suitable for use in battery-powered devices. In fact, the list of mainstream FPGAs that one would dream of using as a sensor hub is basically empty. Most FPGAs would burn your whole daily power budget just getting configured. But the concept of using programmable logic is a sound one, provided you can do two things: find an ultra-low-power FPGA suitable for battery-powered applications, and program that FPGA with smart, efficient sensor fusion algorithms.
Enter QuickLogic, who has just announced ultra-low-power sensor hubs based on their brand-new PolarPro 3 and ArcticLink 3 programmable logic devices. We’ll talk more about ArcticLink 3 in a bit, but picture a programmable logic device with microwatt-scale power consumption and built-in hardware to detect context changes. That device could sit in your system all day, sipping microwatts and babysitting sensors, and awaken the application processor only when it detects that context has changed.
“Hey apps processor – our guy is doing something different now, what is it?”
“Yawn. Uh, give me your latest data and, let me see… OK, now he’s walking. I’m going back to sleep. Wake me up when something different happens.”
“OK, have a nice nap.”
The QuickLogic device then resumes quietly collecting and storing sensor data and monitoring it for signs of context change. When it detects another change, it can briefly awaken the applications processor, which will punch out a quick analysis to figure out the new context. This means that the system can be always aware of context changes, while only occasionally awakening the application processor.
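To make that handoff concrete, here is a sketch of what the application-processor side of the conversation might look like. Everything here is hypothetical – QuickLogic hasn’t published a host-side API in this announcement – and the hub’s interrupt is simulated with a direct function call so the example runs anywhere; on real hardware the hub would assert an interrupt line, and the host would fetch the buffered samples over I2C or SPI.

```c
/* Sketch of the hub-to-host wakeup protocol. All names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

static volatile bool context_change_pending = false;

/* Would be registered as the GPIO interrupt handler on real hardware. */
static void hub_irq_handler(void) { context_change_pending = true; }

/* Stand-in for reading the hub's sample buffer over I2C or SPI. */
static int read_hub_buffer(float *out, int max)
{
    if (max < 1) return 0;
    out[0] = 1.3f;  /* pretend sensor datum */
    return 1;
}

int main(void)
{
    hub_irq_handler();  /* hub says: "our guy is doing something different" */

    while (context_change_pending) {
        context_change_pending = false;
        float samples[64];
        int n = read_hub_buffer(samples, 64);
        printf("apps processor awake: %d new samples to classify\n", n);
        /* ...run the heavyweight context classifier here, then put
         * the apps processor back into its low-power sleep state... */
    }
    return 0;
}
```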
QuickLogic has done an analysis of what it takes to fit into this very challenging socket. Brian Faith (QuickLogic VP of marketing) explains that smartphone makers are willing to allocate about 1-2% of system power to implement always-on, context-aware sensor monitoring. When you do the math – take the battery capacity of your smartphone, divide by the standby time, and take 1-2% of that – you get a number smaller than what most MCUs burn, and that doesn’t even include the power consumed by nine axes’ worth of sensors (3-axis accelerometer, gyroscope, and magnetometer). When you factor that in, you’ll quickly conclude that you can’t do always-on, context-aware sensor fusion with an MCU.
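If you want to run that math yourself, here is the back-of-the-envelope version with hypothetical (but typical) smartphone numbers: a 1500 mAh battery at 3.7 V and 200 hours of rated standby.

```c
/* Back-of-the-envelope sensor-hub power budget. Battery capacity and
 * standby time are illustrative assumptions, not vendor figures. */
#include <stdio.h>

int main(void)
{
    double battery_wh  = 1.5 * 3.7;               /* 1500 mAh at 3.7 V = 5.55 Wh */
    double standby_h   = 200.0;                   /* rated standby time */
    double avg_power_w = battery_wh / standby_h;  /* ~27.8 mW average drain */
    double budget_lo   = 0.01 * avg_power_w;      /* 1% of system power */
    double budget_hi   = 0.02 * avg_power_w;      /* 2% of system power */

    printf("sensor-hub budget: %.0f to %.0f microwatts\n",
           budget_lo * 1e6, budget_hi * 1e6);     /* ~278 to ~555 uW */
    return 0;
}
```

A few hundred microwatts, and that envelope still has to cover the sensors themselves – which is exactly why an MCU that burns milliwatts doing continuous fusion misses the mark.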
QuickLogic’s new devices are designed specifically to solve this problem within this power budget. The new Ultra-Low-Power Sensor Hub (ULPSH), with its roughly 300-microwatt compute power, combined with current-generation sensors weighs in just under the 2%-of-system-power number. With next-gen sensors, the power consumption drops to around 1%. The company claims that these new devices are the only solutions that can accomplish always-on, context-aware sensor fusion within the required 1-2% power budget for smartphones.
Of course, using an FPGA as a heterogeneous computing device can require some tricky programming. As we all know, partitioning an algorithm between hardware and software components, creating HDL to implement the hardware acceleration parts, simulation, synthesis, place and route – these are not for the faint of heart. QuickLogic solves this problem by doing your design for you. Several years ago, the company abandoned the “usual” FPGA-supplier practice of predominantly selling and supporting do-it-yourself design tools in favor of a more turnkey approach – which the company calls “Customer-Specific Standard Products” (CSSPs).
So, even if the programming isn’t a barrier – where do these fancy context algorithms come from? Ah, good question, fictitious straight man. And here’s a great answer: QuickLogic has partnered with a company called Sensor Platforms, Inc. – a leading developer of context-awareness algorithms. Their algorithms run behind the scenes on the QuickLogic part, helping your device figure out the current context.
How do these newfangled sensor hubs work? QuickLogic’s two new device families – PolarPro 3 and ArcticLink 3 – can provide two different levels of capability. ArcticLink 3 gives you full-boat context-aware always-on sensor fusion. PolarPro 3 can give you multi-axis sensor monitoring and data buffering – without the context awareness capability.
Starting at the sensors, a microcoded state machine constantly monitors your multi-axis sensor array, using a trivial amount of power and, with the flexibility of microcode, adapting to whatever specific configuration of sensors your system requires. On the ArcticLink solution, the next layer is what the company calls its “Flexible Fusion Engine,” which processes and stores the incoming data while looking for context changes. In the PolarPro 3 solution, this level simply buffers ten seconds of data.
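To picture what buffering ten seconds of data looks like, here is a minimal ring-buffer sketch. The 50 Hz sample rate and buffer depth are illustrative assumptions, not QuickLogic specifics; the point is simply that the oldest samples fall off the back while the freshest ten seconds stay available for the apps processor to fetch.

```c
/* Minimal ten-second sample ring buffer. Sizes are illustrative. */
#include <stdio.h>

#define SAMPLE_HZ 50
#define SECONDS   10
#define DEPTH     (SAMPLE_HZ * SECONDS)  /* 500 samples */

typedef struct { float x, y, z; } sample_t;

typedef struct {
    sample_t buf[DEPTH];
    int head;   /* next slot to write */
    int count;  /* valid samples, saturates at DEPTH */
} ring_t;

static void ring_push(ring_t *r, sample_t s)
{
    r->buf[r->head] = s;
    r->head = (r->head + 1) % DEPTH;
    if (r->count < DEPTH)
        r->count++;
}

int main(void)
{
    static ring_t r;
    /* Push 12 seconds of samples; the oldest 2 seconds get overwritten. */
    for (int i = 0; i < 12 * SAMPLE_HZ; i++)
        ring_push(&r, (sample_t){ 0.0f, 0.0f, 9.81f });
    printf("buffered %d samples (%d s window)\n", r.count, r.count / SAMPLE_HZ);
    return 0;
}
```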
The next level of processing is a communication manager that coordinates handoff with your application processor. Again, taking advantage of the flexibility of programmable logic, the device can be customized for your particular system’s communication needs.
The whole thing is delivered in a tiny ~2mm x 2.5mm form factor – right in line with your needs for a compact, battery-powered mobile device.
QuickLogic supplies everything you need to design-in one of these babies – development kits, software for custom algorithm development, operating system drivers, tuned algorithms, and app notes and function libraries. The devices are sampling now. As for the details on the two new logic families, ArcticLink 3 and PolarPro 3? Stay tuned.
QuickLogic is carving out a unique position in sensor fusion with their new family of devices. Can you think of other ways to accomplish full-time context-aware sensor monitoring on such a small power budget?