
Minimizing Idle Time

Sensor Hubs Rapidly Gain Ground

Any of you who are as much a fan of a good road trip as I am will have stopped off at more than your fair share of truck stops. And you'll know that there's no such thing as a quiet truck stop. Even if all of the traffic stopped and the fuel pumps stopped and the insipid music playing inside stopped, it wouldn't be quiet.

When everything else goes away, you’re left with the constant thrum of diesel engines idling for hours on end. And you might wonder, isn’t that a waste of fuel? Why don’t they just shut off their engines if they’re going to be gone for more than ten minutes or so?

Well, the internet, font of all knowledge when it comes to idle questions like this, will serve up a number of reasons why they do this. The explanations usually fall into two camps: cab comfort – the drivers often sleep in their cabs – and the fact that diesels are hard to start, especially when cold. One unsubstantiated claim that I saw more than once was that idling for ten minutes uses less fuel than restarting.

So these make sense as reasons, and yet you're (or, at least, I am) left with the nagging feeling that keeping an entire monster diesel engine – one capable of pulling a mighty load – running for 8 hours simply to power the heater seems like overkill. Apparently the economics aren't too bad (or else they wouldn't do it), but that could change, and exhaust emissions add to the pressure. So, without going into detail, efforts are underway to provide auxiliary ways of offloading those small tasks, leaving the main engine for the primary task of hauling a load.

So electronics isn't the only domain where power consumption, offloading, and sleep modes are an issue. But especially when it comes to battery-powered devices like smartphones, the situation has been evolving over the last few years in a way that's strikingly similar to the diesel truck example.

One of the things that differentiates smartphones from… not-so-smartphones is the rapidly growing number of sensors available for all kinds of applications. There didn't use to be so many, and they weren't that hard to handle. But as their numbers have exploded and as computing expectations have blown up commensurately, there's been an active push to manage the sensors more efficiently – resulting in the concept of the "sensor hub."

You might think of a sensor hub as something like a microcontroller – and it can be, but it’s not that simple. So let’s back way up to see what the issues are and how we got to where we are. I’m not going to claim outright that this is a historically accurate rendition of what happened when, but it’s a logical flow that at least I can get my head around.

Managing the load

The earliest sensors simply provided data, and it was up to the processor to read that data. You can imagine a polling loop that checks for data at a frequency prescribed by, or negotiated with, the sensor. While such a "busy" loop sounds, well, busy, bear in mind that the frequencies involved are on the order of hundreds of hertz, or perhaps kilohertz – the time between reads is a lifetime to a modern gigahertz-class processor.
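As a minimal sketch of such a loop – the driver calls here are hypothetical, since every vendor's API differs – it might look like this:

```c
#include <stdint.h>

/* Hypothetical driver calls, for illustration only. */
extern int16_t accel_read_axis(int axis);
extern void    delay_ms(uint32_t ms);
extern void    process_sample(int16_t x, int16_t y, int16_t z);

/* Poll the sensor at roughly 100 Hz. The processor spends nearly all
 * of its time waiting; each read takes only microseconds. */
void poll_accel(void)
{
    for (;;) {
        int16_t x = accel_read_axis(0);
        int16_t y = accel_read_axis(1);
        int16_t z = accel_read_axis(2);
        process_sample(x, y, z);
        delay_ms(10);               /* ~100 Hz sample rate */
    }
}
```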

The obvious answer to simplifying this is to provide an interrupt. At the very least, the interrupt can tell the processor that new data is ready so that the processor doesn’t have to time itself to the sensor.
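A hedged sketch of the interrupt-driven version, again with made-up driver names: the ISR does almost nothing, and the main loop consumes data only when the flag says there's something new.

```c
#include <stdint.h>
#include <stdbool.h>

extern void accel_read(int16_t *x, int16_t *y, int16_t *z); /* assumed driver call */

static volatile bool data_ready = false;

/* Wired to the sensor's data-ready pin: set a flag and get out. */
void accel_drdy_isr(void)
{
    data_ready = true;
}

void main_loop(void)
{
    for (;;) {
        if (data_ready) {
            int16_t x, y, z;
            data_ready = false;
            accel_read(&x, &y, &z);
            /* ...use the sample... */
        }
        /* otherwise the processor is free to do other work (or sleep) */
    }
}
```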

At the most rudimentary level, some application – or even the OS – is going to ask for sensor data and then use that data to calculate or do something. Exactly what is entailed by that calculation has been both growing and – as we’ll see – dropping.

The really interesting stuff comes with “sensor fusion” – taking the data from multiple sensors and munging it together to provide either more accurate fused data or higher-level information that isn’t available from any of the sensors individually. And much of this is pretty computationally intensive. Even relatively basic stuff like using an accelerometer to provide tilt compensation for an eCompass involves math that many of us left behind many years ago.
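To give a flavor of that math, here's one common formulation of tilt compensation (sign and axis conventions vary by part and by how it's mounted, so treat this as illustrative rather than definitive): the accelerometer supplies roll and pitch, which are used to rotate the magnetometer reading back into the horizontal plane before computing the heading.

```c
#include <math.h>

/* Tilt-compensated compass heading, in radians. Inputs are accelerometer
 * (ax, ay, az) and magnetometer (mx, my, mz) readings in consistent units.
 * Axis/sign conventions follow one common app-note formulation. */
float tilt_compensated_heading(float ax, float ay, float az,
                               float mx, float my, float mz)
{
    float roll  = atan2f(ay, az);
    float pitch = atan2f(-ax, ay * sinf(roll) + az * cosf(roll));

    /* Rotate the magnetic field vector into the horizontal plane. */
    float xh = mx * cosf(pitch)
             + my * sinf(pitch) * sinf(roll)
             + mz * sinf(pitch) * cosf(roll);
    float yh = my * cosf(roll) - mz * sinf(roll);

    return atan2f(-yh, xh);   /* heading relative to magnetic north */
}
```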

But if you start adding more and more sensors, then your processor has to respond to all of the various interrupts: this starts to chew up bandwidth, especially if the data has to be fused each time. Some of that load is being pushed back into the sensors, however. While any sensor is going to provide access to the raw data, many are providing a level of computation within the sensor (presumably via the co-packaged ASIC).

So, for example, the OS may want to know whether it needs to rotate the screen due to the phone itself being rotated. One way to do that is to monitor the accelerometer data so that, if a particular angle is reached, the OS can rotate the screen. That could involve a lot of monitoring (even if done through interrupts) during the long periods of time where the phone might be stationary.

The alternative is to build that logic into the sensor so that, in addition to a “data ready” interrupt, there’s also a “rotated” or “tilt” event interrupt. If that’s all you care about (unlikely, but for the sake of discussion), then the other interrupts could be disabled or masked or ignored, and only when the event of interest occurs is the processor distracted from whatever else it’s doing. You can do this with many different events, depending on the intended application. For example, Kionix provides “tap” and “double-tap” events, among others, in addition to a “tilt” event.
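In code, masking everything but the event of interest can amount to a single register write. The register map below is entirely hypothetical – every vendor's differs – but it shows the shape:

```c
#include <stdint.h>

/* Hypothetical register map and addresses, for illustration only;
 * consult the actual datasheet for a real part. */
#define ACCEL_I2C_ADDR   0x0F
#define REG_INT_ENABLE   0x1E
#define INT_DATA_READY   (1u << 0)
#define INT_TAP_EVENT    (1u << 2)
#define INT_TILT_EVENT   (1u << 3)

extern void i2c_write_reg(uint8_t dev, uint8_t reg, uint8_t val); /* assumed */

/* Enable only the tilt event: the high-rate data-ready interrupt stays
 * masked, so the processor hears from the sensor only when the
 * orientation actually changes. */
void enable_tilt_only(void)
{
    i2c_write_reg(ACCEL_I2C_ADDR, REG_INT_ENABLE, INT_TILT_EVENT);
}
```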

Such events can be configured to a limited extent by writing to specific registers in the sensor, but users might want to add more intelligence to customize the event of interest. Kionix, for instance, is incorporating state machines into some of their sensors. This gives users more flexibility both in defining the event that causes an interrupt and in the form of the data presented to the processor. In Kionix's case, there are two state machines, and the allowed state trajectories are rather simple (either a single next state or a reset), but it provides a level of offload beyond the factory-defined events.
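As a toy model of that kind of constrained state machine – this is just the structure described above, not Kionix's actual programming model – each state evaluates one condition per sample; passing advances to the single allowed next state, failing resets, and reaching the final state signals the event:

```c
#include <stdint.h>
#include <stdbool.h>

/* One condition per state, evaluated against each new sample. */
typedef bool (*cond_fn)(int16_t x, int16_t y, int16_t z);

/* Advance the machine by one sample. Returns true when the final
 * state is reached -- i.e., the programmed event occurred and an
 * interrupt should be raised. */
bool sm_step(const cond_fn *steps, int n_steps, int *state,
             int16_t x, int16_t y, int16_t z)
{
    if (steps[*state](x, y, z))
        (*state)++;         /* the single allowed next state... */
    else
        *state = 0;         /* ...or a reset */
    return *state == n_steps;
}
```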

Managing the power

These higher-level events help with the bandwidth issue, but relying on the main application processor to handle the interrupts – even if more judiciously selected – has an even bigger impact on power. If the phone's processor must constantly watch the sensors, then it can never go to sleep. It's like the big diesel engine: it sits there idling for hours even though it's handling only minor chores like checking on the sensors – huge overkill and a waste of power. You'd like to let the processor sleep, but if the processor is responsible for deciding whether to wake itself, well, yeah, that doesn't work so well.

The obvious answer is to offload the decisions about when to wake the processor (along with the tasks that don't really require the processor at all). And the most obvious way to do that is with a lower-power microcontroller acting as a "sensor hub" – although SiliconBlue (now part of Lattice) was positioning their mobile-friendly FPGAs for sensor-hub applications (stop laughing – yes, they were, and apparently still are, putting FPGAs into small consumer items).

There are a million microcontrollers out there to pick from, of all shapes and sizes. Some have more interrupt inputs; some have fewer. Most speak I2C and SPI (although there seems to be a decided tilt towards I2C – more on that another time). But now microcontrollers are being announced explicitly as sensor hubs. Atmel, for example, recently announced a sensor hub that incorporates both touch and other sensor functionality (they claim it's the first single-chip solution to include touch). When I Googled "sensor hub" this summer, most of the links pointed to the device that Lapis Semiconductor (part of ROHM) released earlier this year.

So now the less-demanding microcontroller can stay awake and monitor the sensors, waking the main processor only if it matters. It’s like using an auxiliary power unit of some sort in the truck to power the creature comforts while the truck is in sleep mode.
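In firmware terms, the hub's job can be as simple as the following sketch (the helper names are assumptions for illustration): loop in a low-power wait, service sensor events, and toggle a wake line into the application processor only when something merits it.

```c
#include <stdbool.h>

/* Assumed hub-side helpers; names are illustrative. */
extern bool sensors_service_events(void);    /* handle pending sensor interrupts */
extern bool event_needs_app_processor(void);
extern void assert_ap_wake_line(void);       /* GPIO into the application processor */
extern void wait_for_interrupt(void);        /* low-power wait (e.g., WFI) */

/* The inexpensive hub stays "awake" in a low-power loop; the big
 * application processor sleeps until something genuinely matters. */
void sensor_hub_main(void)
{
    for (;;) {
        if (sensors_service_events() && event_needs_app_processor())
            assert_ap_wake_line();
        wait_for_interrupt();   /* doze until the next sensor event */
    }
}
```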

Managing the space

Of course, this adds a component to the design (either another chip on the board or another core in the SoC). I was party to a conversation the other day in which someone asked whether the microcontroller could actually be integrated with the sensor. My initial reaction was that it's nice in theory – if that sensor (or that package with multiple sensors) were the only sensor that had to be managed. Otherwise, if the system needed additional external sensors, you'd still need an external sensor hub to handle those – you'd end up double-hubbing, which is wasteful.

I then noticed a feature that InvenSense provides (I'm not sure they're the only ones, but they help make the point). They have a nine-axis IMU with what they call their Digital Motion Processor (DMP), which does some of the calculation offload that we've been discussing. But they also allow a pressure sensor – which isn't included in the package – to be worked into the calculations. That external pressure sensor isn't connected to the I2C bus that the IMU is using; it's connected directly to the InvenSense IMU on what you might call a private I2C bus. The IMU can then present the pressure data, along with the IMU data, to the processor or sensor hub.
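Schematically, hooking the external sensor to that private bus looks something like the following – the register names and values are placeholders, not the actual InvenSense programming sequence: the host tells the IMU's auxiliary I2C master which device and register to read, and the IMU fetches the data itself.

```c
#include <stdint.h>

extern void imu_write_reg(uint8_t reg, uint8_t val);  /* assumed driver call */

/* Placeholder register names/values -- the real programming sequence
 * lives in the IMU datasheet. */
#define REG_AUX_SLV_ADDR   0x25   /* external device's I2C address (+read bit) */
#define REG_AUX_SLV_REG    0x26   /* which of its registers to read */
#define REG_AUX_SLV_CTRL   0x27   /* enable + number of bytes */
#define PRESSURE_I2C_ADDR  0x77   /* a common pressure-sensor address */

/* Point the IMU's private I2C master at the pressure sensor so the IMU
 * can fetch pressure data itself and fold it into what it presents
 * upstream. */
void attach_pressure_sensor(void)
{
    imu_write_reg(REG_AUX_SLV_ADDR, PRESSURE_I2C_ADDR | 0x80); /* read */
    imu_write_reg(REG_AUX_SLV_REG,  0xF6);      /* placeholder data register */
    imu_write_reg(REG_AUX_SLV_CTRL, 0x80 | 3);  /* enable, read 3 bytes */
}
```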

In this respect, rather than all sensors being more or less “equal,” this IMU starts to act like something of a hub. Given that, you can see it as the logical next step to go all the way to integrating a microcontroller so that one of the sensor chips ends up also being the hub. Instead of the other sensors being connected to the I2C bus going to the processor, they’re connected to a bus going to that uber-sensor, and only the uber-sensor talks to the processor.

Of course, if it's really possible to add value like that, the obvious question becomes, "Which sensor is the one that should get the microcontroller?" Given that most of the integration is happening with motion- and location-related sensors, giving us 10-axis units (which doesn't even count the temperature data they can provide), using the IMU does seem to make the most sense.

The IMU has solidified itself as a permanent member of the smartphone sensor team (including several of the sensors required by Windows 8). Beyond those and a few other required sensors, there's much discussion of other possible additions: things like humidity, radiation, or substance detection (think breathalyzer in the phone, for example).

Given an IMU that also acts as a hub, it becomes easy to tack on other sensors – as long as that hub functionality is broad enough to allow arbitrary code to run. The current hub-like capabilities are much more specific than that (accommodating, for example, a pressure sensor specifically, not just any random sensor).

Having sorted through that, I went back to another recent sensor hub announcement, this one from STMicroelectronics. When I first looked at it, my focus was entirely on the "hub" part and the fact that it could manage various ST sensors as well as ones they didn't make. What had escaped my notice was that those ST sensors were included with the hub. And there it was: a bunch of sensors and a microcontroller integrated into a single package.*

The other thing that occurs to me is that cursory discussions of sensors might deal with them in the abstract, suggesting that configurations need to allow for any random combination of sensors. But with the rapid adoption of sensors into high-volume consumer items like phones and tablets, it starts to make sense to create targeted configurations. Indeed, the ST device is aimed squarely at tablets running Windows 8. Whether or not it can handle some obscure sensor that's unlikely to show up in a tablet is an entirely academic discussion.

And that brings us to the present: from a sensor or two talking directly to the application processor, to a separate hub as interlocutor, to a sensor/hub combo – with processing distributed among the sensor ASIC, the hub, and the main processor. I'd expect to see plenty more pushing and pulling among competitors as these configurations continue to evolve.

But at the very least, you should expect to see fewer and fewer trucks idling at the truck stop.

 

 

*Yeah, I feel sorta like a dork now at my reaction to that conversation. Teachable moment: just shut up and listen.

 

More info on companies mentioned:

Atmel sensor hub

InvenSense

Kionix

Lapis sensor hub

STMicroelectronics sensor/hub

