I have been exposed to two navigational extremes over the last month or so. These aren’t specifically competing approaches (although I suppose they could be), but rather represent navigation with a minimal set of sensors and with a full complement of assistance.
On the more minimal side, Movea put together a demo for CES that led me on a pedestrian voyage, courtesy of the guidance of a cell phone. The phone had 10 sensor axes (a 3-axis accelerometer, gyroscope, and magnetometer, plus a pressure sensor). They had also mapped out the hotel they were in based on blueprints they had obtained. (That must have been a fun one for security to vet…)
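Movea didn't disclose their algorithm, but as a rough idea of how those 10 axes tend to get used for pedestrian dead reckoning, here's a minimal sketch; the stride length, pressure-per-floor constant, and input names are my own assumptions, not anything Movea described:

```python
import math

# Hypothetical pedestrian dead-reckoning update (not Movea's algorithm).
# step_detected, heading_rad, and pressure_hpa would come from the
# accelerometer, gyro/magnetometer fusion, and pressure sensor respectively.

STRIDE_M = 0.7          # assumed average stride length
HPA_PER_FLOOR = 0.45    # rough pressure drop per floor of elevation

def update_position(x, y, floor, step_detected, heading_rad,
                    pressure_hpa, ref_pressure_hpa):
    """Advance an (x, y, floor) estimate by one sensor epoch."""
    if step_detected:                             # from accelerometer peak detection
        x += STRIDE_M * math.cos(heading_rad)     # heading from gyro + magnetometer
        y += STRIDE_M * math.sin(heading_rad)
    # Pressure gives relative altitude, i.e., which floor we're probably on.
    floor = round((ref_pressure_hpa - pressure_hpa) / HPA_PER_FLOOR)
    return x, y, floor
```

Each step nudges the estimate along the current heading, so any heading or stride error accumulates over time, which is exactly why the map and checkpoints below matter.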
The idea was that we’d go from near the entrance of the building to the elevator, up to the right floor (OK, the phone didn’t try to push elevator buttons…), and then continue on to the room. We used the phone as a guide or orienting device, holding it out in front as it showed us the way.
The sensor results and map mostly worked together to factor out errors, although there appeared to be a couple of “checkpoints” where the phone “viewed” a poster or image (I frankly don’t remember what the specific icon was). Such a checkpoint, if accurately placed in the map database, could zero out accumulated errors and give the sensors a fresh start.
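For illustration only, the correction itself can be as simple as the following sketch; the landmark names and coordinates are made up, and the point is just that recognizing a surveyed landmark lets you throw away whatever error has accumulated:

```python
# Hypothetical checkpoint correction: when the phone recognizes a landmark
# whose position is recorded in the map database, the drifting estimate is
# simply snapped back to the surveyed coordinates.
CHECKPOINTS = {                       # landmark id -> surveyed (x, y, floor); made-up values
    "lobby_poster":  (12.0, 3.5, 0),
    "elevator_sign": (40.2, 7.1, 0),
}

def apply_checkpoint(estimate, landmark_id):
    """Replace the current (x, y, floor) estimate if a known landmark is seen."""
    if landmark_id in CHECKPOINTS:
        return CHECKPOINTS[landmark_id]   # accumulated dead-reckoning error is discarded here
    return estimate
```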
If the TV had been on and properly set when we entered the room, then the phone would have automatically coupled with the TV to provide a welcome message or something.
The trip wasn’t without incident; the route was rife with magnetic anomalies (like inside the elevator), but, for an early demonstration, we made it through on this minimum of information.
The other extreme is a chip from CSR called SiRFstarV. It can work with a broad set of inputs to provide navigation. Its focus appears to be satellite navigation, including GPS and GLONASS as well as other GNSS constellations; satellite augmentation, which appears to me to be a side system that sends what I would call metadata between satellites to improve the quality of the calculation; and “extended ephemeris,” the ability to download ephemeris (satellite orbit) data for dates as much as a month out.
But they also handle IMU and pressure sensor inputs as well as cellular and WiFi signals for triangulation, and they have a cloud-based CSR Positioning Center from which the device can obtain other information to assist in determining position.
The idea here is also to allow constant navigation, indoors and out, in open terrain and surrounded by tall buildings, relying on every possible source of data, all implemented in an SoC.
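As a very loose illustration of that blending idea (this is not CSR’s algorithm; a real receiver does something far more sophisticated, Kalman-filter style), here’s a sketch that combines position fixes from several sources by weighting each one inversely to its reported uncertainty:

```python
# Hypothetical multi-source blend: GNSS, Wi-Fi, cellular, and dead-reckoning
# fixes are averaged, with tighter (lower-sigma) fixes counting for more.
def fuse_fixes(fixes):
    """fixes: list of (x, y, sigma_m) tuples; returns the variance-weighted mean."""
    wx = wy = wsum = 0.0
    for x, y, sigma in fixes:
        w = 1.0 / (sigma * sigma)   # weight = inverse variance
        wx += w * x
        wy += w * y
        wsum += w
    return wx / wsum, wy / wsum

# e.g., a 10 m GNSS fix in an urban canyon blended with a 25 m Wi-Fi fix:
print(fuse_fixes([(100.0, 200.0, 10.0), (112.0, 195.0, 25.0)]))
```

The appeal of doing this in one SoC is that whichever sources happen to be available at the moment, indoors or out, can be folded into the same estimate.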
Part of the reason you can’t directly compare these two examples as competing is that the Movea demo was specifically about indoor navigation, so the GNSS data simply doesn’t apply. It highlights the challenges and the progress in trying to exploit and augment the IMUs so many of us already own.
Indoor and pedestrian navigation are getting their fair share of development effort these days, as numerous companies (and certainly more than the two just mentioned) tune algorithms in different ways to optimize cost, power, and flexibility.
Another recent conversation further illustrated some of the nuances of IMU-based navigation; I’ll talk about that in a future post or two.
You can find out more about Movea on their site and about the SiRFstarV on the CSR site.