We’ve all seen some of the crappy pictures that cell phones have allowed us to take and distribute around the world at lightning speed. (Is there such a concept as “photo spam” – legions of crappy pictures that crowd out the few actually good ones?)
Now… let’s be clear: much of the crappiness comes courtesy of the camera operator (or the state of inebriation of the operator). But even attempts at good composition and topics of true interest can yield a photo that still feels crappy.
Part of the remaining crappiness is a function of resolution: phone cameras have traditionally had lower resolution than digital SLRs. So we up the resolution. And, frankly, phone resolution is now up where the early digital SLRs were, so the numbers game is constantly shifting as we pack more pixels into less space on our imaging chips.
But that comes with a cost: smaller pixels capture less light, simply because fewer photons impinge on each one. So higher-res chips don’t perform as well in low-light situations. (Plus, they traditionally cost more – not a good thing in a phone.)
There is an alternative called Super Resolution (SR), however, and to me it’s reminiscent of the concept of dithering. I also find the name somewhat misleading: it doesn’t involve a super-high-res camera; rather, it takes several low-res images and does some mathematical magic to combine them into a single image with higher resolution than the originals – four times the resolution, say. It’s part of the wave of computational photography that seems to be sweeping through these days.
The way it works is that the camera takes several pictures in a row. Each needs to be slightly shifted from the others. In other words, if you take a static subject (a bowl of fruits and flowers) and put the camera on a tripod, this isn’t really going to help. One challenge is that, with too much shifting, you can get “ghosting” – if a hand moves between shots, for example, you might see a ghostly-looking hand smeared in the combined version.
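To make the shift-and-combine idea concrete, here’s a toy sketch (in Python, and definitely not CEVA’s algorithm): it assumes four low-res frames whose sub-pixel offsets are already known exactly, and simply interleaves their samples onto a grid with twice the resolution in each direction. In a real camera the offsets come from hand shake and have to be estimated first – which is where most of the mathematical magic actually lives.

```python
# Minimal shift-and-add super resolution sketch (illustration only).
# Assumes each low-res frame is displaced by a known half-pixel offset.
import numpy as np

def shift_and_add_2x(frames, offsets):
    """Fuse low-res frames into an image with 2x resolution in each axis.

    frames  : list of 2-D arrays, all the same (low-res) shape
    offsets : list of (dy, dx) tuples in high-res pixel units (0 or 1 here),
              i.e. each frame's sub-pixel displacement
    """
    h, w = frames[0].shape
    hi = np.zeros((2 * h, 2 * w))
    weight = np.zeros_like(hi)

    for frame, (dy, dx) in zip(frames, offsets):
        # Each low-res sample lands on every other high-res pixel,
        # displaced by that frame's sub-pixel offset.
        hi[dy::2, dx::2] += frame
        weight[dy::2, dx::2] += 1

    # Average where more than one frame contributed; avoid divide-by-zero
    # where no frame landed at all.
    weight[weight == 0] = 1
    return hi / weight

# Example: four frames shifted by half a low-res pixel in each direction
# give exactly one sample per high-res pixel -- the "four times the
# resolution" case mentioned above.
rng = np.random.default_rng(0)
scene = rng.random((8, 8))          # stand-in for an 8x8 capture
frames = [scene for _ in range(4)]  # a real burst would differ slightly
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(shift_and_add_2x(frames, offsets).shape)  # (16, 16)
```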
It’s been available as a post-processing thing on computers for a while, but the idea now is to make it a native part of cameras – and cameraphones in particular. Which is good, since I can’t remember the last time I saw someone taking a still life shot with a phone on a tripod. (Besides… fruits don’t do duckface well.)
In this case, the slight shaking of the hand holding the phone may provide just the movement needed to make this work. But, of course, you need the algorithms resident in the phone. Which is why CEVA has announced that it has written SR code for its MM3101 vision-oriented DSP platform. They claim that this is the world’s first implementation of SR technology for low-power mobile devices.
Their implementation allows this to work in “a fraction of a second,” meaning that it could become the default mode for a camera – this could happen completely transparently to the user. They also claim that they’ve implemented “ghost removal” to avoid ghosting problems (making it less likely that the user would want to shut the feature off… although for action shots? Hmmm…).
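The release doesn’t say how their ghost removal actually works, but a generic version of the idea looks something like the hypothetical sketch below: compare each frame against a reference frame and let only the pixels that haven’t moved contribute to the blend, so a hand that wandered between shots is simply left out rather than smeared in. (For simplicity this averages at the original resolution instead of fusing onto a finer grid.)

```python
# Generic ghost-rejection illustration -- not CEVA's actual method.
# Pixels that differ too much from the reference frame (i.e. something
# moved) are excluded from the blend.
import numpy as np

def fuse_with_ghost_mask(frames, threshold=0.1):
    """Average a burst of frames, skipping pixels that moved between shots."""
    ref = frames[0].astype(float)
    acc = np.zeros_like(ref)
    count = np.zeros_like(ref)
    for frame in frames:
        frame = frame.astype(float)
        still = np.abs(frame - ref) < threshold  # True where nothing moved
        acc += np.where(still, frame, 0.0)
        count += still
    # count is at least 1 everywhere, because the reference always agrees
    # with itself; moving objects just contribute fewer frames.
    return acc / count
```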
You can get more detail in their release.