As a recent writeup in ExtremeTech rightly notes, the $25-35 price point of the Raspberry Pi board, in combination with its impressive capabilities at that cost:

- a Broadcom BCM2835 SoC, which includes
  - an ARM1176JZF-S 700 MHz processor, overclockable to 1 GHz,
  - a VideoCore IV GPU, and
  - either 256 MBytes or 512 MBytes of DRAM

has made it quite popular with hobbyists around the world over its short life to date.
But the Raspberry Pi's imaging capabilities have only recently begun to mature. Hardware-accelerated decoding of various image codecs was joined by hardware-accelerated encoding just three months ago, for example. And building on that software achievement, just a few days ago the Raspberry Pi Foundation publicly demonstrated a prototype of an upcoming 5 Mpixel camera module (approximately £16, or $25), which leverages the CSI pins on the Pi board rather than the USB port harnessed in prior imaging experiments.
At present, the primary focus (pun intended) of the camera add-in board is still-image and video capture. But does the Raspberry Pi have enough horsepower to also handle more meaningful embedded vision functions? I asked BDTI senior software engineer and embedded vision and robotics specialist Eric Gregori that question yesterday, and here's how he responded:
The answer depends on how you define vision processing. I can do color blob tracking and edge-based navigation on a 350 MHz ARM9 (i.e., the Chumby). For the hobbyist market, this meets about 50% of the functional requirements.
On a 600 MHz ARM11 CPU I can do CamShift-based tracking, which gets me another 25%. Finally, with some heavy algorithm modifications, I was able to get face detection working usefully on an 800 MHz ARM Cortex-A8 CPU. Using the LBP cascade instead of the Haar cascade, I think I can make face detection work reasonably well on the Raspberry Pi.
So although you cannot do bleeding-edge (or even semi-sharp-edge) computer vision on the Raspberry Pi, you can do a lot of things considered useful by the hobbyist community.
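To give a sense of how lightweight the simplest of these techniques is, here is a minimal, illustrative sketch of color blob tracking: threshold a frame against a target color range and compute the centroid of the matching pixels. This is not Gregori's implementation; hobbyist code would typically use OpenCV (`cv2.inRange` plus `cv2.moments`) on real camera frames, but the core idea fits in a few lines of plain Python, which helps explain why it runs comfortably on a 350 MHz ARM9.

```python
def track_blob(frame, lo, hi):
    """Return the (x, y) centroid of pixels inside a color range.

    frame: list of rows, each row a list of (r, g, b) tuples
    lo, hi: per-channel inclusive lower/upper bounds
    Returns None if no pixel matches.
    """
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # Simple per-channel threshold test (the "blob" mask)
            if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    # Centroid of the mask: the blob's estimated position
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A tiny synthetic 3x3 frame with a "red blob" in the right-hand column
frame = [
    [(0, 0, 0), (0, 0, 0), (255, 0, 0)],
    [(0, 0, 0), (0, 0, 0), (255, 0, 0)],
    [(0, 0, 0), (0, 0, 0), (0, 0, 0)],
]
print(track_blob(frame, (200, 0, 0), (255, 50, 50)))  # -> (2.0, 0.5)
```

Tracking then amounts to repeating this per frame and steering toward the centroid; per-pixel thresholding plus a running sum is cheap enough for modest ARM cores, which is what makes it a staple of hobbyist robotics.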