I'm currently working on a technical article on image sensors, which is out for review in first-draft form. Perhaps obviously, one key focus area deals with the respective strengths and shortcomings of CCDs and CMOS sensors. CCDs implement a 'global shutter' function, wherein all pixels' photodiodes and other associated circuitry terminate photon accumulation at the same point in time, with charge values subsequently read out of the device serially and converted from the analog to the digital domain via external circuitry.
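To make that timing behavior concrete, here's a minimal toy model in Python (my own illustrative sketch, with made-up names like `scene` and `global_shutter`; no vendor API is implied): every row samples the scene at the same instant, and the serial, row-by-row readout happens only afterward.

```python
import numpy as np

ROWS, COLS = 8, 8

def scene(t):
    """Toy scene: a bright vertical bar that moves one column per time unit."""
    img = np.zeros((ROWS, COLS))
    img[:, int(t) % COLS] = 1.0
    return img

def global_shutter(t0):
    # All pixels stop accumulating photons at the same instant t0...
    frame = scene(t0)
    # ...then charge is shifted out of the array serially, row by row,
    # and A/D-converted by circuitry outside the pixel array.
    readout = [frame[r].copy() for r in range(ROWS)]
    return np.vstack(readout)

print(global_shutter(3.0))  # the bar lands in one straight column: no skew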
While 'global shutter' CMOS sensors have been demonstrated in prototype form, the 'rolling shutter' alternative architecture is dominant (if not universal) in production devices. Each pixel row's worth of data sequentially transfers to a buffer register prior to pixel line reset; the buffered (and subsequently A/D-converted) information then passes to the system processor via external I/O cycles. The advantage of this line-scan approach is reduced silicon overhead versus the 'global shutter' alternative, which would require an incremental multi-transistor, light-shielded circuit structure within each pixel to store the accumulated-photon value. The rolling shutter downside, on the other hand, is that different regions of the sensor capture the image at different points in time, leading to objectionable distortion artifacts with fast-moving subjects; the companion sketch below shows the resulting skew.
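Here's the rolling-shutter counterpart to the sketch above (same toy scene, same caveat that all names are illustrative): because row r is sampled one line time later than row r-1, a horizontally moving subject smears into a diagonal.

```python
import numpy as np

ROWS, COLS = 8, 8

def scene(t):
    """Same toy scene: a bright vertical bar moving one column per time unit."""
    img = np.zeros((ROWS, COLS))
    img[:, int(t) % COLS] = 1.0
    return img

def rolling_shutter(t0, line_time=1.0):
    frame = np.zeros((ROWS, COLS))
    for r in range(ROWS):
        # Row r is exposed, buffered, A/D-converted, and reset one line
        # time after row r-1, so it sees the scene at a later instant.
        frame[r] = scene(t0 + r * line_time)[r]
    return frame

print(rolling_shutter(0.0))  # the moving bar renders as a diagonal: skew
```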
The Wikipedia entry that I link to in the above paragraph gives some good image examples of 'rolling shutter' still image artifacts (which embedded vision implementers will need to comprehend and correct for). And with respect to video, I'll pass along an interesting clip that I came across, wherein a musician placed a rolling shutter CMOS sensor-equipped iPhone inside his guitar and captured footage of himself strumming the instrument's strings:
One other conceptual correlation I make in my to-be-published writeup is between an image sensor and the human eye's retina; I suppose one could even extend the analogy to encompass the brain as image processor, and the optic nerve as the I/O path between sensor and SoC. I mention that, at minimum, camera manufacturers often insert a filter between the lens and image sensor to counteract the latter's sensitivity to infrared light (a filter which infrared photography fans subsequently strive to remove or otherwise circumvent).
After finalizing the first draft of my writeup yesterday afternoon, I perused some back issues of Discover and Scientific American magazines last night before drifting off to sleep. In the August issue of the latter publication, I found an interesting writeup titled 'A Skill Better Than Rudolph's'. From the piece:
Being able to see UV light confers some rich benefits on the reindeer. Its primary winter food source, lichens, and the fur of its main predator, the wolf, both absorb UV light, which makes them stand out against the UV-reflecting snowy landscape. UV vision actually has deep roots in the mammalian family tree: hundreds of millions of years ago early mammals had a short-wave-sensitive visual receptor, called SWS1, that could detect UV rays. That sensitivity is thought to have shifted toward longer waves—away from short UV wavelengths—because mammals were mainly nocturnal and UV vision was of little use to them at night. This shared ancestral UV sensitivity may explain why a small yet diverse set of mammals has regained the ability to see UV light.
Conversely, during my article research, I'd previously noted this section of the Wikipedia entry for retinal cone cells:
The S cones [editor note: S=short, responding most to short-wavelength light of a bluish color] are most sensitive to light at wavelengths around 420 nm. However, the lens and cornea of the human eye are increasingly absorptive to shorter wavelengths, and this sets the lower wavelength limit of human-visible light to approximately 380 nm; light below that threshold is therefore called 'ultraviolet'. People with aphakia, a condition where the eye lacks a lens, sometimes report the ability to see into the ultraviolet range.
More generally, as I suspect some readers have already figured out, I correlate a 'bare' panchromatic image sensor with the retina's rod cells, while drawing an analogy between cones and a sensor that's been augmented by a Bayer- or otherwise-patterned color filter array (sketched below), or alternatively one implementing Foveon-championed X3 multi-photodiode-per-pixel techniques. As such, I thought you might enjoy another intriguing Scientific American article that I came across last night, this one from the July issue and entitled 'Evolution Of The Eye'. Alas, the full reprint is only available by subscription, or alternatively by buying a digital copy of the July edition.
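For those curious about what a Bayer color filter array actually does to the captured data, here's a minimal sketch assuming the common RGGB tiling (the function name and layout constants are mine, purely for illustration): each photosite records only the one color component its filter passes.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Keep only the component each photosite's filter passes (RGGB tiling)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red filters: even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green filters: even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green filters: odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue filters: odd rows, odd columns
    return mosaic

# Stand-in for a 4x4 full-color scene; a real pipeline would next
# interpolate ("demosaic") the two missing components at every pixel.
rgb = np.random.rand(4, 4, 3)
print(bayer_mosaic(rgb))
```

Note the two-greens-per-tile ratio, loosely echoing the eye's stronger medium-wavelength sensitivity; a Foveon X3-style sensor instead stacks three photodiodes at every pixel location and needs no mosaic (or demosaicing) at all.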
Happy reading!