An Idiot, a digital camera, and a PC

My name is Seung Chan Lim, a clueless noob trying to make sense of the subject of computational photography by re-teaching the material I learned in class....

Taking Advantage of the Human Eye's Deficiencies for Fun and Profit

posted by dJsLiM on Friday, September 16, 2005

Now that we know how it is that we're able to see what we see, let's talk about how the digital camera records the same information.

Instead of rods and cones, the digital cameras you see on the market today contain one of two kinds of sensors: CMOS sensors or CCD sensors. These sensors collect light information much like film does, but since we're in the digital realm, the information gets collected at discrete points on the sensor array, not in a continuous manner as it is with film. These points are called pixels, and the more pixels you have, the more light information you can gather at finer points along the sensor array. That's why you see those flashing advertisements about how the new 5 megapixel cameras are sooooo much better than the 3 megapixel ones from yesteryear. (*SIGH* mine only has 2 megapixels...)
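Just to put rough numbers on those megapixel counts, here's a quick sanity check. (The width-by-height resolutions below are typical examples I picked for illustration, not the specs of any particular camera.)

# Rough megapixel math for a few typical sensor resolutions.
resolutions = {
    "2 MP-class": (1600, 1200),
    "3 MP-class": (2048, 1536),
    "5 MP-class": (2592, 1944),
}

for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name}: {width}x{height} = {pixels:,} pixels (~{pixels / 1e6:.1f} MP)")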

Now, if you remember from our previous discussion, there are two variables we care about when it comes to recording light: the number of photons and their respective wavelengths. To make our lives a bit easier, since we're only sensitive to light that resides between the blue and red ends of the electromagnetic spectrum, we actually only care about photons whose wavelengths fall within those boundaries. So the goal of a digital camera boils down to recording, at every pixel, the number of photons and their wavelengths, for light within the visible spectrum.
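If you want to picture that in code, here's a toy model I cooked up. (The photon counts and the Gaussian sensitivity curve are completely made up; the point is just the idea of counting photons over the visible band, weighted by how sensitive the sensor is at each wavelength.)

import numpy as np

# Toy model of a single pixel's response: sum photon counts over the
# visible band (~400-700 nm), weighted by a made-up sensitivity curve.
wavelengths = np.arange(400, 701, 10)                # nm, visible spectrum
photons = np.random.poisson(1000, wavelengths.size)  # fake photon counts per band

# Hypothetical sensitivity peaking around 550 nm (green-ish),
# purely to illustrate the weighting idea.
sensitivity = np.exp(-((wavelengths - 550) / 80.0) ** 2)

pixel_value = np.sum(photons * sensitivity)
print(f"raw pixel response: {pixel_value:.0f}")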

If we were to model the digital camera after the human eye, which has 3 types of cones responsible for red, green and blue respectively, we'd use 3 sensor arrays to store the 3 different ranges of the color spectrum separately. The problem with that approach is that it requires a lot of sensors. In reality, what you'll find in most consumer digital cameras is a single-chip design that contains just one array of sensors. So we're obviously going to have to make some trade-offs here. To store all three channels of color information on a single array of sensors, we use a technique called the Bayer filter.
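Here's what that mosaic layout looks like as a sketch. (The RGGB arrangement below is the classic Bayer pattern; the helper function itself is just my illustration.)

import numpy as np

def bayer_pattern(height, width):
    """Return an array of color labels laid out in the classic RGGB
    Bayer mosaic: each 2x2 block has one red, two greens, one blue."""
    pattern = np.empty((height, width), dtype="U1")
    pattern[0::2, 0::2] = "R"   # even rows, even cols
    pattern[0::2, 1::2] = "G"   # even rows, odd cols
    pattern[1::2, 0::2] = "G"   # odd rows, even cols
    pattern[1::2, 1::2] = "B"   # odd rows, odd cols
    return pattern

print(bayer_pattern(4, 4))

Run it and you'll see that every 2x2 block has two greens but only one red and one blue, which is exactly the point I'm about to make below.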

The Bayer filter encodes the 3 kinds of color information on a single grid of pixels in such a way that a decent reconstruction algorithm can produce an acceptable replica of the real scene being captured. Notice how each pixel only encodes information from a single color channel. (Don't you feel robbed now that you realize the 5 megapixel camera you own really isn't a 5 megapixel camera, after all? =P) You might have also noticed that there are more pixels dedicated to green than to the other two colors. The idea is that since we're most sensitive to green, we're better able to deduce the brightness of the overall image through the particular range of colors available in the green spectrum. These feeble machines are just trying to piggyback on our brains, you see. ;)
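And here's a very naive sketch of what such a reconstruction might look like. (This is plain bilinear averaging, assuming the bayer_pattern() helper from the previous sketch; real demosaicing algorithms are far cleverer about edges and color artifacts.)

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw, pattern):
    """Naive demosaicing: for each color channel, average whatever
    samples of that color sit in each pixel's 3x3 neighborhood.
    (Every 3x3 window of a Bayer mosaic contains at least one sample
    of each color, so we never divide by zero.)"""
    kernel = np.ones((3, 3))
    rgb = np.zeros(raw.shape + (3,))
    for i, color in enumerate("RGB"):
        mask = (pattern == color).astype(float)
        summed = convolve(raw * mask, kernel, mode="mirror")
        counts = convolve(mask, kernel, mode="mirror")
        rgb[..., i] = summed / counts
    return rgb

# e.g. with the bayer_pattern() helper from before:
#   pattern = bayer_pattern(4, 4)
#   raw = np.random.randint(0, 256, (4, 4)).astype(float)
#   print(demosaic_bilinear(raw, pattern).shape)   # (4, 4, 3)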

So your digital camera, apart from the crappy single-chip design, sounds like a damn fine piece of machinery capable of recording all the light information we need, eh? Well, not even close. The main limitation of your digital camera is that it is completely incapable of capturing the vast range of light intensities found in the real world.

You see, the range of light intensity a digital camera can record at any given pixel goes from 0 to 255. That's a mighty small range compared to the high dynamic range found in the real world. So what happens is that once the light collected at a certain pixel exceeds whatever maps to 255, your digital camera considers it to be as saturated as it can be. Try taking a picture outside on a sunny day with a long exposure, and your camera will end up gathering so many photons per pixel that your image will probably be mostly white. You could simply shorten the exposure, but then you wouldn't have had ample time to collect enough photons at certain pixels, leaving parts of the image way too dark. So whatever mapping the digital camera does to take the actual number of photons collected at a given pixel and turn it into a value between 0 and 255 is going to be a major limiting factor in producing the same image as perceived by our eyes.
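Here's a tiny toy simulation of that trade-off. (The "photon counts" in the scene are made up, and the 0-to-255 clamp is standing in for whatever mapping the camera really does internally.)

import numpy as np

# Toy illustration of saturation: real-world intensities span a huge
# range, but the camera clamps whatever it collects into 0..255.
scene = np.array([5, 80, 300, 1200, 50000], dtype=float)  # fake photon counts

def capture(scene, exposure):
    """Scale by exposure time, then clip and quantize to 8 bits."""
    return np.clip(scene * exposure, 0, 255).astype(np.uint8)

print(capture(scene, exposure=1.0))   # long exposure: bright stuff pegs at 255
print(capture(scene, exposure=0.01))  # short exposure: shadows crush to 0

At the long exposure everything bright saturates at 255; at the short one the highlights survive, but the dim values round down to 0. Either way, you lose part of the scene.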

As much as you now see how crappy your digital camera really is, that isn't to say that our eyes are leaps and bounds better. While it's true that our eyes have a much wider range, it still isn't enough to cover the entire range found in the real world. So our eyes have to depend on some sort of remapping as well. We can take a look at the Cornsweet illusion to illustrate this phenomenon. If our eyes were able to accurately record the true intensity values of light, then we'd have no problem perceiving the absolute brightness of the image at any given position. But, as you can see, we can't. =P

Then again.... As it is with other things in life, perhaps too much of anything isn't the best thing to have. Heck, if it weren't for the limitations of our eyes, the whole Impressionist movement of the 1870s wouldn't even have existed, and that, in my humble opinion, would have put quite a damper on the onslaught of very interesting non-photo-realistic art forms to come. ;)
