Photography (Level 4)

Bayer Array and Sampled Yosemite Image
The array of colors on the left part of this image represents a Bayer filter array (named after Bryce Bayer, the Kodak scientist who developed this particular arrangement of colors). This pattern of filters is placed on top of a black-and-white image sensor so that each element of the sensor responds to either red, green, or blue light. Notice that there are more green elements than red or blue ones; this reflects the human eye's greater sensitivity to fine detail in the green region of the light spectrum. The middle panel shows a scene in Yosemite National Park, and the rightmost panel shows how that scene would be sampled by a typical digital camera with individual red, green, and blue pixels (or picture elements). A great deal of computer processing then converts these raw detected images into the pictures you enjoy viewing.

How Do Digital Cameras Detect Colors?

The "heart" of a digital camera (or perhaps more accurately, its "retina") is a sensor array made of silicon. These sensors have small individual detectors (usually several million on a single sensor) that respond to the light falling on them: the energy in the light generates a small electrical current in the detector at that particular location. That current is measured electronically and converted to digital values that represent the amount of light detected. Those digital values, indexed to their locations on the image, provide the information needed to draw an image on a computer monitor or printer. The sensor itself, however, responds to the whole visible spectrum of light and to some infrared energy as well, so on its own it can only produce black-and-white images. To produce color images, the camera must detect and discriminate the different color regions of the spectrum separately, much as our eyes do. The first step is to place an infrared filter in front of the sensor to remove the energy that we cannot see at all.
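The conversion of a detected light level into a digital value can be sketched in a few lines of Python (a simplified illustration with made-up parameters; real cameras use 10- to 14-bit analog-to-digital converters and far more elaborate readout electronics):

```python
# Hypothetical sketch of digitizing a sensor reading: the analog light
# level (scaled 0..full_scale) is clipped and mapped onto a fixed number
# of integer steps.  An 8-bit converter has 256 levels (0..255).

def digitize(signal, full_scale=1.0, bits=8):
    """Convert an analog light level into an integer digital value."""
    levels = (1 << bits) - 1                 # 255 for 8 bits
    clipped = max(0.0, min(signal, full_scale))
    return round(clipped / full_scale * levels)

readings = [0.0, 0.25, 0.5, 1.0]             # made-up light levels
print([digitize(r) for r in readings])       # [0, 64, 128, 255]
```

Each pixel of the raw image is simply one such number, recorded along with its position on the sensor.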

The next step is to figure out how to separately detect red, green, and blue images in order to have all the information needed to create the different colors we can see. One way to do this is to use three image sensors and put a red, green, or blue filter in front of each of the three respectively. This gives us the needed red, green, and blue images, but it also makes the cameras very bulky and expensive because three image sensors are required. Instead, most cameras use a filter array as illustrated in the above picture. The filter array results in a single image sensor with some pixels that respond to each of the three primary colors: red, green, and blue. Since we really want red, green, and blue information at every location in the image, fairly complicated computer processing converts the detected image (with the filter-array pattern superimposed) into a single full-color image. This process is known as demosaicking, since the filter array can be considered a mosaic of colors. A great deal of other processing also takes place before we see the images, adjusting the color, exposure, contrast, sharpness, noise levels, and other image attributes.
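To make demosaicking concrete, here is a deliberately simplified Python sketch of my own devising (not the algorithm any real camera uses): each 2x2 cell of an RGGB Bayer mosaic is collapsed to one red value, the average of its two green values, and one blue value, and that color is assigned to all four pixels in the cell. Real demosaicking interpolates far more carefully to preserve fine detail.

```python
# Toy demosaicking sketch (a simplification for illustration only).
# The mosaic is assumed to have an RGGB layout:
#   R G        even rows:  R G R G ...
#   G B        odd rows:   G B G B ...

def demosaic_rggb(mosaic):
    """mosaic: 2D list of raw values in RGGB layout (even width/height);
    returns a 2D list of (r, g, b) triples of the same size."""
    h, w = len(mosaic), len(mosaic[0])
    rgb = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2
            b = mosaic[y + 1][x + 1]
            for dy in (0, 1):                 # fill the whole 2x2 cell
                for dx in (0, 1):
                    rgb[y + dy][x + dx] = (r, g, b)
    return rgb

# Made-up raw sensor values for a tiny 4x4 mosaic:
raw = [[200,  60, 200,  60],
       [ 60,  30,  60,  30],
       [200,  60, 200,  60],
       [ 60,  30,  60,  30]]
print(demosaic_rggb(raw)[0][0])   # (200, 60.0, 30)
```

The key idea this sketch shares with real demosaicking is that every output pixel gets a full (r, g, b) triple even though the sensor measured only one color at each location.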

Finally, the combination of the image sensor, the color filter array, and the computer processing results in a set of three images: one represents the red information in the scene, one the green information, and the third the blue information. These can be combined on monitors or printers to give us the beautiful full-color images we are used to seeing when we simply push a button. In principle this is the same three-color process that Scottish scientist James Clerk Maxwell demonstrated in 1861, for which he is credited with inventing color photography (in reality he was trying to show that the human visual system detects colors by separating the information into just three images corresponding roughly to red, green, and blue).
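The final combination step, merging the three single-channel images pixel by pixel into one full-color image, can be sketched like this (the tiny 2x2 images are made-up values purely for illustration):

```python
# Sketch of combining separate red, green, and blue images into one
# full-color image: the value at each position in each channel becomes
# one component of that pixel's (r, g, b) triple.

def combine_channels(red, green, blue):
    """Merge three same-sized 2D grayscale images into one 2D image
    of (r, g, b) triples."""
    return [[(r, g, b) for r, g, b in zip(rr, gr, br)]
            for rr, gr, br in zip(red, green, blue)]

red   = [[255,   0], [128,  64]]
green = [[  0, 255], [128,  64]]
blue  = [[  0,   0], [128, 255]]
print(combine_channels(red, green, blue))
# [[(255, 0, 0), (0, 255, 0)], [(128, 128, 128), (64, 64, 255)]]
```

This is the digital analogue of projecting Maxwell's three filtered photographs on top of one another.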




Updated: May 26, 2010