### RGB-to-grayscale projection

#### Short tutorial

Humans perceive color through wavelength-sensitive sensory cells called cones. There are three types of cones, each with a different sensitivity to light of different wavelengths: one type is mainly sensitive to red light, one to green light, and one to blue light. By emitting a controlled combination of these three basic colors (red, green, and blue), and hence stimulating the three types of cones at will, we can generate almost any perceivable color. This is why color images are often stored as three separate image matrices: one storing the amount of red (R) in each pixel, one the amount of green (G), and one the amount of blue (B). We say that such color images are stored in the RGB format.

In grayscale images, however, we do not differentiate between the colors; we emit the same amount of light in each channel. What we can vary is the total amount of emitted light per pixel: little light gives dark pixels, and much light gives bright pixels.

When converting an RGB image to grayscale, we take the RGB values of each pixel and produce a single output value reflecting the brightness of that pixel. One approach is to average the contributions from each channel: (R+G+B)/3. However, since the perceived brightness is often dominated by the green component, a different, more "human-oriented" method is to take a weighted average, e.g.: 0.3R + 0.59G + 0.11B.
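Both averages are just weighted sums over the three channels. A minimal NumPy sketch (the function name and example values are ours, chosen for illustration):

```python
import numpy as np

def to_grayscale(rgb, weights=(0.3, 0.59, 0.11)):
    """Project an H x W x 3 RGB image to an H x W grayscale image
    by taking a weighted sum over the last (channel) axis."""
    return rgb @ np.asarray(weights, dtype=float)

# A single example pixel with R=100, G=150, B=200:
img = np.array([[[100, 150, 200]]], dtype=float)

# Uniform average, (R+G+B)/3: equal weights of 1/3 each.
uniform = to_grayscale(img, (1/3, 1/3, 1/3))   # (100+150+200)/3 = 150.0

# "Human-oriented" weighted average, 0.3R + 0.59G + 0.11B:
weighted = to_grayscale(img)                   # 30 + 88.5 + 22 = 140.5
```

Note that the green-heavy weighting pulls this pixel's brightness down (140.5 vs. 150), because its green value is below the uniform average.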

A different approach is to let the averaging weights depend on the actual image we want to convert, i.e., to be adaptive. A (somewhat) simple take on this is to choose the weights so that the resulting grayscale image has maximal pixel variance, since pixel variance is linked to the contrast of the image. In the applet above, the "optimal projection" calculates how the RGB channels of the selected image should be combined to make a grayscale image with maximal variance. [For the more technically advanced: we find the weights by taking the principal eigenvector of the sample covariance matrix of the RGB channels.]
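The adaptive scheme can be sketched in a few lines of NumPy. This is our own illustration of the stated idea, not the applet's code: stack the pixels as rows of an N-by-3 matrix, form the 3-by-3 sample covariance of the channels, and project onto its principal eigenvector.

```python
import numpy as np

def optimal_projection(rgb):
    """Return unit-norm channel weights that maximize the variance of
    the projected grayscale image: the principal eigenvector of the
    sample covariance matrix of the RGB channels."""
    pixels = rgb.reshape(-1, 3).astype(float)   # N x 3: one row per pixel
    cov = np.cov(pixels, rowvar=False)          # 3 x 3 sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    w = eigvecs[:, -1]                          # eigenvector of the largest eigenvalue
    if w.sum() < 0:                             # an eigenvector's sign is arbitrary;
        w = -w                                  # pick the overall-positive direction
    return w

# Usage on a random test image (any H x W x 3 array works):
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
w = optimal_projection(img)
gray = img @ w   # grayscale image with maximal pixel variance
```

Among all unit-norm weight vectors, this choice provably gives the largest variance (it maximizes the Rayleigh quotient of the covariance matrix), so the projection is at least as contrasty, in the variance sense, as the uniform average.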