The human eye contains three types of colour-sensitive cells, known as cones, located on the retina. They are separately sensitive to red, green and blue light. Therefore, a given pixel in a colour image can be represented as a combination of three values. In simple terms, these specify the amount of red, green and blue light that needs to be emitted by the equivalent pixel on a computer monitor in order to display that colour. The human visual system operates by superposition of the signals from the cones, and so combines the red, green and blue light to give a perception equivalent to the original colour. Note that the actual light emitted by the monitor does not replicate the full spectrum of the original light captured to produce the image: in order for the human visual system to perceive it as the same colour, it only needs to closely match the original intensities in the red, green and blue parts of the spectrum. Since light of these three colours can be added together to reproduce any colour, they are referred to as the additive primaries.
The additive primaries are only applicable to systems that work by emission of light. Systems that work by the absorption of light, such as oil painting, are based on dyes that each absorb a specific part of the spectrum. They use a different set of primary colours: cyan, magenta and yellow, which are combined in pairs. For example, in order to reflect red light from a white source of illumination, all but the red light must be absorbed, by combining the magenta and yellow dyes, which absorb green and blue light respectively. The physics of the combination process is more complicated than that for emission-based systems, and is strictly speaking multiplicative, but these primary colours are usually referred to as the subtractive primaries.
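The relationship between the two sets of primaries, and the multiplicative nature of dye combination, can be sketched in a few lines. This is an idealised illustration (the function names are our own, not part of Tina): each dye is assumed to transmit a fraction (1 - absorption) of the light in each band, so stacked dyes combine by multiplication.

```python
def rgb_to_cmy(r, g, b):
    # Idealised complement relation: each subtractive primary
    # absorbs exactly the light of the additive primary it opposes.
    return (1.0 - r, 1.0 - g, 1.0 - b)

def reflectance(dyes, illumination=(1.0, 1.0, 1.0)):
    """Combine dye layers multiplicatively: each layer transmits a
    fraction (1 - absorption) of the incident light in each band.
    Each dye is a (cyan, magenta, yellow) absorption triple."""
    r, g, b = illumination
    for c, m, y in dyes:
        # cyan absorbs red, magenta absorbs green, yellow absorbs blue
        r *= (1.0 - c)
        g *= (1.0 - m)
        b *= (1.0 - y)
    return (r, g, b)

# Magenta plus yellow dyes absorb green and blue,
# leaving only red reflected from a white illuminant.
red = reflectance([(0, 1, 0), (0, 0, 1)])  # -> (1.0, 0.0, 0.0)
```

Real pigments absorb broad, overlapping bands, so printed colour reproduction adds a black ink and per-device calibration; the sketch only captures the idealised multiplicative behaviour described above.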
We deal here only with the additive primaries red, green and blue. The colour of any given pixel is represented by a vector of three values, one for the intensity of each primary colour. The colour can therefore be considered as a point in a three-dimensional space, in which the x-axis represents red, the y-axis green, and the z-axis blue. This space is known as the RGB colour space (or RGB colour cube). Clearly, we can then rotate to any set of three non-degenerate axes in the space, producing an alternative representation of colour that still spans the same space of possible colours (known as the gamut of the colour space). Various colour spaces have been designed for different tasks: Tina provides functions to convert images into a number of the more well-known colour spaces.
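Such a rotation is simply a change of basis: any orthonormal 3x3 matrix maps an RGB triple into an alternative set of axes spanning the same gamut. The following sketch (our own illustration, not Tina's conversion functions) builds a basis whose first axis lies along the grey diagonal (1,1,1), anticipating the luminance/chrominance decomposition discussed below.

```python
import numpy as np

# Rows: one luminance-like direction and two directions orthogonal
# to it (and to each other), then normalised to unit length.
basis = np.array([
    [1.0,  1.0,  1.0],   # grey (luminance) diagonal
    [1.0, -1.0,  0.0],   # first chrominance direction
    [1.0,  1.0, -2.0],   # second chrominance direction
])
basis /= np.linalg.norm(basis, axis=1, keepdims=True)

def rgb_to_rotated(rgb):
    # Express an RGB colour in the rotated coordinate system.
    return basis @ np.asarray(rgb, dtype=float)

def rotated_to_rgb(v):
    # Orthonormal basis: the inverse rotation is the transpose.
    return basis.T @ np.asarray(v, dtype=float)

# A grey pixel lies on the luminance axis, so its two
# chrominance coordinates are zero.
grey = rgb_to_rotated([0.5, 0.5, 0.5])
```

Because the matrix is orthonormal, the transform is lossless: rotating back recovers the original RGB values exactly (up to floating-point rounding), so no part of the gamut is lost.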
Considering the RGB colour space, the point (0,0,0) represents black, and the point (1,1,1) on the opposite corner of the cube represents white. All points on the axis between these two points represent shades of grey. The information encoded along this axis is called the luminance, and represents the total intensity of light received from a scene. Grey-scale images of colour scenes are produced by recording only the luminance information. The actual colour (e.g. red, orange, yellow, etc.) is represented by a vector away from this axis in a plane orthogonal to it, and is called the chrominance. To a first approximation, luminance is dependent on the level of illumination of a scene, whereas chrominance is not. In many colour analysis tasks, such as colour segmentation, it is useful to discard the luminance information in order to remove the effects of varying levels of illumination, retaining only the chrominance information. Therefore, many colour spaces involve a rotation such that one axis of the space lies along the luminance axis. It is worthwhile to note that, whilst discarding the luminance axis of a colour space in order to normalise for illumination differences usually works to some extent, the actual situation is far more complicated: shadows, for instance, represent areas that may be partly illuminated by coloured light reflected from other objects in the scene, changing their chrominance.
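A common way to discard luminance, often used as a preprocessing step for colour segmentation, is to divide each channel by the total intensity, giving the normalised rg chromaticity coordinates. This sketch is one standard formulation, shown here for illustration rather than as Tina's own routine; it assumes the illumination change is a uniform scaling of all three channels.

```python
def chromaticity(r, g, b, eps=1e-12):
    """Normalised rg chromaticity: divide out the total intensity,
    keeping only the chrominance information. The blue coordinate
    is redundant since the three fractions sum to one."""
    s = r + g + b
    if s < eps:
        return (0.0, 0.0)  # black pixel: chrominance is undefined
    return (r / s, g / s)

# The same surface under full and half illumination maps to the
# same chromaticity, since the scaling factor cancels.
bright = chromaticity(0.8, 0.4, 0.2)
dim = chromaticity(0.4, 0.2, 0.1)
```

As the text notes, this normalisation is only approximate in practice: it assumes illumination scales all three channels equally, which fails in shadows lit by coloured inter-reflections, and it becomes numerically unstable for very dark pixels, where the guard against division by zero is needed.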