Seeing red

Can you explain why the colour red is so difficult to photograph?

It doesn’t seem to matter whether the subject itself is red or if it is bathed in red light, as the result is nearly always the same – a red area with no detail. Videos seem to suffer very badly and I have noticed the effect on television programmes such as Strictly Come Dancing, especially when the contestants wear red costumes.

I wonder why television programme makers use red so much when the result is so poor. – Chris Watkis

This problem is to do with the capability of the human eye. At some stage, mammals lost the ability to distinguish three colour stimuli, retaining only two. Primates re-evolved this ability when the long-wavelength cone split into two types (called M and L). The result is that, instead of sampling well-separated bands of the spectrum (as birds do, seeing red and green quite separately), our red and green responses largely overlap.

To reproduce colour images we need to stimulate the red and green cones separately, recreating the stimulation that the original mix of photon wavelengths would have produced. Because the M and L cone responses – green (G) and red (R) respectively – lie so close together, this is very hard. Ideally, we would choose emitter wavelengths that excite one cone type strongly while the other responds only weakly.

If we were birds, this would be easy: we’d use a green emitter at 500nm wavelength (where the green is at a peak and the red gives very little response) and the red at 580nm (where the red cones are at their peak and the greens give little response).

If we used the peaks of human vision, however, we'd need 550nm for green and 580nm for red. The result would be that a 'green' stimulus gave almost as much red response as green, and vice versa.
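The cross-talk is easy to put rough numbers on. The sketch below is purely illustrative: it assumes, hypothetically, that the M and L cone sensitivities are Gaussian in wavelength, with the 550nm and 580nm peaks quoted above and a made-up width.

```python
import math

# Hypothetical Gaussian models of the M ("green") and L ("red") cone
# sensitivities. Peaks follow the figures in the text; the width is assumed.
def cone_response(wavelength_nm, peak_nm, width_nm=40.0):
    """Relative response of a cone to monochromatic light."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

M_PEAK, L_PEAK = 550.0, 580.0  # nm

for emitter_nm in (550.0, 580.0):
    resp_m = cone_response(emitter_nm, M_PEAK)
    resp_l = cone_response(emitter_nm, L_PEAK)
    print(f"{emitter_nm:.0f} nm emitter: M {resp_m:.2f}, L {resp_l:.2f}")
```

With these assumed curves, a 550nm 'green' emitter still drives the red cones at roughly 57 per cent of its green response, and the 580nm 'red' emitter drives the green cones just as strongly in return.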

A further problem is that we don’t want to compromise the green, because this is where most of the luminance information resides – the area where we see the most detail.

So, what is usually done is to place the red emitters further towards the long-wavelength end of the spectrum, where they excite the red cones with much less overlap from the green. The cost is that the red cones are not very efficient at those wavelengths, so red vision suffers from higher noise and tends to saturate at a lower perceived brightness (because more of the available brightness range is used just to register at all).

Reproduction of reds is always problematic, but it is a human-vision problem more than a technological one. Where modern digital methods have helped is in computing the required excitations, taking the overlaps in response into account.
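That computation can be sketched as a small linear-algebra problem: given target M and L cone excitations, solve for the emitter intensities, with the overlaps sitting in the off-diagonal terms. Everything here is illustrative – the same hypothetical Gaussian cone curves as before, and assumed display primaries at 530nm and 620nm.

```python
import math

# Hypothetical Gaussian cone models (illustrative numbers, not measured data).
def cone_response(wavelength_nm, peak_nm, width_nm=40.0):
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

M_PEAK, L_PEAK = 550.0, 580.0            # nm, as quoted in the text
GREEN_EMITTER, RED_EMITTER = 530.0, 620.0  # nm, assumed display primaries

# 2x2 response matrix: rows = (M, L) cones, columns = (green, red) emitters.
a = cone_response(GREEN_EMITTER, M_PEAK)  # M cone driven by green emitter
b = cone_response(RED_EMITTER, M_PEAK)    # M cone driven by red emitter
c = cone_response(GREEN_EMITTER, L_PEAK)  # L cone driven by green emitter
d = cone_response(RED_EMITTER, L_PEAK)    # L cone driven by red emitter

def drive_levels(m_target, l_target):
    """Solve the 2x2 system for emitter intensities (Cramer's rule)."""
    det = a * d - b * c
    green = (m_target * d - b * l_target) / det
    red = (a * l_target - m_target * c) / det
    return green, red

# A strongly 'red' sensation: weak M excitation, strong L excitation.
g, r = drive_levels(0.2, 1.0)
print(f"green drive {g:.2f}, red drive {r:.2f}")
```

Under these assumptions the red emitter must be driven at well over twice the target excitation, because the L cones respond so weakly at 620nm – the same trade-off described above, where reds burn through the available brightness range just to register.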