In order to determine the contrast behavior of a digital imaging system, we need data that the operating instructions do not reveal. For this we need images that are as free as possible from artifacts of all kinds: non-uniformities of the illumination across the frame, dust particles on the sensor, and sensitivity differences between individual pixels. We can easily neutralize these effects by taking two images in succession and calculating their difference; what is left is the random noise of the useful signal together with the readout noise. To ensure uniform illumination of all pixels, we choose a subject that receives light from all spectral ranges: a white sheet of paper or a white wall, photographed under the midday light of a clear day or, even better, under daylight-balanced photo lamps. The camera is mounted on a tripod and set up so that the subject fills the frame. We focus at infinity so that any remaining texture in the subject is blurred as far as possible. Then, with a medium aperture, manual exposure control, and in-camera noise reduction switched off, we take two images for each exposure time (from shortest to longest) and each ISO level and save them as .raw files. For cameras that cannot save .raw files this test is not useful, because a tone curve is applied when the .jpeg is written, which distorts the result. Wait between shots until the camera has finished writing the data to the memory card to avoid any distortions this might cause. For later comparison of the data, also note the ambient temperature during the test.
The images are then processed as described below. It is important to use a program that can process the .raw data linearly with at least 16-bit unsigned integer values; ImagesPlus or IRIS, for example, can do this, whereas Photoshop works with signed integers. For the .raw conversion, the free program DCRAW is well suited because it passes the camera's output on unchanged.
Perform the .raw conversion with white balance and Bayer interpolation switched off (non-demosaiced).
Save the images as 16-bit .tif files.
Create image crops: Open each frame and crop it to 200 × 200 pixels from the center of the image. That gives 40,000 pixels, so the accuracy of the calculations is about 1/√(2 × number of pixels) ≈ 0.35%.
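If you prefer to script these steps rather than perform them in ImagesPlus or IRIS, the following Python sketch shows one possible way to do the conversion and cropping. It assumes the third-party packages rawpy and numpy; the file names are placeholders, and with DCRAW the options -D -4 -T should produce a comparable linear, non-demosaiced 16-bit .tif (check the manual of your version).

```python
# A minimal sketch, assuming the rawpy and numpy packages; file names are placeholders.
import numpy as np
import rawpy

def load_center_crop(path, size=200):
    """Read a raw file without white balance or Bayer interpolation
    and return a size x size crop from the center of the frame."""
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.copy()      # untouched sensor data, usually uint16
    h, w = data.shape
    top, left = (h - size) // 2, (w - size) // 2
    return data[top:top + size, left:left + size]

crop_a = load_center_crop("flat_1.cr2")          # first frame of a matched pair
crop_b = load_center_crop("flat_2.cr2")          # second frame, same exposure and ISO
```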
Determine the average data value: Open a pair of images of the same exposure time and ISO level and, for one of the two green channels (or, if desired, for each channel), take the average data value (also called Data Number/DN or Analog-to-Digital Unit/ADU) from the histogram. For older Canon cameras, you still need to subtract the bias offset to get the correct figure for the average signal strength.
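Scripted, this reading could look like the sketch below, which continues from the crops above. It assumes an RGGB Bayer layout (check your camera's pattern), and the bias value is only a placeholder to be set for your camera.

```python
# Mean data value (DN/ADU) of one green channel, assuming an RGGB Bayer layout.
BIAS_OFFSET = 0                      # assumption: e.g. 1024 DN for older Canon bodies, otherwise 0

def green_mean(crop, bias=BIAS_OFFSET):
    g1 = crop[0::2, 1::2]            # every second pixel in every second row: one of the two green channels
    return float(g1.mean()) - bias   # average signal strength in DN

average_dn = (green_mean(crop_a) + green_mean(crop_b)) / 2
```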
Keyword bias offset: Canon adds a constant voltage to the signal before quantization to keep the value above zero in any case. Without it, negative noise excursions would be clipped to zero during quantization, because the A/D converter outputs only non-negative integers. The trick preserves the full noise spectrum, which makes it easier to analyze and remove the noise. You determine the bias value as follows: Take two dark images, that is, images without the lens attached and with the body cap on, exposed for the shortest possible duration. To prevent any light from entering, it is best to do this under a blanket. Create the difference image from both, following the instructions provided above, and then open the histogram window. It will display a bell curve with a single peak, and the data value under this peak corresponds to the bias offset. For the Canon 40D and the 1D3s, for example, the bias value is 1024 DN. Nikon does not add such an offset to the signal; as the exposure time decreases, the histogram therefore piles up at the left edge, because voltage values < 0 are clipped to the quantized RAW value 0.
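If you would rather determine the peak numerically than read it off the histogram window, a short sketch like this (numpy assumed) returns the most frequent data value of a frame, e.g. a dark frame.

```python
import numpy as np

# Return the most frequent data value (the histogram peak) of a frame, e.g. a dark frame.
def histogram_peak(frame):
    counts = np.bincount(frame.ravel())   # requires non-negative integer data, as raw frames are
    return int(np.argmax(counts))
```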


Determine the standard deviation: Create the difference image for each matched pair of images using ImagesPlus: open the Image Math tool, click the Select Source 1 box and select the 1st image, click the Select Source 2 box and select the 2nd image, then click Subtract to create the difference image. Use the histogram to check that the minimum value is greater than 0. If it is not, add a constant offset (e.g. 5000) during the subtraction so that no negative values result. The subtraction eliminates all fixed pattern noise.
If you want data that includes the fixed pattern noise, you must determine the standard deviation before subtracting. The problem with using a single image, however, is that contrast differences and vignetting cause intensity variations across the image field that distort the measurement, particularly at higher exposure levels. In addition, the fixed noise components vary widely even between cameras of the same type.
Then take the value of the standard deviation from the histogram. It corresponds to the combination of readout noise and photon noise, but it represents the noise of both images together. To obtain the noise value for a single image, divide the standard deviation by the square root of 2 = 1.4142.
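The same measurement can be scripted as in the sketch below (numpy assumed, continuing from the crops and the average value computed above). Because the subtraction is done in a signed data type, no constant offset is needed, and dividing by √2 yields the noise of a single frame.

```python
import math
import numpy as np

# Noise of a single frame, from a matched pair of flat-field crops.
def single_frame_noise(a, b):
    diff = a.astype(np.int32) - b.astype(np.int32)   # signed difference: removes fixed pattern noise, no offset needed
    return float(np.std(diff)) / math.sqrt(2.0)

noise = single_frame_noise(crop_a, crop_b)
print(f"average DN: {average_dn:.1f}   noise: {noise:.3f} DN")
```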
You now have data for the exposure time in seconds, the average signal strength, and the standard deviation of the data values. The most convenient way to record the values is a spreadsheet with two additional columns, one for the log2 of the exposure time and another for the log2 of the average data values. Many calculators offer only the natural logarithm (base e) or the common logarithm (base 10), but here we want to know the power to which 2 must be raised to obtain the value X. This works as follows:
Formula 12: log2(X) = log10(X) / log10(2)

For the log2 of 1000 this gives:
Formula 13: log2(1000) = log10(1000) / log10(2) = 3 / 0.30103 ≈ 9.966
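In a script the same conversion is a one-liner; the small Python sketch below uses the change-of-base formula, and math.log2 gives the identical result. Most spreadsheet programs also offer a LOG function that accepts the base as a second argument.

```python
import math

x = 1000
print(math.log10(x) / math.log10(2))   # change of base: 9.9657...
print(math.log2(x))                    # built-in base-2 logarithm, same result
```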

Your table will then look like the excerpt below:
| Exposure time (s) | log2 exposure time | Average data value (DN) | log2 average data value | Standard deviation (DN) |
|---|---|---|---|---|
| 0.333 | -1.585 | 15438.3 | 13.91 | 54.999 |
| 0.250 | -2.000 | 12038.3 | 13.55 | 49.787 |
| 0.200 | -2.322 | 9633.4 | 13.23 | 48.500 |
| 0.166 | -2.585 | 7658.8 | 12.90 | 43.271 |
| 0.077 | -3.700 | 3841.9 | 11.91 | 31.015 |
| 0.040 | -4.644 | 1893.0 | 10.89 | 22.199 |
| 0.020 | -5.644 | 967.2 | 9.92 | 15.782 |
| 0.010 | -6.644 | 480.3 | 8.91 | 11.525 |
| 0.005 | -7.644 | 225.8 | 7.82 | 8.641 |
To visualize the contrast behavior of the sensor, have the spreadsheet plot the average data values against the exposure levels in a graph with logarithmically divided axes. You can do this for all three color channels or limit yourself to the most informative green channel. The result is curves like those in figure 36.
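If you would rather plot outside the spreadsheet, a sketch along the following lines (assuming numpy and matplotlib, and using the green-channel values from the table above) produces the same kind of plot.

```python
import numpy as np
import matplotlib.pyplot as plt

# Green-channel values from the table above.
exposure = np.array([0.333, 0.250, 0.200, 0.166, 0.077, 0.040, 0.020, 0.010, 0.005])
mean_dn  = np.array([15438.3, 12038.3, 9633.4, 7658.8, 3841.9, 1893.0, 967.2, 480.3, 225.8])

plt.plot(np.log2(exposure), np.log2(mean_dn), "o-")
plt.xlabel("log2 exposure time (s)")
plt.ylabel("log2 average data value (DN)")
plt.title("Sensor characteristic curve")
plt.show()
```

The points fall on a straight line with a slope of about 1, which is exactly the linearity discussed below.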

Interpretation of the curve
The curve shows that electronic image carriers like CCD and CMOS chips have a linear characteristic throughout their usable range. This sets them apart from AgX carriers, whose density curves have nonlinear regions as a result of the development process. The reason is that the silicon responds to incident light by releasing electrons at a constant average ratio of 2:1. However, this linearity (twice as much exposure becomes twice as much voltage and, after quantization, twice as large a data value) presents us with problems, because our visual system processes differences in brightness in a quasi-logarithmic fashion, as we learned in the section "The minimum size of brightness differences." This fundamental difference repeatedly causes confusion and misunderstanding in practice: digital data must be corrected before it appears "correct" to us. The following sections on gamma correction demonstrate this process.
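As a small preview of those sections, the sketch below applies a simple power-law (gamma) correction to normalized linear values; the gamma of 2.2 is only a typical display value, not a figure taken from any particular camera.

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Map normalized linear sensor values (0..1) onto a perceptually more even scale."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

print(gamma_encode(np.array([0.01, 0.1, 0.5, 1.0])))   # dark values are lifted far more than bright ones
```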
One more thing can be read from the characteristic curve in figure 36: its linear course runs at both ends straight into underexposure and overexposure, respectively. Unlike silver film, which transitions smoothly from white to light gray and from dark gray to black, this beast mercilessly cuts off light values beyond its sensitivity range. The result is abruptly blown-out highlights or inky black shadows. These white or black areas contain no information at all, so nothing can be recovered from them later. When enlarging from silver film, by contrast, areas in the toe or shoulder of the curve can still be made visible by dodging or burning in. Excessively bright areas also tend to spill over, causing neighboring image areas to appear white as well (blooming). Since this abrupt clipping in bright areas is more noticeable (for example, as structureless patches in clouds) than the loss in the shadows, you should concentrate on exposing for the highlights with digital cameras. It is helpful to have the clipping highlighted on the screen or in the viewfinder, i.e., the areas that become pure white; some digital cameras make these areas flash alternately in black and white when playing back the image that has just been taken. The histogram also provides information about the distribution of brightness.