Intermediate

Exposure to the right (ETTR)


Introduction

There have been many debates around this sometimes misunderstood technique.

What’s the point?

It makes the most of your sensor’s physical capabilities (the hardware) so that it records the maximum amount of tonal (contrast) information from the scene you are photographing.

How do we do it?

It involves shifting the exposure toward the bright areas (over-exposing) without saturating the sensor, so as to retain as much data as possible.

Why does it work?

Because of the way the sensor works and the way we store data.

Why is it not automatic?

It is not always possible; there are limitations.

In practice it is used, but more flexibly and with some approximations.

I want to know everything

Keep reading.

In what follows, I will regularly draw an analogy between light and sound. Sound is a wave that we capture with an electronic system; the same is true of light (even though they are not the same type of wave). Sound’s measurement scale (the famous decibel) is logarithmic, and the same principle applies to light.

Part I: The sensor

As you know, your sensor is made of pixels (or photosites) that are sensitive to light.

Imagine that each pixel is a small microphone. It is therefore able to record a zero signal which corresponds to perfect silence (black) but it also has a maximum capacity beyond which it saturates (and displays white).

Your pixel is that small microphone: it sends back a value within some range, say 0 to 100 or 0 to 100,000. The larger this maximum value, the more shades between black and white your pixel can distinguish.

This is called the pixel depth.

Different pixel depths and shades of gray
We can clearly see the difference between 3 bits and 16 bits in terms of shades of gray

Imagine that your pixel depth is 16 bits. It can therefore send a number between 0 and 2^16 − 1 = 65,535.

If you meter the light on a dark element, and your brightest element contains lots of detail that would all require numbers greater than 65,535, those pixels saturate: they report 65,535 and the information is lost.

Note: this has nothing to do with the number of bits used for colors. Here we are talking about the “raw bit depth”: a genuinely physical capacity of the pixels to deliver their signal on 12, 14 or 16 bits.
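The clipping described above can be sketched in a few lines of Python. The function name and the 16-bit ceiling are illustrative, not from any real camera SDK:

```python
# Sketch of a pixel saturating at its maximum raw value.
# `read_pixel` and the 16-bit ceiling are illustrative, not a real API.

RAW_MAX_16BIT = 2**16 - 1  # 65,535

def read_pixel(photon_signal: int, raw_max: int = RAW_MAX_16BIT) -> int:
    """Return the raw value a pixel reports: linear until it clips."""
    return min(photon_signal, raw_max)

# Two bright details that differ in the scene...
print(read_pixel(70_000), read_pixel(90_000))  # 65535 65535
# ...both clip to 65,535: the difference between them is lost for good.
```

Once two different scene values map to the same saturated raw value, no amount of post-processing can tell them apart again.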

Part II: Your brain, this logarithmic engine

You are in a dimly lit room in front of your computer consulting Bush Pixel, you look out the window, there are clouds but it is bright.

It’s brighter outside than inside, but by how much? Twice as bright? Three times? Well, no: somewhere between 128 and 256 times brighter!

Your brain converts this exponential difference into a roughly linear one so that you don’t experience a jarring sensation; in effect, it applies a logarithm.

Your brain does the same for sound with your ear.
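This logarithmic relationship is exactly what photographers count in “stops”: each stop is a doubling of the light. A quick sketch in plain Python, nothing camera-specific:

```python
import math

def stops(ratio: float) -> float:
    """Express a brightness ratio in photographic stops (log base 2)."""
    return math.log2(ratio)

print(stops(128))  # 7.0 -> "128x brighter" is only 7 stops away
print(stops(256))  # 8.0
```

A ratio that sounds enormous (256x) is a perfectly ordinary 8 stops to your eye.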

Part III: Exponential storage

Unfortunately, in computing and electronics we have nothing that recreates this: your microphone’s membrane vibrates twice as much if the sound is twice as strong, so the signal is twice as large.

The same goes for light: if it is 128x brighter outside, the pixel signal will be 128x stronger.

Let’s apply our 16-bit gray scale to the image above.

The question is how the shades of gray are distributed over these 65,536 possible values. Does the middle point (65,536 / 2 = 32,768) correspond to middle gray? Not at all! That value is almost completely white, on the right side of the diagram above.

The perceptual middle is not (2^16) / 2 = 32,768 but 2^(16/2) = 2^8 = 256.

Middle point

You can see that around this midpoint you have far less precision than on the right: a few values lower and you are in very dark tones, a few values higher and you are in very light ones.

At 256 you are in the middle. Between complete black and the middle you therefore have 256 possible values, 256 shades. On the right, you have the rest: 65,535 − 256 = 65,279 shades. So you have far more nuance on the bright side.
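Here is a small sketch of that distribution, counting how many 16-bit linear raw values fall inside each one-stop interval [2^k, 2^(k+1)) (the stop boundaries follow the usual photographic convention; the dictionary is purely illustrative):

```python
# Number of 16-bit linear raw values inside each one-stop interval
# [2^k, 2^(k+1)): the width of that interval is exactly 2^k values.
values_per_stop = {k: 2**(k + 1) - 2**k for k in range(16)}

print(values_per_stop[15])  # 32768 -> the brightest stop uses half of all values
print(values_per_stop[8])   # 256
print(values_per_stop[0])   # 1 -> the darkest stop gets a single value
```

Every stop up gets twice as many code values as the one below it, which is exactly why the bright side of the histogram is so much richer.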

That is exposure to the right. Instead of exposing your subject “correctly”, you deliberately overexpose it to store more tonal detail, then bring the exposure back down in post-processing and recover all the detail.
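A minimal numeric sketch of the idea, assuming an idealized noise-free linear 16-bit sensor (all names and values are made up): overexposing by two stops before quantization records the darkest tones on roughly four times as many distinct raw codes.

```python
# Illustrative linear-raw model of ETTR: no noise, a 16-bit ceiling,
# and a made-up set of very dark scene tones.

def quantize(signal: float, gain: float, raw_max: int = 2**16 - 1) -> int:
    """Scene signal in [0, 1] -> integer raw value at a given exposure gain."""
    return min(int(signal * gain * raw_max), raw_max)

shadows = [i / 1_000_000 for i in range(100)]      # very dark tones

normal = {quantize(s, gain=1.0) for s in shadows}  # metered exposure
ettr   = {quantize(s, gain=4.0) for s in shadows}  # +2 stops, pulled down in post

print(len(normal), len(ettr))  # 7 26 -> ETTR kept ~4x more distinct shadow codes
```

The extra distinct codes survive when you darken the image in post, which is where the recovered shadow detail comes from.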

Part IV: Concretely

To control exposure, you know the exposure triangle: aperture, shutter speed and ISO.

Forget the ISO.

Do not raise the ISO. Don’t touch it.

ISO does not bring new information to your pixels; it is signal amplification. It is artificial: it simply scales every pixel’s value the same way without adding any new data.

ISO is useful when you are after a stylistic effect (depth of field or motion) and the light is too low to obtain it without amplification.
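To illustrate, here is a toy model of ISO as pure amplification on an assumed 14-bit sensor (all numbers are made up): shadows are scaled up, highlights clip earlier, and no new detail appears anywhere.

```python
# Toy model of ISO as pure amplification on an assumed 14-bit sensor.
RAW_MAX = 2**14 - 1  # 16,383

def capture(photons: int, iso_gain: int) -> int:
    """Raw value under pure gain: everything is scaled, nothing is added."""
    return min(photons * iso_gain, RAW_MAX)

levels = (100, 200, 8_000, 12_000)
print([capture(p, 1) for p in levels])  # [100, 200, 8000, 12000]
print([capture(p, 8) for p in levels])  # [800, 1600, 16383, 16383]
# Shadows are multiplied (no new detail), highlights clip earlier.
```

Note how the two bright levels, perfectly distinct at base gain, become the same saturated value once amplified: gain can only destroy highlight information, never create shadow information.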

Your only two variables are aperture and shutter speed.

If your subject is moving and you are trying to freeze it, you cannot slow the shutter, so you will adjust the aperture.

Conversely, if you are after a particular depth of field, you will adjust the shutter speed.

Objective: Expose to the right without saturating too many pixels

How? With your histogram, of course.

Histogram in the viewfinder
The histogram at the bottom right shows a peak in the dark tones (almost the whole scene) and a peak in the whites (far right) that is already saturating: the sky is blowing out.

You will shift the histogram as far as possible to the right, trying not to have a vertical peak pressed against the right edge, the white end.

Some cameras also allow you to display the saturated areas with red or black flashes.

In practice you will often saturate part of your scene, but that is a compromise: it does not matter if the sun’s disc or a small cloud in the distance saturates when your subject is not the sky.
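That compromise comes down to a simple question: what fraction of the pixels is saturating? A sketch of the check the histogram and the highlight warnings let you make by eye (the scene values here are invented):

```python
# Illustrative check: what fraction of raw pixels hit the ceiling?
RAW_MAX = 2**16 - 1

def clipped_fraction(pixels: list) -> float:
    """Share of pixels at or above the saturation value."""
    return sum(1 for p in pixels if p >= RAW_MAX) / len(pixels)

# A dark scene with a small blown patch of sky (made-up values).
scene = [500] * 9_500 + [RAW_MAX] * 500
print(clipped_fraction(scene))  # 0.05 -> 5% clipped, maybe an acceptable trade-off
```

Whether 5% is acceptable depends entirely on whether those pixels belong to your subject.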

Should you use it all the time?

No. Sometimes your aperture, shutter speed or lens constraints do not allow you to shift any further to the right.

On safari you will not necessarily have time to adjust; with a long focal length, depth of field is already very shallow, so opening the aperture too much risks blurring part of your subject, and lengthening the exposure time risks motion blur.

Also keep in mind that the histogram you are using is generally a single one that does not break out the three red / green / blue channels. It may look unsaturated because green and red are not clipped while blue is completely blown out. You then get more detail in the greens and reds but not in the blues, which can distort the colors.
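A per-channel check catches this case. Here is an illustrative sketch (names and values invented, 8 bits per channel for simplicity):

```python
# A single luminance histogram can hide a blown channel: overall brightness
# can look safe while one color channel is already at the ceiling.
# All names and values here are illustrative.
RAW_MAX = 255  # 8-bit per channel, for simplicity

def clipped_channels(pixels: list) -> set:
    """Return the names of the channels that saturate somewhere."""
    blown = set()
    for r, g, b in pixels:
        for name, value in zip(("red", "green", "blue"), (r, g, b)):
            if value >= RAW_MAX:
                blown.add(name)
    return blown

sky = [(120, 180, 255)] * 100  # blue channel fully saturated
print(clipped_channels(sky))   # {'blue'}
```

This is why some cameras offer separate RGB histograms: they reveal a blown blue sky that the combined histogram hides.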

Finally, even though your photos will ultimately contain more detail after post-processing, on your screen they will all look over-exposed. Unless you do post-processing every night, you will not enjoy your photos during the trip. What a pity! I love debriefing the day with my wife in the evening, and the photos are a great opportunity to exchange.

My advice: take your photo normally, and if you have time or the scene is really worth it, take a second shot exposed as far to the right as possible without the peak touching the right edge of your histogram. Never forget: a saturated pixel is lost for good, while a dark pixel always contains information.
