What is "Good" Skin Tone?

This is the photo that caused me to return a camera. Which is kind of crazy because I really do like this photo a lot.

Mila

I've spent a lot of time thinking about photographing skin tone - what is good or bad skin tone? And most important of all - does shooting RAW mean you can always get good skin tone?

Evolution

We evolved to see green first. Green actually covers a pretty broad spectrum, but most importantly it's the color of foliage - of plants we can eat.

[Image]

We then evolved to see red. The red spectrum overlaps with green considerably, which means we're very attuned to seeing subtle differences in this range. The benefit of this is that we can now see fruits and predators against the foliage.

Most importantly for our discussion - we evolved to see subtle differences in skin tones: which fruits or meats were good to eat, which people were healthy or not. Millions of years of evolution made us very sensitive to "good" or "bad" skin tone - whether we're consciously aware of it or not.

[Image]

Above: Notice how skin tone is an interplay of green + red to make yellow.

Finally we evolved to see blue. Blue is further away from red and green, and we have the least sensitivity to it, meaning we're less capable of discerning subtle differences in blues.

[Image]

Above: Notice how good skin tone is an interplay of yellow + blue and how quickly it can become too yellow, too red, too magenta. Also notice how the green sections seem the most solid - we still perceive green as the most important color.

The addition of blue also gave us the strange color magenta - the absence of green. It triggers the red and blue receptors at opposite ends of our visible spectrum, which is why magenta is so rare in nature.

Human skin tone is a very delicate balance of these three colors - get it right and someone looks healthy. Get it wrong and - disaster.

What is Bad Skin Tone?

Before we get to good skin tone, let's take a look at bad skin tone. I've stumbled on a few cameras that I think give consistently bad skin tone and found some examples on Flickr.

[Image]

Bad skin tone manages to be too yellow and too magenta/red at the same time. If you look closely at your own skin, your palm or the inside of your wrist is often a good example. Your skin will likely have bits where it's thinner or thicker - veins and capillaries closer to the surface or not. This creates a subtle variation in hue. Looking at my own hands in good light (indoor daylight), I can see a leathery spot that comes from using a computer mouse all day long, and very blue veins next to fairly red arteries in my wrists.

Not everyone will have this much variation (nor this transparent skin) but getting this balance right is important no matter what your skin tone.

[Image]

We all have these subtle variations in skin tone, but some cameras exaggerate it more than others. What makes this difficult to work with is that red and yellow are adjacent on the color wheel, and magenta is almost but not quite opposite yellow.
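To put rough numbers on that, here's a minimal Python sketch (using the standard library's colorsys module) that prints hue angles for red, yellow, magenta and a sample skin color - the "skin" RGB value is an assumed, illustrative number, not a measurement:

```python
import colorsys

# Hue angles on the standard HSL/HSV color wheel (0-360 degrees).
# The "skin" RGB value is an assumed, illustrative sample.
samples = {
    "red":     (255, 0, 0),
    "yellow":  (255, 255, 0),
    "magenta": (255, 0, 255),
    "skin":    (220, 170, 140),
}

for name, (r, g, b) in samples.items():
    h, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    print(f"{name:8s} hue = {h * 360:5.1f} deg  saturation = {s:.2f}")

# red sits at 0 deg, yellow at 60 deg, magenta at 300 deg - skin hues
# fall in the narrow band between red and yellow, with magenta looming
# just on the other side of red.
```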

[Image]

When I start to complain about this, usually someone chimes in with a very helpful "You get good skin tone by shooting RAW and with post processing skills." Translation: the equipment is never to blame, it's all your fault.

I hate this argument. It's neither helpful nor informative. It just places blame without offering a solution.

So let's try to fix this skin tone. I just have the JPGs of these files, but I can still open them up in Photoshop and do the usual editing moves.

First let's try to reduce the reds with curves. Well that's a little better.

[Image]

Red is opposite cyan, though, so by reducing red I've introduced cyan, and now he's looking a little sickly. Maybe I should have reduced the saturation of the red channel instead. You can see how this gets complicated & these decisions require knowledge and skill.

Mind you, this is a particularly strong example and this gentleman's skin may well be "either red or yellow" with little in between. My contention is that some cameras emphasize this more than others.

This is the kind of thing that would drive me mad - it's impossible to get rid of one color without emphasizing another. Getting rid of yellow introduces blue, which makes the reds more magenta. It's all connected.

If only there were a tool that could just sort of suck the colors from different points on the color wheel and bring them together, reducing the variation in hue.

I always assumed that I could reduce the variation in color if I decided to work in LAB color mode, but could never quite figure out what to do to achieve the results I wanted.

Enter Capture One. It was tremendously validating to me when I found out that Capture One had a Skin Tone tool that did exactly this - increase uniformity in Hue, Saturation and Lightness around a selected range. This edit took maybe 30 seconds - another 30 seconds and it would be dialed in even better. (I went a bit too far in making things uniform and I would want to further tweak the hue. I just wanted to show the power of this tool to quickly resolve the "too yellow or too red" issue.)

[Image]

Finding this tool, I realized I was right - "good" skin tone is reducing variation around a central "good skin tone" range.
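As a rough illustration of that idea - and definitely not Capture One's actual algorithm - here's a Python sketch that pulls pixel hues toward a target skin hue by a strength factor. The target hue and the sample pixels are assumed values, just for demonstration:

```python
import colorsys

def pull_hue_toward(rgb, target_hue_deg=25.0, strength=0.5):
    """Toy skin-uniformity adjustment: reduce how far a pixel's hue
    strays from a target hue, leaving saturation and value alone.
    Not any vendor's real code - just the concept."""
    r, g, b = (c / 255 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    hue_deg = h * 360
    # signed hue difference with wrap-around, in the range -180..+180
    diff = (hue_deg - target_hue_deg + 180) % 360 - 180
    new_hue = (target_hue_deg + diff * (1 - strength)) % 360
    nr, ng, nb = colorsys.hsv_to_rgb(new_hue / 360, s, v)
    return tuple(round(c * 255) for c in (nr, ng, nb))

# Two assumed sample pixels: one leaning too yellow, one too red/magenta.
for pixel in [(225, 180, 120), (215, 140, 150)]:
    print(pixel, "->", pull_hue_toward(pixel))
```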

And of course this is all very subjective - "good" skin tone is a subjective term after all. All color is subjective - it only exists inside our minds.

RAW is RAW

This is my argument against "if you shoot raw and have post processing skills, you can get good skin tones with any camera."

Before discovering the Capture One tool, my method for this would have been to create Curves layers in Photoshop, correct for the "too yellow" and the "too red" on separate layers, and then paint each in as needed to reduce the variance between the two.

This is a technique I learned from a Lee Varis tutorial - and he, as far as I can tell, has some skill in post processing.

While the results were good, it was sort of tedious and annoying and not something I could do in batch.

Because "good skin tone" as I'm defining it is such a narrow target "neither too red nor too yellow" it's very easy to overshoot it - not only that but you can't pull it in one direction without pushing it in the other.

I contend that some sensors are overly sensitive to this. The color filter array that allows them to detect color is itself tuned to be more or less sensitive to specific frequencies & if those sensitivities don't overlap in just the right way, it exaggerates what should be a subtle difference in color.

I've never shot Leica, but my impression is that this is the case with their cameras.

The following photographs are images I've come across from Leica cameras that display this tendency.

This is a very heavily edited photo taken with a Leica M 240 - to the point where you may wonder whether the red/magenta vs. yellow issue is a purposeful effect.

[Image]

What about this presumably unedited photo of a popular YouTube camera reviewer, taken with the Leica SL2?

[Image]

Or this more subtle image from another popular YouTube camera reviewer. At first glance, it looks fine.

[Image]

What's all the more interesting is that this comes from a video, so you can track the same patch of skin across time.

[Image]

A small change in luminosity leads to a large shift in hue* and it unfolds in real time in front of your eyes.

* no pun intended

So if you're shooting RAW and you import a file whose colors are simultaneously too red/magenta and too yellow - how do you edit them?

My contention is that "RAW is RAW" is a fallacy: while RAW files do offer a lot of latitude in how they're processed, it's possible to overshoot the mark, and it takes a delicate hand to dial things back.

Most people think the RAW file and RAW conversion process are equal - each having a 50% impact on the final result.

I agree that they're equal, but 50/50 would be additive, and the relationship isn't additive - it's multiplicative.

It's not 50 + 50, it's 10 x 10.

10 x 10 = 100.

This means that any tiny difference in the RAW file is multiplied by any tiny difference in the RAW conversion process. If both are 1% off (9.9 x 9.9 instead of 10 x 10), the end result is about 2% off (9.9 x 9.9 = 98.01).
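One way to formalize that framing: if the RAW file is off by a small fraction e1 and the conversion is off by e2, their multiplicative combination is off by roughly the sum of the two.

\[
(1 - e_1)(1 - e_2) \approx 1 - (e_1 + e_2), \qquad \text{e.g.}\ 0.99 \times 0.99 = 0.9801 \approx 2\%\ \text{off}.
\]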

And when editing skin tone, 2% off is a lot.

Which brings us back to the photo that made me return a camera. I like to think I have some skill when it comes to editing photos. If a camera's RAW file and the way Adobe treats it don't get me to within 2% of what I want, I can only envision a future of fighting with that camera to get the colors I want.

Was I irrational in returning it? Maybe. Could I have tamed it? Maybe, but I had a limited window to return it and I made my decision.

What Photography can learn from Videography.

In video editing there's something called a "skin tone line" - a line on the vectorscope (a hue/saturation scope) that represents "good" skin tone. It gets you to a ballpark region where most people's skin tones will look good, regardless of ethnicity. You can think of it as white balance for skin tones.
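As a sketch of what that check involves (the ~123 degree target below is the commonly quoted graticule angle and should be treated as approximate, and the sample pixel is an assumed value), here's how a pixel's vectorscope angle can be computed from its Cb/Cr components:

```python
import math

def vectorscope_angle(r, g, b):
    """Angle of a pixel on a Cb/Cr vectorscope, in degrees.

    Uses BT.601 luma weights to form Cb and Cr, then measures the angle
    counterclockwise from the +Cb axis. A minimal sketch - real scopes
    differ in graticule orientation and in the exact color matrix."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)   # scale factors from the BT.601 definition
    cr = 0.713 * (r - y)
    return math.degrees(math.atan2(cr, cb)) % 360

# Assumed sample skin pixel; ~123 degrees is the commonly quoted
# position of the skin tone line - treat it as approximate.
print(f"{vectorscope_angle(220, 170, 140):.1f} deg (skin tone line ~123 deg)")
```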

Then there's the amount of saturation - darker skin tones tend to be more saturated, which means they're also more sensitive to slight variations in hue overall. If the camera gets the skin tone a little bit wrong on a pale person, it will get it a lot wrong on a person with a darker skin tone. (Though subtle variations in hue tend to be exaggerated more in lighter skin tones.)

Modern cameras tend to go for a more "punchy" saturated look, which can exaggerate subtle differences in color processing for darker skin tones. Take a look at the difference between "Standard" (left) and "Portrait" (center) color modes on this camera. To the right is the X-Rite ColorChecker calibrated RAW version.

In each row there are differences between each image, but the more pigmented the skin, the greater the difference from image to image. All images have the same white balance.

[Image]

In no case would the X-Rite version have been my preferred starting point - it's too green.

Check out this video to learn more about the "skin tone line" and how videographers use it to get "good" skin tones regardless of the camera used or the lighting situation.

Final Thoughts.

This post won't put to rest the "RAW is RAW" argument, nor the "getting good skin tone is a matter of skill in post processing" argument.

What I hope is that it introduces the idea that subtle variations in our real skin tones can be exaggerated by the imaging pipeline. That some cameras are overly sensitive to this and exaggerate that difference. That regardless of skill, some things are difficult to fix in post.

PS - the RAW files from some of the Leica SL2 images are available to download on the DP Review website, so you can use them to debunk me all you want. :)

Credits

Model for first image.

https://www.instagram.com/milabogofficial/

Open License Images

https://www.flickr.com/photos/jaap_spiering/35280792671/

https://www.flickr.com/photos/rossap/8553766697/

Copyright Reserved Images

https://www.flickr.com/photos/32681588@N03/26665342303/

https://www.youtube.com/watch?v=yilmHb7gx10

https://www.youtube.com/watch?v=ajCU94KAUzU

These images are used under fair use.

If you are one of the owners of these images, and want me to remove them under DMCA, please notify me by using the contact form on this website.

Camera JPG Portrait Shootout

Straight out of Camera Shootout

Which camera is the best for skin tones? Does it vary from skin tone to skin tone? Does it vary in different lighting conditions?

In 2019 I decided to put this to the test. I bought cameras from (almost) all of the major camera manufacturers and photographed a number of different models with different skin tones, in studio and natural lighting conditions.

These are entirely subjective tests.

The Photos

In the following grids, each camera manufacturer is on a single row. All of the photos are straight out of the camera using various color profiles (Standard, Portrait, etc.).

The goal is for you to choose the row whose skin tones you prefer the most. This is entirely subjective.

Each poll opens in a new tab; close the tab to come back to this page. If you want, you can vote from multiple devices.

Models presented in alphabetical order.

Anastasia

[Image]

Vote Anastasia (opens in new tab)

Jewelz

[Image]

Vote Jewelz (Studio) (opens in new tab)

Jewelz Natural Light

(Row A is missing intentionally)

[Image]

Vote Jewelz (Natural Light) (opens in new tab)

Tatiana

[Image]

Vote Tatiana (Studio) (opens in new tab)

Tatiana Natural Light

[Image]

Vote Tatiana (Natural Light) (opens in new tab)

Yesenia

[Image]

(Sadly, row F is overexposed.)

Vote Yesenia (opens in new tab)

Methodology

Cameras were set to factory default prior to shooting.

I use the zone system rather than ETTR for exposure. That is - I expose for the amount of light hitting the subject and not the subject itself.

All models were told to show up without makeup.

This is my general process for setting up cameras for photos & the equipment I used.

I own an Alien Bee ARB800 Ring Flash, and it is the only light source for the studio photos.

I set the camera to base ISO and to a relatively moderate aperture (around f/5.6) and set the flash output with a Lastolite grey card so that the resulting back-of-the-camera histogram is in the middle. I then set the white balance in camera using either a WhiBal or an ExpoDisc.

I then take photos using the various color modes on the camera. If the camera allows for in-camera RAW editing, then I set the different color modes after the fact.

When I switch between cameras, I use the aperture to dial the exposure back in - again using the Lastolite grey card, to within 1/3 stop of center. I try not to change the flash intensity if I don't have to, just to keep as few variables in play as possible.

All cameras were shot with the best lens I owned for that system, made by that camera's manufacturer - with the exception of Sony, which I photographed with a Leica 90mm Summicron. (I don't own any Sony native lenses.)

For the natural light photos, I took them in a room that has good natural light, set the white balance with a WhiBal card and set the exposure with a Lastolite grey card.

Cameras

Here are the answers for the rows - these are the most complete sets; sometimes images are missing, but it should be easy to guess which.

Please do not click until you've voted.

Camera List (opens in new tab)

The Shirley Project

Kodak used a model named Shirley to test their new film emulsions. If an emulsion made Shirley look good, it made people look good. The problem was, Shirley was white.

The resulting image became known as the Shirley Card.

It wasn't until the 1970s - when chocolate and furniture makers started complaining that their products looked flat and boring - that Kodak started incorporating more models into its tests and developing film emulsions that made people with a wide variety of skin tones look good.

This is, in part, an exploration of whether or not this still holds true in the digital age. In-camera color profiles have opinions, and those opinions may impact people with different skin tones differently.

Voting Results

Final Thoughts

The differences and similarities between cameras are striking.

I'm a long-time Nikon shooter and I love their colors for natural light - but now I see why I went away from them for studio work.

The difference between Standard and Portrait for Nikon and Canon seems to be "make things pink".

Fuji's X-Pro2 is interesting - it seems that as you go through the color profiles from Provia to Astia to Pro Neg Hi to Classic Chrome, the saturation gets turned down - specifically the warmer saturation (the photos get less orange).

In fact, a lot of the shift from Standard to Portrait involves how much saturation the profiles include, and that affects darker skin tones more than lighter ones.

The differences between the X-Pro1 and X-Pro2 are striking.

I am quite fond of Canon colors and, interestingly, Olympus colors - though the iEnhance mode was horrible on Jewelz under studio lighting (which is why I didn't include it). I think iEnhance is meant more for landscapes or something.

Some cameras are better geared towards studio photography than others. Olympus in particular doesn't have a histogram that helps me estimate where middle grey is. The Sony A7 makes it difficult to do in-camera white balance with strobes (a deficiency that has been addressed in later models).

Some people just shoot RAW and don't care about camera JPGs - personally if the camera can get me closer to something I like, I'm all for it. Less work for me before handing over files to a client.

Final Survey

If you would like to stay up to date on this project, please consider signing the mailing list where I notify you of new blog posts.


If you have a moment, please take this 4 question survey to provide feedback on this project.

Social media

Me

https://www.instagram.com/sodiumstudio/

Models

https://www.instagram.com/vershinina.anastasia/

https://www.instagram.com/_jewelz.alize/

https://www.instagram.com/tatulyatay/

https://www.instagram.com/yesenialinares/

Discussion

Discussion Thread Fuji Astia Comparison with voting

Calibration - Can it eliminate differences between cameras?

A tale of two color profiles

In my previous blog post, I looked at RAW files and how they get turned into RGB values. Let's take a deeper dive into this.

According to a white paper published by Adobe, RAW conversion has the following steps.

  • Determine the White Balance

  • Colorimetric Interpretation - turning the RAW values into RGB values

  • Gamma Correction - converting linear RGB data into a non-linear color space (humans perceive the world in a non-linear manner)

  • Noise Reduction, Antialiasing and Sharpening

I'm guessing "Colorimetric Interpretation" is the step where color profiles come into play - different interpretations of the RAW data. (I'm not a scientist, I'm a photographer looking for practical answers.)

Cameras have color profiles - in-camera Standard, Portrait, Landscape etc. Adobe also has color profiles "Adobe Standard" etc. And you can create a custom profile with a ColorChecker.

The ColorChecker was invented in the 1970s. It contains patches for the six primary and secondary colors - Red, Green, Blue, Cyan, Magenta and Yellow - as well as patches intended to match the sky, human skin tones and other common objects.

It's used to create a camera profile, intended to reduce differences between cameras - bringing cameras towards a neutral standard.

I profiled my Canon 5D mk2 - this is what the profile looks like.

[Image]

The wheel in the profile is an HSL wheel. Hue, Saturation and Lightness.

Hue is represented by the compass direction. Saturation is distance from the center (the center is completely desaturated - grey). Lightness isn't shown here, but it would be represented as height. There is also an HSV wheel - Hue, Saturation and Value. Value is akin to Lightness - the higher the Value, the less black is mixed in with the hue.

[Image]

Thanks to the work of Sandy from the excellent ChromaSoft blog, we have the ability to visualize what color profiles are doing in three-dimensional HSV space.

Below are representations of the Standard and Portrait camera profiles from Adobe for the Canon 5D mk2 in the HSV space. The color disc represents the bottom of the HSV space and should be black - it's a limitation of the software (Apple Grapher) that it's not at the maximum Value.

[Image]

Sandy calls these "hue twists" - the color "twists" around as Value increases. Interestingly, some colors like red and blue twist more than they rise, and most colors "twist" in a clockwise direction, which matches the color profile I produced with my ColorChecker.

The color profiles represented here are designed to have opinions - "Camera Standard" isn't a neutral color profile, nor is "Camera Portrait" - they're Adobe's representations of the camera manufacturer's color intentions for those profiles.

I've Profiled My Camera - Now What?

Below are two photos - one taken with the Fuji X-Pro1 and the other with the Fuji X-Pro2. Both with custom white balance & calibrated to an X-Rite ColorChecker. (That's a lot of X's.)

[Image]

And here's proof of the custom white balance - this was taken with an ExpoDisc. The histogram isn't in the center because I find the ExpoDisc isn't good for exposure, but it's great for white balance.

[Image]

Personally, I prefer the X-Pro1 version. I find the X-Pro2 version to be too warm. Perhaps I can use white balance to fix that. The version below is much closer to the X-Pro1 (just eyeballing it - I didn't measure RGB values).

[Image]

And here are those same white balance values applied to the ExpoDisc image - you can see from the histogram that it is far from "neutral" now.

[Image]

What the heck just happened?

I suspect it has to do with those hue twists - that clockwise rotation. Neutral is still neutral, but the colors shift around that neutral center. I've drawn arrows on the HSL chart to mimic what changing the white balance does. It moves the center somewhere else.

[Image]

Which is perhaps why I get such wildly different colors when calibrating and setting custom white balance.

[Image]

Notice the X-Pro2 is again much warmer than the X-Pro1.

So we have this paradox: in order to get a more "neutral"-looking image, we need to make it measurably less neutral. Thinking back to the RAW conversion steps - first determine the white balance, then perform the RAW-to-RGB conversion. It seems that something happens between those two steps that affects "color" and "neutral" objects differently.

Final Thoughts

Honestly, I'm still processing what this means for color profiles & camera calibration using a ColorChecker - these blog posts are forcing me into new territory.

The rotational nature of color profiles is fascinating ("hue twists").

How do White Balance & Color Profiles really interact?

Does the fact that hues twist differently according to "Value" have implications for light vs dark skin tones?

My goal with all this testing was to

a) figure out if calibration could eliminate differences between cameras (it seems not)

b) figure out which camera I like the most (I'm not here to offer gear reviews so I'll leave my choices to myself)

c) provide you with tools to help you choose a camera (more on this in an upcoming blog post)

d) see if there was any "bias" that affected different skin tones differently

At this point, I think I have more questions than answers, but I'm definitely having fun with all this.

Sign up for email updates here, I’ll only email occasionally with blog updates.

https://tinyletter.com/sodiumstudio

Credits & References

https://chromasoft.blogspot.com/

https://commons.wikimedia.org/w/index.php?curid=9801673

https://www.instagram.com/_jewelz.alize/

https://www.instagram.com/vershinina.anastasia/

Camera Color Science - Does it Exist? (and if so what to do about it)

What is Color Science Anyway?

Color doesn't exist in the world - it only exists in the mind. Light exists in the world, and it has properties like frequency and intensity. It strikes an object (like a plant) and some frequencies of light are absorbed, and others reflected.

That light reaches the eye and an electro-chemical process transforms that light (photonic energy) into a signal that's received by the brain. The brain interprets that signal as color.

Color Science is the study of this phenomenon - how the eye and mind of humans (and other animals) perceive this thing we call color.

What does that have to do with cameras?

Since the dawn of time, humans have mixed materials together to produce certain colors. Whether it's the burnt umber of iron oxides or the Indian yellow of the urine of cows fed only mango leaves or the carmine red of the cochineal insect ground down into a powder - humans have created pigments from the materials of the world around them & mixed them together to create all of the colors that we see in works of art from cave paintings to ancient pottery to the paintings of the great masters.

In the 20th century, color was standardized. We lost the origin stories of the colors around us and reduced them to numbers. The RGB monitor you're likely reading this on combines red, green and blue light to produce all of the colors you see on that device. (These devices do not - yet - produce all of the colors the human eye can see.)

Cameras are the tool that we use to photograph the world and turn it into images that we see in print or on devices.

Modern digital cameras are transducers. This is a fancy way of saying that they take one kind of energy (light) and turn it into another kind of energy (electrical). Some light enters a camera, the camera turns the light energy into electricity, measures the voltage, and turns that into a number. Or as I like to say, "it counts photons." (This is an inaccurate, but useful joke.)

If you have a 20 megapixel camera, there are 20 million photosites, and each photosite "counts" the photons that reach it to turn that into a number.

How Cameras "See" Color

These photosites can only count - they produce a single number. In order to turn this into color information, we need 3 numbers - Red, Green and Blue.

In order to "See" color, camera manufacturers put a Color Filter Array (CFA) atop the photosites, each corresponding to the three "primary" colors - red, green and blue.

Bayer Filter Array

A photosite under a "red" filter will only "see" red and can therefore only "count" red photons. This produces a mosaic of red, green and blue pixels that is stored in a RAW file.

In order to turn this into a full color image, the RAW file goes through a process known as demosaicing - taking the mosaic of red, green and blue pixels and using data from neighboring pixels to construct a full RGB color at each photosite.
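Here's a minimal sketch of that idea: simulate a Bayer capture from a full-color image, then rebuild each missing channel by averaging the nearby photosites that did measure it. Real demosaicing algorithms are far more sophisticated - this is just the simplest (bilinear-style) version, with a made-up test image:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer capture: keep one channel per photosite."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic

def naive_demosaic(mosaic):
    """Fill in the missing channels at each photosite with the average of
    the neighboring photosites (3x3 window) that measured that channel."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # where red was measured
    masks[0::2, 1::2, 1] = True   # where green was measured
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True   # where blue was measured
    for c in range(3):
        known = np.where(masks[..., c], mosaic, 0.0)
        count = masks[..., c].astype(float)
        pad_v, pad_c = np.pad(known, 1), np.pad(count, 1)
        sums = sum(pad_v[i:i + h, j:j + w] for i in range(3) for j in range(3))
        hits = sum(pad_c[i:i + h, j:j + w] for i in range(3) for j in range(3))
        out[..., c] = sums / np.maximum(hits, 1)
    return out

# Made-up 4x4 test image: a flat, skin-ish tone.
img = np.tile(np.array([0.86, 0.67, 0.55]), (4, 4, 1))
print(np.round(naive_demosaic(bayer_mosaic(img)), 2))
```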

If you're reading this article, I assume you understand and agree with everything said up until this point. This is where things start to get controversial.

A Definition for Color Science when it Comes to Cameras

Before we proceed any further, we need a common definition for "Color Science" as it pertains to cameras and camera manufacturers.

My definition is the entire imaging pipeline - from when the light comes off of the back of a lens to the final RGB image - usually a JPG.

A photon must first pass through IR and UV filters, which cut the amount of infrared and ultraviolet light that gets through (since humans, by definition, can't see it). Then comes the Color Filter Array we just discussed. The photons must then reach the photosite and be converted (transduced) into an electrical impulse. That electrical impulse must then be measured (converted into a number) and stored in a RAW file. Finally, that RAW file goes through some sort of RAW conversion process to be turned into an RGB image.

That whole process is what I define as "color science" when I'm talking about cameras. From the photon leaving the rear of the lens to producing an RGB value. I stop at the RGB value because this allows us to mathematically compare cameras. If I went beyond the RGB value to monitor and printer calibration and the ambient light in the room - that's too many variables the camera manufacturer can't account for.

Similarly I'm not including the lens in this definition because it's another variable and we have enough things to worry about.

So for the purposes of this discussion a camera's "color science" stops at the RGB values produced. (I'm going to assume the sRGB color space throughout this article - for those of you who care about such things, but everything should apply to other color spaces.)

Do Different Cameras "See" Color Differently?

Whenever someone says they like one camera company's colors, or dislike another company's colors, people tend to say "Just shoot RAW and it doesn't matter." The oft-cited refrain is "RAW is RAW" - meaning that there is little to no difference between the sensors of different camera manufacturers, at least when it comes to interpreting colors.

This is at the heart of the color science debate.

Do different camera manufacturers produce sensors that "see" color differently? And even if they do, can we just eliminate those differences by "calibrating" the RAW file using something like an X-Rite ColorChecker?

Canon 5D vs 5Dmk2 Quantum Efficiency

Check out the image above. I've seen it around the internet for a while now and have managed to track down the source to a 2011 paper about using Bayer sensor cameras for photographing the sun's corona. It measures the "quantum efficiency" of the Canon 5D vs the Canon 5D MarkII. Quantum efficiency is a fancy way of saying "how many photons get counted" with "1" being "100%."

If I'm reading this right, it says that the Canon 5D MarkII is more "efficient" at counting photons. Light that comes in at a wavelength of 5500 Ångströms is measured by the green photosites at 0.22 in the Canon 5D, while in the Canon 5D MarkII it's measured at 0.3. Better quantum efficiency.
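Taking those two chart readings at face value, that works out to roughly half a stop of extra green-channel sensitivity:

\[
\log_2\!\left(\frac{0.30}{0.22}\right) \approx 0.45\ \text{stops}
\]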

What's interesting about this graph is that the improvement in quantum efficiency moves the color response to the left. Both the peaks and the crossover points are shifted to the left. And this is just two different camera models from the same manufacturer.

Note the second red "hump" - I'm going to bring it up later.

[Image]

Cameras See Colors Differently - Canon vs Nikon Edition

I wanted to test this some more - could I quantify the differences between cameras? I'm a long-time Nikon shooter and I'm used to Nikon's colors.

I purchased - for the purpose of testing - Nikon and Canon cameras. Specifically, the Nikon D700 and the original Canon 5D. Both of them are excellent cameras - but they approach color in very different ways, at least to my eyes.

I also own and have tested cameras from Fuji, Olympus and Sony. (Sorry Panasonic, I tried, but the camera I got was a dud - not your fault, I just bought very old used cameras for the test.)

I then purchased Wratten filters from Tiffen. The color response of each is shown below.

Tiffen Wratten Filters

Specifically, I purchased a Red #29, Green #58 and Blue #47, which are intended for color separation. You may be familiar with Wratten filter numbers from Photoshop - the #85 and #81 warming filters and the #80 and #82 cooling filters are mimicked in Photoshop's "Photo Filter" menu.

Photoshop Photo Filters

My idea is to use these filters to measure the amount of color separation going on in the color filter array.

If I put a red filter on, I expect to see some green response since red and green overlap in the way the human eye responds to color - but how much? And how strongly does blue respond?

And if I put a blue filter on, how much do the red photosites respond?

Here are the results.

Note: There are twice as many green filters in a Bayer filter array as red or blue - that's why green is shown twice in these graphs.

Canon 5D - Red Filter (#29 Wratten)

Red Peak: +2.5 EV; Blue Peak: -4.8 EV; Difference: 7.3 EV

Canon 5D Red

Nikon D700 - Red Filter (#29 Wratten)

Red Peak: +2.3 EV; Blue Peak: -6.3 EV; Difference: 8.6 EV

Note: The histograms in this image are more spread out because of some vignetting in the lens. The numbers should work out the same.

Nikon D700 Red

Canon 5D - Blue Filter (#47 Wratten)

Blue Peak: +2.3 EV; Red Peak: -6.0 EV; Difference: 8.3 EV

Canon 5D Blue

Nikon D700 - Blue Filter (#47 Wratten)

Blue Peak: +2.2 EV; Red Peak: -2.1 EV; Difference: 4.3 EV!

Nikon D700 Blue

The result is astonishing.

Canon 5D vs Nikon D700 Wratten Filter Chart
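Since each EV is a factor of two, those peak-to-peak differences translate into very different linear separation ratios under the blue filter:

\[
2^{8.3} \approx 315\ \text{(Canon 5D)} \qquad\qquad 2^{4.3} \approx 20\ \text{(Nikon D700)}
\]

In other words, with the blue separation filter in place, the Canon's red channel sits roughly 300 times below its blue channel, while the Nikon's sits only about 20 times below it - far more "blue" light is making it into the Nikon's "red" photosites.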

How could the Nikon pick up so much more "red" energy through the "blue" filter?

Strong vs Weak Color Filter Arrays 

In addition to the shift we've already established from a change in "quantum efficiency", some sensors have a red filter that intrudes far into the blue spectrum - much like the human eye.

Human Eye

The red cones in the human eye are actually a little bit sensitive to blue. When we see violet, it triggers the red and blue cones, but not the green cones. This is why this color is so rare in nature. It's also why "red + blue" make purple even though they're on opposite ends of the rainbow - purple can be produced by triggering the red and blue cones and not the green ones.

If you scroll back up to the Canon quantum efficiency chart, you'll see there's a secondary red hump in the green region. Other cameras have a different red frequency response that extends further into the blue region.

Below is the Quantum Efficiency of the Kodak sensor that was used in the Leica M8 camera. As you can see, the red filter extends well into the blue region. That is - when looking at "blue" light, the red photosites also receive energy.

Leica M8 Quantum Efficiency

It may be that the Nikon CFA is tuned in the same way - where the red filter is designed to pick up some energy in the "blue" range. Meanwhile the blue filter is shifted to the left (responds less to red light than the Canon).

I want to be clear here - the Leica M8 famously had a weak IR filter, and a red CFA filter that extends into the blue region isn't the same thing as a weak IR filter. It also doesn't mean that Nikon's colors are necessarily like Leica's colors - this is just one data point in a much more complicated formula.

All of these things interact with each other and must be accounted for in the RAW conversion.

Side note: This is also likely why I see very different Kelvin values when manually setting white balance in camera to a grey card - balancing the three values means different things in different cameras.
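For what it's worth, here's the basic arithmetic behind "balancing the three values" off a grey card - a sketch, not any camera's actual firmware, and the sample channel averages are made up:

```python
def grey_card_wb_gains(r_avg, g_avg, b_avg):
    """Per-channel gains that make a grey-card patch come out neutral.

    The common convention normalizes to green, so green's gain is 1.0 and
    red/blue are scaled to match it. How a camera then maps these gains to
    a Kelvin/tint pair is up to the manufacturer - which is one reason the
    same grey card can read as different Kelvin values on different sensors."""
    return (g_avg / r_avg, 1.0, g_avg / b_avg)

# Assumed raw channel averages measured off a grey card:
print(grey_card_wb_gains(r_avg=0.21, g_avg=0.40, b_avg=0.27))
# -> roughly (1.90, 1.0, 1.48): boost red ~1.9x and blue ~1.5x
```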

Fix it in Post

Hopefully I've been able to establish that camera sensors are not all created "equal" - I'm not saying one is better or worse than another, they're just different. What remains then is the RAW conversion software.

If the statement "RAW is RAW" is to hold true, then the RAW conversion software must be able to compensate for any differences in how sensors see colors.

Whether that software is in camera (the in-camera JPG engine) or third-party software like Adobe Lightroom, Capture One, RawTherapee or something else - that program is what's responsible for creating an RGB image.

An idea persists that if we calibrate different cameras, we can get them to produce the same results.

I've put this to the test too - let's see the results.

Sidebar - Cameras Have Opinions

Cameras have opinions on colors. The "Standard" picture profile from a Canon camera will be different from a Nikon camera and they will both be different from a Sony or Olympus or Fuji camera.

It should be non-controversial to say that cameras have different "opinions" on colors when it comes to in-camera JPGs. Hopefully by now you'll also agree that cameras have opinions on colors when it comes to the sensor & the RAW file as well.

Engineering is all about compromise. As photographers we're well familiar with this concept for lens design - if you optimize for sharpness and reduction of aberrations, then lenses increase in weight and price. Sensor design and RAW processing is no different - engineers are forced to make compromises and different camera manufacturers choose different compromises for different reasons. It isn't inherently good or bad - it's just a fact of the engineering process.

Photo Comparisons

I'm primarily a portrait photographer, so I really care about how people look in photos. I have a standard process for taking photos - I set in-camera white balance to a grey card (either an ExpoDisc or a WhiBal) and I set exposure to a different grey card (a Lastolite). See my article on White Balance for how I use each. I tried to get exposure on each camera within about a third of a stop.

All of this is to say - I have a repeatable process when it comes to taking photos and I followed that process here.

(pro tip: look at these images on multiple devices.)

Below are two photos. The Canon 5D is on the left, the Nikon D700 is on the right. Both use the "Standard" color profile from the camera. To my eye, they are very different. The Nikon is very "warm" with richer red/orange hues, while the Canon is more yellow/green with more subdued reds. At the same time, the lips are more in the magenta range.

Canon 5D vs Nikon D700

Below are the Canon and Nikon again - this time edited from RAW using the Adobe Color profile in Lightroom. Theoretically, the colors should be the same in both photos because Adobe profiled each camera model and took that into account when creating the Adobe Color profile.

Again - I find the Nikon to be more orange in tonality, with the Canon balancing that out with more blue/magenta. They are closer than before, but are still visibly different.

Canon 5D vs Nikon D700 Adobe Color

According to Adobe, their color profiles are intended to provide a "unified look and feel regardless of which camera was used."

"All of the Adobe Raw profiles, from Adobe Standard to the six new profiles, were created with the intention of providing a unified look and feel, regardless of which camera was used."- Adobe Blog

But these cameras are old and there are manufacturing differences between camera runs, you say - you can't just rely on Adobe's generic profile for each of these cameras; you need to calibrate the cameras with something like an X-Rite ColorChecker and Adobe's DNG Profile Editor.

The photos below were calibrated using an X-Rite ColorChecker and processed from RAW in Lightroom. The colors are indeed closer - though I can still tell which is which, minor exposure differences aside.

Canon 5D vs Nikon D700 X-Rite Calibrated

All in all - yes, calibration did bring the two photos closer together, but it did not eliminate differences. I repeated this experiment with 7 different models.

The X-Rite Color Checker did reduce differences between cameras, but it was never my preferred starting point if I were to edit the photos.

The photos below were taken with different cameras - all calibrated to an X-Rite Color Checker, all with custom white balance set in camera. As you can see the colors are far from identical.

X-Rite Color Comparison

Final Thoughts, Conclusions

I initially did this test because - like many of you - I'm considering switching camera systems. I'm currently mostly a Fuji photographer, but now that Nikon and Canon have mirrorless options, I wanted to test them against each other. So I went out and bought old (read: affordable) cameras and took a bunch of photos with them.

I even went ahead and bought multiple models from the same camera manufacturers - I have a Nikon D700 and Nikon D600 (which got a higher DxOMark score for "portraits" than the D700). I have a Canon 5D Classic and a Canon 5D Mark 2. I also have a Sony A7, Fuji X-Pro1, Fuji X-Pro2, Olympus EPL-5 and Olympus Pen-F. Some of them are my personal cameras and some I bought for the test & I will be selling off the ones I don't like now that I'm done.

This was bound to be a controversial topic, so I went to great pains to produce numbers wherever I could and make this as repeatable a process as possible. I expected I would find some differences, but didn't know what they would be.

I'm nearing the end of the test, but I will be publishing the full results (all cameras) and all photos so you can pick the camera whose colors best suit your style of shooting. I just need time to complete the tests and compile the results.

The Shirley Project

The Shirley Card was a card used by Kodak to test their film emulsions. If Shirley looked good, then the film was deemed good for skin tones.

Shirley Card

The problem was - Shirley was white. Film emulsions from Kodak weren't good at picking up darker skin tones - and indeed it wasn't until furniture and chocolate manufacturers complained that the rich dark tones of their products weren't coming out in photos that Kodak started to produce film that could differentiate between darker tones.

Sadly, I suspect this led to the "they all look the same" problem when it came to - for example - National Geographic photographing African tribes. Part of my motivation for this project was to see how true this was today.

This is why I chose models of different skin tones for this project.

I'm calling this aspect of my tests The Shirley Project and will be sharing my findings on Instagram @theshirleyproject.

Stay tuned for further analysis.

Shirley Project
