Camera Color Spaces Explained – sRGB vs Adobe RGB vs RAW
By: Herb Paynter
Your camera is probably able to capture color images in a variety of different color containers called “spaces.” These camera color spaces collect colors in one of several different-sized light buckets labeled sRGB, AdobeRGB, and RAW.
Each bucket gathers a progressively larger variety of light, much the way Crayola crayons are packaged and sold in increasingly inclusive collections of colors: small, large, and jumbo. Camera color spaces offer photographers a similar variety of different-sized boxes.
Scenes that include both brilliant colors and bright lighting are excellent candidates for capture in the AdobeRGB color space. F/3.5, 1/1000, ISO 400, Lumix G Vario 2.8, 35mm
A debate usually arises in the photo community over which camera color space to choose in the camera’s preferences. Some color spaces capture more hues and saturated colors than others. Pictures captured in one space may include more colors than those captured in another.
Each space is ideally suited for certain purposes, and the question
of which camera color space to choose needs a bit of explanation. In
addition to the capture question, choosing a color space for
post-production editing will depend on the image’s ultimate usage.
Your camera’s color spaces involve not just color data, but also how much parking space the files occupy on the drive. Larger color spaces provide more bit-depth (explained below), which occupies more digital real estate on the memory card. So, the choice of which to use does have practical importance.
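To put a rough, hypothetical number on that real estate, here is some back-of-the-envelope arithmetic for a single uncompressed 24-megapixel image. The pixel count and bit depths below are assumptions for illustration, not figures from any particular camera:

```python
# Back-of-the-envelope storage math for one uncompressed 24-megapixel image.
# The pixel count and bit depths are assumptions for illustration only.
pixels = 24_000_000   # hypothetical 24 MP sensor
channels = 3          # red, green, blue

for bits_per_channel in (8, 16):
    megabytes = pixels * channels * bits_per_channel / 8 / 1_000_000
    print(f"{bits_per_channel} bits per channel: ~{megabytes:.0f} MB uncompressed")

# 8 bits per channel:  ~72 MB uncompressed
# 16 bits per channel: ~144 MB uncompressed
```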
What camera color space to use
There is no singularly perfect color space choice, so let’s examine which is best for specific situations.
Images that do not include highly-saturated color but contain significant detail in the shadow areas will benefit from RAW format capture and high-bit processing. F/10, 1/1600, ISO 800, Lumix G Vario 2.8, 200mm
Unless the sole purpose of a photo is to be displayed as a high-resolution digital image, you might want to convert the file from its original color space to a less demanding one. However, keep in mind that every time a file mutates from a larger color space to a smaller one (RAW to AdobeRGB, or AdobeRGB to sRGB), the image’s color intensity and integrity may diminish in the process. Some imaging applications are less demanding than others.
While copies of digital files remain identical to the original no matter how many times they are copied, a digital file that mutates to a lesser color space will always lose some critical color information. Your camera’s color spaces in general, and device color spaces in particular, are all unique. Each serves a particular purpose.
The extreme dynamic range and saturated skies benefitted from the RAW capture and editing in AdobeRGB. Detail buried in the shadows was recoverable because of the 14-bit capture. F/14, 1/300, ISO 3200, Lumix G Vario 2.8, 12mm
It’s a matter of depth
The difference between camera color spaces boils down to an issue called bit depth. Bit depth is a mathematical description of how many visible distinctions between shades of color can be recognized and reproduced by different devices (a techie term for scanners, cameras, computer monitors, and printing machines). Unfortunately, not all devices can reproduce all colors the same way (which is the primary stumbling block behind all color issues).
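To attach numbers to that idea: every added bit doubles the number of tonal steps a channel can record. The sketch below is plain arithmetic, not tied to any particular device, comparing the bit depths commonly found in JPEG (8-bit) and RAW (12- and 14-bit) files:

```python
# Tonal steps per channel, and total R/G/B combinations, at common bit depths.
for bits in (8, 12, 14):
    levels = 2 ** bits      # distinguishable shades per channel
    combos = levels ** 3    # possible combinations across red, green, blue
    print(f"{bits}-bit: {levels:,} levels per channel, {combos:,} RGB combinations")

# 8-bit:     256 levels per channel,        16,777,216 RGB combinations
# 12-bit:  4,096 levels per channel,    68,719,476,736 RGB combinations
# 14-bit: 16,384 levels per channel, 4,398,046,511,104 RGB combinations
```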
Every device reads and reproduces color using a different process. While this sounds like a fixable problem, there is a sad and unsolvable reality behind it. There are at least three different interpretations of color at play in every capture-display-print cycle.
These colorful seat cushions and deep shadows were captured in RAW format, edited in AdobeRGB, and saved in sRGB for upload to our camera club’s server for display as part of a club field trip slideshow. F/7.1, 1/320, ISO 400, Lumix G Vario 2.8, 19mm
First, cameras capture color by recording intensities of light as electrical signals and interpreting those signals as colors. Each color is assigned a specific number.
Second, those numbers are sent to the computer, where they are translated into instructions that turn on tiny lights (called pixels) on a backlit screen.
And third, those pixel values are sent to a printing machine, which translates them into tiny splatters of colored ink on paper.
It’s a very complicated process that color scientists have tried for
years to make simple. Unfortunately, it just ain’t that simple!
Anyway, during this hair-on-fire digital transition, different
methods are employed that utilize the various color spaces in a way that
transforms the colors from one device to another as accurately as
possible. Sometimes the color translations don’t convey the colors as
accurately as we would like, which is why sometimes the monitor colors
don’t match the printer colors.
Science uses charts like this to plot the characteristics of camera color spaces. These charts are referred to as “theoretical” because they are not themselves visible to the human eye; they represent what each color “bucket” can capture versus what the eye can see.
The ultimate referee
The only comprehensive color space that plots the full scope of what the human eye can see is what the science community calls L*a*b* space (the inverted-horseshoe diagram).
The human eye is the ultimate arbiter in the color wars, and all
device capabilities (camera, display, and printer) are defined by how
they match up to the eye’s master gamut. This is why this strange
horseshoe shape is referred to as the Reference Space. All other
devices, whether camera, display, or printer, can only recognize and
utilize portions of this “reference space,” and they usually disagree
with each other.
Color is a very diverse and dysfunctional family. Each device speaks a different dialect of a similar language. Each produces colors that cannot be faithfully reproduced on other devices. Color is a very messy topic.
Crayola crayon boxes contain varying numbers of colors, just as color spaces collect varying amounts of color. The lightest and darkest color crayons are the same value, but larger boxes contain more colors than smaller ones.
Some devices can express color more completely than others. Unfortunately, no device created by humans can reproduce all the colors that can be seen by humans. Also, the colors captured by one device that fall outside the gamut (Crayola box size) of other devices get clipped, lost, or compressed during the handoff. Those colors never come back home.
This is the tragic truth about digital color reproduction. The trick to color reproduction is retaining as much of the common color as possible during the process. Fortunately, that same human eye (and brain) is very forgiving about accepting the limitations of non-human devices.
Color reproduction is a true application of the law of diminishing returns and the physics of vision. Photographers understand this law quite well.
Very rarely can a camera actually capture all the color and dynamics of an original scene. Moreover, nature’s color gamut extends even further than the colors that the human eye can identify. Any time a digital image gets transposed from one form into another, that transformation is a diminished-value exchange.
As an image is transferred from one device to another, those pixel
values located outside the color gamut of the destination device always
get lost in the translation. The object of color management is to
mitigate color loss and maintain as much of the appearance of the
original as possible, all the way through the reproduction process.
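One way to see that loss in concrete numbers is to push a fully saturated AdobeRGB green through the standard published D65 conversion matrices (linear AdobeRGB to XYZ, then XYZ to linear sRGB) and watch the sRGB coordinates land outside the displayable 0–1 range before they are clipped. This is only an illustrative sketch of the underlying math; a real color-managed conversion also handles gamma encoding and rendering intents.

```python
import numpy as np

# Standard D65 matrices (published values, rounded): linear Adobe RGB (1998) -> XYZ,
# and XYZ -> linear sRGB.
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2973, 0.6274, 0.0753],
                         [0.0270, 0.0707, 0.9911]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

adobe_green = np.array([0.0, 1.0, 0.0])            # fully saturated AdobeRGB green (linear)
srgb_linear = XYZ_TO_SRGB @ (ADOBE_TO_XYZ @ adobe_green)

print(srgb_linear)                 # ~[-0.40, 1.00, -0.04] -> outside the 0..1 sRGB gamut
print(np.clip(srgb_linear, 0, 1))  # [0. 1. 0.] -> clipped; the extra saturation is gone for good
```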
RGB spaces (sRGB, AdobeRGB, ProPhoto RGB)
It all begins with the camera’s color settings that are in place when you capture the scene. All cameras capture light through red, green, and blue filters (RGB color space). There are a number of RGB color spaces to choose from, and each sports a slightly different color gamut.
Each device in the photography chain interprets colors slightly differently, and each responds to the individual color spaces uniquely.
Each color space (sRGB, AdobeRGB, ProPhoto RGB, etc.) provides a
unique collection of color attributes, and each space satisfies specific
display and reproduction requirements.
Gamuts are descriptions of the range of colors that a device can recognize, record, display, or print.
Shooting a vibrant, saturated scene with the camera requires a larger
color space. Using a camera color space with a smaller gamut could
significantly diminish the raw, harsh emotion of the scene. This is why
most photography experts encourage photographers to set their cameras to
capture images in AdobeRGB.
sRGB
Almost all digital cameras are factory-set to capture colors using sRGB as the default color space for a plausible reason: most of the pictures we take never get printed! At best, we view them on computer monitors or social media.
Quite honestly, most of the pictures we capture never make it past the initial glance at the camera’s LCD screen. Capturing those images in a higher-bit color space is a total waste of disk space.
The sRGB color space has remained largely unchanged since it was formalized in the mid-1990s, and its roots reach back even further, to the broadcast standards that compressed video images into a manageable form for transmission. While the format has been updated slightly, the basic intent is the same.
sRGB was developed by HP and Microsoft (and others) to address the color gamut of typical televisions and computer monitors, and the standard was set long ago. The airwaves and Internet browsers live on an sRGB diet. As such, the sRGB color space standardizes the way images are still viewed on monitors and televisions.
Adobe RGB
If the ultimate destination for your picture is a monitor or display-based presence (presentations, the Internet, or television displays), sRGB is probably the best choice for capturing images. However, if you shoot for print on paper, both AdobeRGB 1998 and ProPhoto RGB contain a wider gamut of colors and are thus better suited for preparing images for print.
The brilliant dynamics and saturated colors are always captured best in the deepest color bucket of all – RAW. The degree of adjustment provided by RAW capture and ProPhoto RGB editing is perfect for images like this. F/6.3, 1/800, ISO 400, Lumix G Vario 2.8, 26mm
RAW
Actually, the most ideal bucket for capturing images exceeds the gamuts of all three of these camera color spaces. I’m speaking, of course, of your camera’s ability to capture images in RAW format, which supersedes any defined color space.
RAW files capture color in the highest bit depth possible: up to 14 bits per channel. RAW is not an acronym; it is more of a description. It is a recording of the full color depth and uncompressed dynamic range of the original scene.
Start RAW and strip down from there.
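For readers who process files in Python, here is one hedged sketch of that workflow using the rawpy and imageio libraries. The file names are placeholders and the demosaicing settings are assumptions; the point is simply to develop a wide-gamut, high-bit master first, then derive the smaller sRGB copy from the same RAW data.

```python
# A minimal sketch using the rawpy and imageio libraries; "IMG_0001.RW2" and the
# output file names are placeholders, and default demosaicing settings are assumed.
import rawpy
import imageio

with rawpy.imread("IMG_0001.RW2") as raw:
    # Wide-gamut, high-bit master for editing and print preparation.
    adobe_16bit = raw.postprocess(output_color=rawpy.ColorSpace.Adobe,
                                  output_bps=16,
                                  use_camera_wb=True)
    # Smaller, display-ready copy for the web or a slideshow.
    srgb_8bit = raw.postprocess(output_color=rawpy.ColorSpace.sRGB,
                                output_bps=8,
                                use_camera_wb=True)

imageio.imwrite("master_adobergb_16bit.tiff", adobe_16bit)
imageio.imwrite("web_srgb.jpg", srgb_8bit)
```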
Camera color spaces explained – Conclusion
Congratulations on sticking with this article through all the minutiae. By now, it probably seems like camera color space is more like outer space, but it doesn’t have to remain this technical. Simply remember to capture images in RAW format (perhaps in addition to capturing them as JPG) and then transform the colors down the chain of reproduction as need dictates.
Edit images in the roomier color spaces of ProPhoto RGB or AdobeRGB to retain as much color elbow room as necessary. Transpose images destined for print to AdobeRGB, and reduce those destined for the Internet or slideshows to sRGB. Simple enough!
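As a concrete (and hedged) example of that final reduction, Pillow’s ImageCms module can convert an edited wide-gamut file down to sRGB using the ICC profile embedded in the file. The file names below are placeholders, and the sketch assumes the master was saved with its profile embedded:

```python
# Convert an edited wide-gamut image down to sRGB for web/slideshow use.
# File names are placeholders; requires Pillow built with littleCMS support.
import io
from PIL import Image, ImageCms

im = Image.open("edited_master.tif")

icc_bytes = im.info.get("icc_profile")
if icc_bytes is None:
    raise ValueError("No embedded profile found; cannot convert reliably.")

src_profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
srgb_profile = ImageCms.createProfile("sRGB")

web_copy = ImageCms.profileToProfile(im, src_profile, srgb_profile,
                                     renderingIntent=ImageCms.INTENT_PERCEPTUAL,
                                     outputMode="RGB")
web_copy.save("web_copy_srgb.jpg", quality=90)
```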