Grayscale

From Wikipedia, the free encyclopedia
Image in which each pixel's intensity expresses only an achromatic value between black and white
For other uses, see Grayscale (disambiguation).

Grayscale image of a parrot

In digital photography, computer-generated imagery, and colorimetry, a grayscale (American English) or greyscale (Commonwealth English) image is one in which the value of each pixel holds no color information and only expresses a shade of gray. Pixel values are typically stored in the range 0 to 255 (black to white).[1]

Grayscale images are black-and-white or gray monochrome, composed exclusively of shades of gray. The contrast ranges from black at the weakest intensity to white at the strongest.[2] Grayscale images are distinct from one-bit bi-tonal black-and-white images, which, in the context of computer imaging, have only two colors: black and white (also called bilevel or binary images). Grayscale images have many shades of gray in between.

Grayscale images can be the result of measuring the intensity of light at each pixel according to a particular weighted combination of frequencies (or wavelengths), and in such cases they are monochromatic proper when only a single frequency (in practice, a narrow band of frequencies) is captured. The frequencies can in principle be from anywhere in the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet).

A colorimetric (or more specifically photometric) grayscale image is an image that has a defined grayscale colorspace, which maps the stored numeric sample values to the achromatic channel of a standard colorspace, which itself is based on measured properties of human vision.

If the original color image has no defined colorspace, or if the grayscale image is not intended to have the same human-perceived achromatic intensity as the color image, then there is no unique mapping from such a color image to a grayscale image.

Numerical representations

The intensity of a pixel is expressed within a given range between a minimum and a maximum, inclusive. This range is represented in an abstract way as a range from 0 (or 0%; total absence, black) to 1 (or 100%; total presence, white), with any fractional values in between. This notation is used in academic papers, but it does not define what "black" or "white" is in terms of colorimetry. Sometimes the scale is reversed, as in printing, where the numeric intensity denotes how much ink is employed in halftoning, with 0% representing the paper white (no ink) and 100% being a solid black (full ink).
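The two conventions above can be sketched as a pair of helper functions; a minimal illustration with assumed names, mapping an 8-bit stored sample to the abstract [0, 1] intensity scale and to the reversed ink-coverage scale used in printing:

```python
# Sketch (assumed helper names): mapping an 8-bit pixel value to the
# abstract 0 (black) .. 1 (white) range, and to a printer's reversed
# ink-coverage scale where 0 means paper white and 1 means full ink.

def to_unit_intensity(value: int, max_value: int = 255) -> float:
    """Abstract intensity: 0 = total absence (black), 1 = total presence (white)."""
    return value / max_value

def to_ink_coverage(value: int, max_value: int = 255) -> float:
    """Reversed printing scale: 0% ink for paper white, 100% ink for solid black."""
    return 1.0 - to_unit_intensity(value, max_value)

print(to_unit_intensity(255))  # 1.0 (white)
print(to_ink_coverage(255))    # 0.0 (no ink)
print(to_ink_coverage(0))      # 1.0 (full ink)
```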

In computing, although grayscale values can be computed with rational numbers, image pixels are usually quantized and stored as unsigned integers, to reduce the required storage and computation. Some early grayscale monitors could display only up to sixteen different shades, which would be stored in binary form using 4 bits.[citation needed] Today, grayscale images intended for visual display are commonly stored with 8 bits per sampled pixel. This pixel depth allows 256 different intensities (i.e., shades of gray) to be recorded, and also simplifies computation, as each pixel sample can be accessed individually as one full byte. However, if these intensities were spaced equally in proportion to the amount of physical light they represent at that pixel (called a linear encoding or scale), the differences between adjacent dark shades could be quite noticeable as banding artifacts, while many of the lighter shades would be "wasted" by encoding many perceptually indistinguishable increments. Therefore, the shades are instead typically spread out evenly on a gamma-compressed nonlinear scale, which better approximates uniform perceptual increments for both dark and light shades, usually making these 256 shades enough to avoid noticeable increments.[3]
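The dark-shade banding of a linear encoding can be made concrete by computing the CIE L* lightness of adjacent 8-bit codes. A rough sketch (using the standard CIE approximation constants 903.3 and 0.008856; not part of this article's text):

```python
# Sketch: CIE 1976 L* lightness of adjacent 8-bit codes under a *linear*
# light encoding. Dark codes are several L* units apart (visible banding),
# while the lightest codes are perceptually almost indistinguishable.

def cie_lstar(y: float) -> float:
    """CIE 1976 lightness L* from relative luminance Y in [0, 1]."""
    return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

dark_step = cie_lstar(2 / 255) - cie_lstar(1 / 255)       # several L* units
light_step = cie_lstar(255 / 255) - cie_lstar(254 / 255)  # small fraction of a unit
print(f"dark step: {dark_step:.2f} L*, light step: {light_step:.2f} L*")
```

The dark step is roughly 3.5 L* units (well above a just-noticeable difference), while the light step is around 0.15, illustrating why a gamma-compressed scale spends the 256 codes more evenly in perceptual terms.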

Technical uses (e.g. in medical imaging or remote sensing) often require more levels, to make full use of the sensor accuracy (typically 10 or 12 bits per sample) and to reduce rounding errors in computations. Sixteen bits per sample (65,536 levels) is often a convenient choice for such uses, as computers manage 16-bit words efficiently. The TIFF and PNG (among other) image file formats support 16-bit grayscale natively, although browsers and many imaging programs tend to ignore the low-order 8 bits of each pixel. Internally, for computation and working storage, image-processing software typically uses integer or floating-point numbers of 16 or 32 bits.

Converting color to grayscale

Examples of conversion from a full-color image to grayscale using Adobe Photoshop's Channel Mixer, compared to the original image and colorimetric conversion to grayscale

Conversion of an arbitrary color image to grayscale is not unique in general; different weightings of the color channels represent the effect of shooting black-and-white film with different-colored photographic filters on the camera.

Colorimetric (perceptual luminance-preserving) conversion to grayscale


A common strategy is to use the principles of photometry or, more broadly, colorimetry to calculate the grayscale values (in the target grayscale colorspace) so as to have the same luminance (technically, relative luminance) as the original color image (according to its colorspace).[4][5] In addition to the same relative luminance, this method also ensures that both images will have the same absolute luminance when displayed, as can be measured by instruments in its SI units of candelas per square meter, in any given area of the image, given equal whitepoints. Luminance itself is defined using a standard model of human vision, so preserving the luminance in the grayscale image also preserves other perceptual lightness measures, such as L* (as in the CIE 1976 L*a*b* color space), which is determined by the linear luminance Y itself (as in the CIE 1931 XYZ color space), referred to here as Ylinear to avoid ambiguity.

To convert a color from a colorspace based on a typical gamma-compressed (nonlinear) RGB color model to a grayscale representation of its luminance, the gamma compression function must first be removed via gamma expansion (linearization) to transform the image to a linear RGB colorspace, so that the appropriate weighted sum can be applied to the linear color components ($R_\mathrm{linear}$, $G_\mathrm{linear}$, $B_\mathrm{linear}$) to calculate the linear luminance $Y_\mathrm{linear}$, which can then be gamma-compressed back again if the grayscale result is also to be encoded and stored in a typical nonlinear colorspace.[6]

For the common sRGB color space, gamma expansion is defined as

$$C_\mathrm{linear} = \begin{cases} \dfrac{C_\mathrm{srgb}}{12.92}, & \text{if } C_\mathrm{srgb} \le 0.04045 \\[1ex] \left(\dfrac{C_\mathrm{srgb} + 0.055}{1.055}\right)^{2.4}, & \text{otherwise} \end{cases}$$

where $C_\mathrm{srgb}$ represents any of the three gamma-compressed sRGB primaries ($R_\mathrm{srgb}$, $G_\mathrm{srgb}$, and $B_\mathrm{srgb}$, each in range [0,1]) and $C_\mathrm{linear}$ is the corresponding linear-intensity value ($R_\mathrm{linear}$, $G_\mathrm{linear}$, and $B_\mathrm{linear}$, also in range [0,1]). Then, linear luminance is calculated as a weighted sum of the three linear-intensity values. The sRGB color space is defined in terms of the CIE 1931 linear luminance $Y_\mathrm{linear}$, which is given by[7]

$$Y_\mathrm{linear} = 0.2126\,R_\mathrm{linear} + 0.7152\,G_\mathrm{linear} + 0.0722\,B_\mathrm{linear}.$$

These three particular coefficients represent the intensity (luminance) perception of typical trichromat humans to light of the precise Rec. 709 additive primary colors (chromaticities) that are used in the definition of sRGB. Human vision is most sensitive to green, which therefore has the greatest coefficient (0.7152), and least sensitive to blue, which has the smallest (0.0722). To encode grayscale intensity in linear RGB, each of the three color components can be set equal to the calculated linear luminance $Y_\mathrm{linear}$ (replacing $R_\mathrm{linear}, G_\mathrm{linear}, B_\mathrm{linear}$ by $Y_\mathrm{linear}, Y_\mathrm{linear}, Y_\mathrm{linear}$ to get this linear grayscale), which then typically needs to be gamma-compressed to get back to a conventional nonlinear representation.[8] For sRGB, each of its three primaries is then set to the same gamma-compressed $Y_\mathrm{srgb}$, given by the inverse of the gamma expansion above as

$$Y_\mathrm{srgb} = \begin{cases} 12.92\,Y_\mathrm{linear}, & \text{if } Y_\mathrm{linear} \le 0.0031308 \\[0.5ex] 1.055\,Y_\mathrm{linear}^{1/2.4} - 0.055, & \text{otherwise} \end{cases}$$

Because the three sRGB components are then equal, indicating that it is actually a gray image (not color), it is only necessary to store these values once, and the result is called the grayscale image. This is how it will normally be stored in sRGB-compatible image formats that support a single-channel grayscale representation, such as JPEG or PNG. Web browsers and other software that recognize sRGB images should produce the same rendering for such a grayscale image as for a "color" sRGB image having the same values in all three color channels.
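The full pipeline above (gamma-expand, take the weighted sum, gamma-compress) can be sketched per pixel as follows; a minimal illustration with assumed function names, taking sRGB components already scaled to [0, 1] (divide 8-bit codes by 255 first):

```python
# Sketch of the colorimetric sRGB -> grayscale conversion described above:
# linearize each sRGB component, take the Rec. 709 weighted sum to get
# linear luminance Y, then gamma-compress Y back to an sRGB gray value.

def srgb_to_linear(c: float) -> float:
    """sRGB gamma expansion (linearization)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(y: float) -> float:
    """Inverse: sRGB gamma compression."""
    return 12.92 * y if y <= 0.0031308 else 1.055 * y ** (1 / 2.4) - 0.055

def srgb_gray(r: float, g: float, b: float) -> float:
    """Luminance-preserving gray value for an sRGB color, all values in [0, 1]."""
    y_linear = (0.2126 * srgb_to_linear(r)
                + 0.7152 * srgb_to_linear(g)
                + 0.0722 * srgb_to_linear(b))
    return linear_to_srgb(y_linear)

print(srgb_gray(0.5, 0.5, 0.5))  # a pure gray maps (essentially) to itself
print(srgb_gray(0.0, 1.0, 0.0))  # saturated green -> a light gray (~0.86)
```

Note that a gray input passes through unchanged, since linearization and compression are exact inverses and the three luminance coefficients sum to 1.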

Luma coding in video systems

Main article: luma (video)

For images in color spaces such as Y′UV and its relatives, which are used in standard color TV and video systems such as PAL, SECAM, and NTSC, a nonlinear luma component (Y′) is calculated directly from gamma-compressed primary intensities as a weighted sum, which, although not a perfect representation of the colorimetric luminance, can be calculated more quickly without the gamma expansion and compression used in photometric/colorimetric calculations. In the Y′UV and Y′IQ models used by PAL and NTSC, the Rec. 601 luma (Y′) component is computed as

$$Y' = 0.299\,R' + 0.587\,G' + 0.114\,B'$$

where the prime distinguishes these nonlinear values from the sRGB nonlinear values (discussed above), which use a somewhat different gamma compression formula, and from the linear RGB components. The ITU-R BT.709 standard used for HDTV developed by the ATSC uses different color coefficients, computing the luma component as

$$Y' = 0.2126\,R' + 0.7152\,G' + 0.0722\,B'.$$

Although these are numerically the same coefficients used in sRGB above, the effect is different because here they are applied directly to gamma-compressed values rather than to the linearized values. The ITU-R BT.2100 standard for HDR television uses yet different coefficients, computing the luma component as

$$Y' = 0.2627\,R' + 0.6780\,G' + 0.0593\,B'.$$

Normally these colorspaces are transformed back to nonlinear R'G'B' before rendering for viewing. To the extent that enough precision remains, they can then be rendered accurately.

But if the luma component Y′ itself is instead used directly as a grayscale representation of the color image, luminance is not preserved: two colors can have the same luma Y′ but different CIE linear luminance Y (and thus different nonlinear Ysrgb as defined above), and therefore appear darker or lighter to a typical human than the original color. Similarly, two colors having the same luminance Y (and thus the same Ysrgb) will in general have different luma by either of the Y′ luma definitions above.[9]
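This distinction can be demonstrated numerically. A rough sketch (repeating the sRGB linearization from the previous section for self-containment): saturated blue and a gray chosen to have the same Rec. 601 luma turn out to have very different linear luminances.

```python
# Sketch: two colors with equal Rec. 601 luma Y' but different CIE linear
# luminance Y, showing that luma does not preserve luminance.

def srgb_to_linear(c: float) -> float:
    """sRGB gamma expansion, used here to compute true linear luminance."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luma_601(rp: float, gp: float, bp: float) -> float:
    """Rec. 601 luma Y' from gamma-compressed R'G'B'."""
    return 0.299 * rp + 0.587 * gp + 0.114 * bp

def rec709_luminance(rp: float, gp: float, bp: float) -> float:
    """CIE linear luminance Y of an sRGB color (linearize, then weight)."""
    return (0.2126 * srgb_to_linear(rp) + 0.7152 * srgb_to_linear(gp)
            + 0.0722 * srgb_to_linear(bp))

blue = (0.0, 0.0, 1.0)
gray = (0.114, 0.114, 0.114)   # chosen so its luma equals blue's (0.114)
print(luma_601(*blue), luma_601(*gray))                  # equal luma
print(rec709_luminance(*blue), rec709_luminance(*gray))  # unequal luminance
```

As luma-based grayscale, both pixels would be rendered identically, even though the blue patch is physically several times more luminous than the gray one.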

Grayscale as single channels of multichannel color images


Color images are often built of several stacked color channels, each representing the value levels of a given channel. For example, RGB images are composed of three independent channels for the red, green, and blue primary color components; CMYK images have four channels for the cyan, magenta, yellow, and black ink plates; etc.

Here is an example of color-channel splitting of a full RGB color image. The column at left shows the isolated color channels in natural colors, while at right are their grayscale equivalents:

Composition of RGB from three grayscale images

The reverse is also possible: building a full-color image from its separate grayscale channels. By manipulating channels, using offsets, rotation, and other alterations, artistic effects can be achieved instead of accurately reproducing the original image.[10]
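The split-and-recombine round trip can be sketched with plain nested lists; a minimal illustration with assumed helper names, using a toy 1×2 image of (R, G, B) tuples:

```python
# Sketch: splitting an RGB image into per-channel grayscale images and
# recombining them. Each channel image is itself a single-channel
# (grayscale) image of the same dimensions.

def split_channels(image):
    """Return three single-channel images, one per RGB channel."""
    return [[[px[c] for px in row] for row in image] for c in range(3)]

def merge_channels(r, g, b):
    """Rebuild a full-color image from three grayscale channel images."""
    return [[(r[i][j], g[i][j], b[i][j]) for j in range(len(r[i]))]
            for i in range(len(r))]

image = [[(255, 0, 0), (0, 128, 255)]]   # one red pixel, one blue-ish pixel
r, g, b = split_channels(image)
assert merge_channels(r, g, b) == image  # faithful round trip
# Passing the channels in a different order, e.g. merge_channels(b, g, r),
# yields an artistic channel-swap effect rather than a reproduction.
```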


References

  1. ^ "Grayscale Range – an overview | ScienceDirect Topics". www.sciencedirect.com. Retrieved 2025-11-06.
  2. ^ Johnson, Stephen (2006). Stephen Johnson on Digital Photography. O'Reilly. ISBN 0-596-52370-X.
  3. ^ Poynton, Charles (2012). Digital Video and HD: Algorithms and Interfaces (2nd ed.). Morgan Kaufmann. pp. 31–35, 65–68, 333, 337. ISBN 978-0-12-391926-7. Retrieved 2022-03-31.
  4. ^ Poynton, Charles A. (2022-03-14). Written at San Jose, Calif. Rogowitz, B. E.; Pappas, T. N. (eds.). Rehabilitation of Gamma (PDF). SPIE/IS&T Conference 3299: Human Vision and Electronic Imaging III; January 26–30, 1998. Bellingham, Wash.: SPIE. doi:10.1117/12.320126. Archived (PDF) from the original on 2023-04-23.
  5. ^ Poynton, Charles A. (2004-02-25). "Constant Luminance". Video Engineering. Archived from the original on 2023-03-16.
  6. ^ Lindbloom, Bruce (2017-04-06). "RGB Working Space Information". Archived from the original on 2023-06-01.
  7. ^ Stokes, Michael; Anderson, Matthew; Chandrasekar, Srinivasan; Motta, Ricardo (1996-11-05). "A Standard Default Color Space for the Internet – sRGB". World Wide Web Consortium – Graphics on the Web. Part 2, matrix in equation 1.8. Archived from the original on 2023-05-24.
  8. ^ Burger, Wilhelm; Burge, Mark J. (2010). Principles of Digital Image Processing: Core Algorithms. Springer Science & Business Media. pp. 110–111. ISBN 978-1-84800-195-4.
  9. ^ Poynton, Charles A. (1997-07-15). "The Magnitude of Nonconstant Luminance Errors" (PDF).
  10. ^ Wu, Tirui; Toet, Alexander (2014-07-07). "Color-to-grayscale conversion through weighted multiresolution channel fusion". Journal of Electronic Imaging. 23 (4) 043004. doi:10.1117/1.JEI.23.4.043004. ISSN 1017-9909.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Grayscale&oldid=1322007924"