Color Image Processing
It is only after years of preparation that the young artist should touch color—not color used descriptively, that is, but as a means of personal expression. Henri Matisse

For a long time I limited myself to one color—as a form of discipline. Pablo Picasso

Preview

The use of color in image processing is motivated by two principal factors. First, color is a powerful descriptor that often simplifies object identification and extraction from a scene. Second, humans can discern thousands of color shades and intensities, compared to only about two dozen shades of gray. This second factor is particularly important in manual (i.e., when performed by humans) image analysis.

Color image processing is divided into two major areas: full-color and pseudocolor processing. In the first category, the images in question typically are acquired with a full-color sensor, such as a color TV camera or color scanner. In the second category, the problem is one of assigning a color to a particular monochrome intensity or range of intensities. Until recently, most digital color image processing was done at the pseudocolor level. However, in the past decade, color sensors and hardware for processing color images have become available at reasonable prices. The result is that full-color image processing techniques are now used in a broad range of applications, including publishing, visualization, and the Internet.

It will become evident in the discussions that follow that some of the gray-scale methods covered in previous chapters are directly applicable to color images. Others require reformulation to be consistent with the properties of the color spaces developed in this chapter. The techniques described here are far from exhaustive; they illustrate the range of methods available for color image processing.
6.1 Color Fundamentals

Although the process followed by the human brain in perceiving and interpreting color is a physiopsychological phenomenon that is not yet fully understood, the physical nature of color can be expressed on a formal basis supported by experimental and theoretical results.

In 1666, Sir Isaac Newton discovered that when a beam of sunlight passes through a glass prism, the emerging beam of light is not white but consists instead of a continuous spectrum of colors ranging from violet at one end to red at the other. As Fig. 6.1 shows, the color spectrum may be divided into six broad regions: violet, blue, green, yellow, orange, and red. When viewed in full color (Fig. 6.2), no color in the spectrum ends abruptly; rather, each color blends smoothly into the next.

Basically, the colors that humans and some other animals perceive in an object are determined by the nature of the light reflected from the object. As illustrated in Fig. 6.2, visible light is composed of a relatively narrow band of frequencies in the electromagnetic spectrum. A body that reflects light that is balanced in all visible wavelengths appears white to the observer. However, a body that favors reflectance in a limited range of the visible spectrum exhibits some shades of color. For example, green objects reflect light with wavelengths primarily in the 500 to 570 nm range while absorbing most of the energy at other wavelengths.

Characterization of light is central to the science of color. If the light is achromatic (void of color), its only attribute is its intensity, or amount. Achromatic light is what viewers see on a black and white television set, and it has been an implicit component of our discussion of image processing thus far. As defined in Chapter 2, and used numerous times since, the term gray level refers to a scalar measure of intensity that ranges from black, to grays, and finally to white.

FIGURE 6.1 Color spectrum seen by passing white light through a prism.
(Courtesy of the General Electric Co., Lamp Business Division.)

FIGURE 6.2 Wavelengths comprising the visible range of the electromagnetic spectrum. (Courtesy of the General Electric Co., Lamp Business Division.)

Chromatic light spans the electromagnetic spectrum from approximately 400 to 700 nm. Three basic quantities are used to describe the quality of a chromatic light source: radiance, luminance, and brightness. Radiance is the total amount of energy that flows from the light source, and it is usually measured in watts (W). Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source. For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero. Finally, brightness is a subjective descriptor that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

As noted in Section 2.1.1, cones are the sensors in the eye responsible for color vision. Detailed experimental evidence has established that the 6 to 7 million cones in the human eye can be divided into three principal sensing categories, corresponding roughly to red, green, and blue. Approximately 65% of all cones are sensitive to red light, 33% are sensitive to green light, and only about 2% are sensitive to blue (but the blue cones are the most sensitive). Figure 6.3 shows average experimental curves detailing the absorption of light by the red, green, and blue cones in the eye. Due to these absorption characteristics of the human eye, colors are seen as variable combinations of the so-called primary colors red (R), green (G), and blue (B).
For the purpose of standardization, the CIE (Commission Internationale de l'Eclairage, the International Commission on Illumination) designated in 1931 the following specific wavelength values for the three primary colors: blue = 435.8 nm, green = 546.1 nm, and red = 700 nm. This standard was set before the detailed experimental curves shown in Fig. 6.3 became available in 1965. Thus, the CIE standards correspond only approximately with experimental data. We note from Figs. 6.2 and 6.3 that no single color may be called red, green, or blue. Also, it is important to keep in mind that having three specific primary color wavelengths for the purpose of standardization does not mean that these three fixed RGB components acting alone can generate all spectrum colors. Use of the word primary has been widely misinterpreted to mean that the three standard primaries, when mixed in various intensity proportions, can produce all visible colors. As we will see shortly, this interpretation is not correct unless the wavelength also is allowed to vary, in which case we would no longer have three fixed, standard primary colors.

FIGURE 6.3 Absorption of light by the red, green, and blue cones in the human eye as a function of wavelength.

The primary colors can be added to produce the secondary colors of light: magenta (red plus blue), cyan (green plus blue), and yellow (red plus green). Mixing the three primaries, or a secondary with its opposite primary color, in the right intensities produces white light. This result is shown in Fig. 6.4(a), which also illustrates the three primary colors and their combinations to produce the secondary colors.

Differentiating between the primary colors of light and the primary colors of pigments or colorants is important. In the latter, a primary color is defined as one that subtracts or absorbs a primary color of light and reflects or transmits the other two.
Therefore, the primary colors of pigments are magenta, cyan, and yellow, and the secondary colors are red, green, and blue. These colors are shown in Fig. 6.4(b). A proper combination of the three pigment primaries, or a secondary with its opposite primary, produces black.

Color television reception is an example of the additive nature of light colors. The interior of many color TV tubes is composed of a large array of triangular dot patterns of electron-sensitive phosphor. When excited, each dot in a triad is capable of producing light in one of the primary colors. The intensity of the red-emitting phosphor dots is modulated by an electron gun inside the tube, which generates pulses corresponding to the "red energy" seen by the TV camera. The green and blue phosphor dots in each triad are modulated in the same manner. The effect, viewed on the television receiver, is that the three primary colors from each phosphor triad are "added" together and received by the color-sensitive cones in the eye as a full-color image. Thirty successive image changes per second in all three colors complete the illusion of a continuous image display on the screen.

FIGURE 6.4 Primary and secondary colors of light and pigments. (Courtesy of the General Electric Co., Lamp Business Division.)

The characteristics generally used to distinguish one color from another are brightness, hue, and saturation. As indicated earlier in this section, brightness embodies the achromatic notion of intensity. Hue is an attribute associated with the dominant wavelength in a mixture of light waves. Hue represents the dominant color as perceived by an observer. Thus, when we call an object red, orange, or yellow, we are specifying its hue. Saturation refers to the relative purity or the amount of white light mixed with a hue. The pure spectrum colors are fully saturated. Colors such as pink (red and white) and lavender (violet and white) are less saturated,
with the degree of saturation being inversely proportional to the amount of white light added.

Hue and saturation taken together are called chromaticity, and, therefore, a color may be characterized by its brightness and chromaticity. The amounts of red, green, and blue needed to form any particular color are called the tristimulus values and are denoted X, Y, and Z, respectively. A color is then specified by its trichromatic coefficients, defined as

x = X/(X + Y + Z)    (6.1-1)

y = Y/(X + Y + Z)    (6.1-2)

and

z = Z/(X + Y + Z).    (6.1-3)

It is noted from these equations that

x + y + z = 1.    (6.1-4)

For any wavelength of light in the visible spectrum, the tristimulus values needed to produce the color corresponding to that wavelength can be obtained directly from curves or tables that have been compiled from extensive experimental results (Poynton [1996]; see also the early references by Walsh [1958] and by Kiver [1965]).

Another approach for specifying colors is to use the CIE chromaticity diagram (Fig. 6.5), which shows color composition as a function of x (red) and y (green). For any value of x and y, the corresponding value of z (blue) is obtained from Eq. (6.1-4) by noting that z = 1 - (x + y). The point marked green in Fig. 6.5, for example, has approximately 62% green and 25% red content. From Eq. (6.1-4), the composition of blue is approximately 13%.

The positions of the various spectrum colors, from violet at 380 nm to red at 780 nm, are indicated around the boundary of the tongue-shaped chromaticity diagram. These are the pure colors shown in the spectrum of Fig. 6.2. Any point not actually on the boundary but within the diagram represents some mixture of spectrum colors. The point of equal energy shown in Fig. 6.5 corresponds to equal fractions of the three primary colors; it represents the CIE standard for white light. Any point located on the boundary of the chromaticity chart is fully saturated.
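The normalization in Eqs. (6.1-1) through (6.1-4) can be sketched in a few lines of Python. The tristimulus values used here are made up for illustration; the point is only that the coefficients always sum to 1, so z can be recovered as 1 - (x + y):

```python
def trichromatic_coefficients(X, Y, Z):
    """Normalize tristimulus values to trichromatic coefficients,
    Eqs. (6.1-1)-(6.1-3): each coefficient is that component's share of X+Y+Z."""
    total = X + Y + Z
    return X / total, Y / total, Z / total

# Hypothetical tristimulus values, chosen only for illustration.
x, y, z = trichromatic_coefficients(0.25, 0.62, 0.13)
# By Eq. (6.1-4), x + y + z = 1, so z = 1 - (x + y).
```

This is the computation used implicitly when reading the chromaticity diagram: given x and y for a point, the blue content follows from Eq. (6.1-4).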
As a point leaves the boundary and approaches the point of equal energy, more white light is added to the color and it becomes less saturated. The saturation at the point of equal energy is zero.

The chromaticity diagram is useful for color mixing because a straight-line segment joining any two points in the diagram defines all the different color variations that can be obtained by combining these two colors additively. Consider, for example, a straight line drawn from the red to the green points shown in Fig. 6.5. If there is more red light than green light, the exact point representing the new color will be on the line segment, but it will be closer to the red point than to the green point. Similarly, a line drawn from the point of equal energy to any point on the boundary of the chart will define all the shades of that particular spectrum color.

(The use of x, y, and z in this context follows notational convention; these should not be confused with the use of x and y to denote spatial coordinates.)

FIGURE 6.5 Chromaticity diagram. (Courtesy of the General Electric Co., Lamp Business Division.)

Extension of this procedure to three colors is straightforward. To determine the range of colors that can be obtained from any three given colors in the chromaticity diagram, we simply draw connecting lines to each of the three color points. The result is a triangle, and any color inside the triangle can be produced by various combinations of the three initial colors. A triangle with vertices at any three fixed colors cannot enclose the entire color region in Fig. 6.5. This observation supports graphically the remark made earlier that not all colors can be obtained with three single, fixed primaries.

The triangle in Figure 6.6 shows a typical range of colors (called the color gamut) produced by RGB monitors.
The irregular region inside the triangle is representative of the color gamut of today's high-quality color printing devices.

FIGURE 6.6 Typical color gamut of color monitors (triangle) and color printing devices (irregular region).

The boundary of the color printing gamut is irregular because color printing is a combination of additive and subtractive color mixing, a process that is much more difficult to control than that of displaying colors on a monitor, which is based on the addition of three highly controllable light primaries.

6.2 Color Models

The purpose of a color model (also called color space or color system) is to facilitate the specification of colors in some standard, generally accepted way. In essence, a color model is a specification of a coordinate system and a subspace within that system where each color is represented by a single point.

FIGURE 6.7 Schematic of the RGB color cube. Points along the main diagonal have gray values, from black at the origin to white at point (1, 1, 1).

Most color models in use today are oriented either toward hardware (such as for color monitors and printers) or toward applications where color manipulation is a goal (such as in the creation of color graphics for animation). In terms of digital image processing, the hardware-oriented models most commonly used in practice are the RGB (red, green, blue) model for color monitors and a broad class of color video cameras; the CMY (cyan, magenta, yellow) and CMYK (cyan, magenta, yellow, black) models for color printing; and the HSI (hue, saturation, intensity) model, which corresponds closely with the way humans describe and interpret color. The HSI model also has the advantage that it decouples the color and gray-scale information in an image, making it suitable for many of the gray-scale techniques developed in this book.
There are numerous color models in use today due to the fact that color science is a broad field that encompasses many areas of application. It is tempting to dwell on some of these models here simply because they are interesting and informative. However, keeping to the task at hand, the models discussed in this chapter are leading models for image processing. Having mastered the material in this chapter, the reader will have no difficulty in understanding additional color models in use today.

6.2.1 The RGB Color Model

In the RGB model, each color appears in its primary spectral components of red, green, and blue. This model is based on a Cartesian coordinate system. The color subspace of interest is the cube shown in Fig. 6.7, in which RGB values are at three corners; cyan, magenta, and yellow are at three other corners; black is at the origin; and white is at the corner farthest from the origin. In this model, the gray scale (points of equal RGB values) extends from black to white along the line joining these two points. The different colors in this model are points on or inside the cube, and are defined by vectors extending from the origin. For convenience, the assumption is that all color values have been normalized so that the cube shown in Fig. 6.7 is the unit cube. That is, all values of R, G, and B are assumed to be in the range [0, 1].

Images represented in the RGB color model consist of three component images, one for each primary color. When fed into an RGB monitor, these three images combine on the phosphor screen to produce a composite color image.

The number of bits used to represent each pixel in RGB space is called the pixel depth. Consider an RGB image in which each of the red, green, and blue images is an 8-bit image.
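The pixel-depth arithmetic for such an image is a one-liner; this sketch just makes the counting explicit:

```python
bits_per_plane = 8      # each of the R, G, B component images is 8-bit
planes = 3              # three component images per RGB pixel
pixel_depth = planes * bits_per_plane            # bits per RGB pixel
total_colors = (2 ** bits_per_plane) ** planes   # (2^8)^3 distinct colors
```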
Under these conditions each RGB color pixel [that is, a triplet of values (R, G, B)] is said to have a depth of 24 bits (3 image planes times the number of bits per plane). The term full-color image is used often to denote a 24-bit RGB color image. The total number of colors in a 24-bit RGB image is (2^8)^3 = 16,777,216. Figure 6.8 shows the 24-bit RGB color cube corresponding to the diagram in Fig. 6.7.

EXAMPLE 6.1: Generating the hidden face planes and a cross section of the RGB color cube.

The cube shown in Fig. 6.8 is a solid, composed of the (2^8)^3 = 16,777,216 colors mentioned in the preceding paragraph. A convenient way to view these colors is to generate color planes (faces or cross sections of the cube). This is accomplished simply by fixing one of the three colors and allowing the other two to vary. For instance, a cross-sectional plane through the center of the cube and parallel to the GB-plane in Figs. 6.8 and 6.7 is the plane (127, G, B), for G, B = 0, 1, ..., 255. Here we used the actual pixel values rather than the mathematically convenient normalized values in the range [0, 1] because the former values are the ones actually used in a computer to generate colors. Figure 6.9(a) shows that an image of the cross-sectional plane is viewed simply by feeding the three individual component images into a color monitor. In the component images, 0 represents black and 255 represents white (note that these are gray-scale images). Finally, Fig. 6.9(b) shows the three hidden surface planes of the cube in Fig. 6.8, generated in the same manner.

It is of interest to note that acquiring a color image is basically the process shown in Fig. 6.9 in reverse. A color image can be acquired by using three filters, sensitive to red, green, and blue, respectively. When we view a color scene with a monochrome camera equipped with one of these filters, the result is a monochrome image whose intensity is proportional to the response of that filter.
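The cross-sectional plane of Example 6.1 can be sketched in plain Python. In practice an image library or NumPy array would be used; the nested-list representation here is only for illustration:

```python
def rgb_cross_section(fixed_red=127, size=256):
    """Cross section (fixed_red, G, B) of the RGB color cube, parallel to
    the GB-plane: a size-by-size grid of 8-bit (R, G, B) triplets, as in
    the plane fed to the monitor in Fig. 6.9(a)."""
    return [[(fixed_red, g, b) for b in range(size)] for g in range(size)]

plane = rgb_cross_section(127)   # the red component is fixed at 127
```

Fixing G or B instead of R yields planes parallel to the RB- or RG-plane, and setting the fixed value to 0 or 255 produces the faces (including the hidden faces) of the cube shown in Fig. 6.9(b).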
FIGURE 6.8 RGB 24-bit color cube.

FIGURE 6.9 (a) Generating the RGB image of the cross-sectional color plane (127, G, B). (b) The three hidden surface planes in the color cube of Fig. 6.8.

Repeating this process with each filter produces three monochrome images that are the RGB component images of the color scene. (In practice, RGB color image sensors usually integrate this process into a single device.) Clearly, displaying these three RGB component images in the form shown in Fig. 6.9(a) would yield an RGB color rendition of the original color scene.

While high-end display cards and monitors provide a reasonable rendition of the colors in a 24-bit RGB image, many systems in use today are limited to 256 colors. Also, there are numerous applications in which it simply makes no sense to use more than a few hundred, and sometimes fewer, colors. A good example of this is provided by the pseudocolor image processing techniques discussed in Section 6.3. Given the variety of systems in current use, it is of considerable interest to have a subset of colors that are likely to be reproduced faithfully, reasonably independently of viewer hardware capabilities. This subset of colors is called the set of safe RGB colors, or the set of all-systems-safe colors. In Internet applications, they are called safe Web colors or safe browser colors.

On the assumption that 256 colors is the minimum number of colors that can be reproduced faithfully by any system in which a desired result is likely to be displayed, it is useful to have an accepted standard notation to refer to these colors. Forty of these 256 colors are known to be processed differently by various operating systems, leaving only 216 colors that are common to most systems.
They are used whenever it is desired that the colors viewed by most people appear the same. Each of the 216 safe colors is formed from three RGB values as before, but e value can only be 0,51, 102, 153. 204, or 255, Thus, RGB triplets of these values give us (6)° = 216 possible values (note that all values are divisible by 3). It is customary to express these values in the hexagonal number system, as shown in Table 6.1. Recall that hex numbers 0. {.2.....9. A, B.C. D, E. F correspond to decimal numbers 0,1,2.....9. 10,11.12,13,14.15. Recall also that (0). = (0000) and (F), = (1111s. Thus, for example, (FF), = (255) ) = (I1ITII1)2 and we see that a grouping of two hex numbers forms an 8-bit byte. Since it takes three numbers to form an RGB color. each safe color is formed from three of the two digit hex numbers in Table 6.1. For example, the purest sed is FFOO00, The values 000000 and FFFFFF represent black and white, respectively. Keep in mind that the same result is obtained by using the more familiar decimal notation. For instance, the brightest red in decimal notation has R = 255 (FF) and G = B = 0. Figure 6.10(a) shows the 216 safe colors, organized in descending RGB val- ues, The square in the top left array has Value FFFFFF (white), the second Aa BBB ceecee pppppp sass TABLE 6.1 Valid values of cach RGB. component in 3 safe color, a b FIGURE 6.10 (a) The 216 safe RGB colors, (b) All the grays, in the 256-color RGB system (grays that are part of the safe color group are shown underlined294 Ghopter 6B Color Image Processing, FIGURE 6.11 The RGB safe-color cube, square to its right has value FEFFCC, the third square has value FPFF99, and so on for the first row. The second row of that same array has values FFCCFF FECCCC, FFCC99, and so on. The final square of that array has value FFO000 (the brightest possible red). The second array to the right of the one just ex- ‘amined starts with value CCFFFF and proceeds in the same manner. 
as do the other remaining four arrays. The final (bottom right) square of the last array has value 000000 (black). It is important to note that not all possible 8-bit gray colors are included in the 216 safe colors. Figure 6.10(b) shows the hex codes for all the possible gray colors in a 256-color RGB system. Some of these values are outside of the safe color set but are represented properly (in terms of their relative intensities) by most display systems. The grays from the safe color group, (KKKKKK)₁₆ for K = 0, 3, 6, 9, C, F, are shown underlined in Fig. 6.10(b).

Figure 6.11 shows the RGB safe-color cube. Unlike the full-color cube in Fig. 6.8, which is solid, the cube in Fig. 6.11 has valid colors only on the surface planes. As shown in Fig. 6.10(a), each plane has a total of 36 colors, so the entire surface of the safe-color cube is covered by 216 different colors, as expected.

6.2.2 The CMY and CMYK Color Models

As indicated in Section 6.1, cyan, magenta, and yellow are the secondary colors of light or, alternatively, the primary colors of pigments. For example, when a surface coated with cyan pigment is illuminated with white light, no red light is reflected from the surface. That is, cyan subtracts red light from reflected white light, which itself is composed of equal amounts of red, green, and blue light.

Most devices that deposit colored pigments on paper, such as color printers and copiers, require CMY data input or perform an RGB to CMY conversion internally. This conversion is performed using the simple operation

[C]   [1]   [R]
[M] = [1] - [G]    (6.2-1)
[Y]   [1]   [B]

where, again, the assumption is that all color values have been normalized to the range [0, 1]. Equation (6.2-1) demonstrates that light reflected from a surface coated with pure cyan does not contain red (that is, C = 1 - R in the equation).
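Equation (6.2-1) amounts to a component-wise subtraction from 1, and the same subtraction inverts it; a minimal sketch for normalized values:

```python
def rgb_to_cmy(r, g, b):
    """Eq. (6.2-1): C = 1 - R, M = 1 - G, Y = 1 - B, for values in [0, 1]."""
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    """Inverse of Eq. (6.2-1): subtract the CMY values from 1 again."""
    return 1.0 - c, 1.0 - m, 1.0 - y
```

For example, pure red (1, 0, 0) maps to CMY (0, 1, 1): a surface that reflects only red absorbs no red (C = 0) but fully absorbs green and blue.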
Equation (6.2-1) also reveals that RGB values can be obtained easily from set of CMY values by subtracting the individual CMY values from 1. As in- dicated earlier. in image processing this color model is used in connection with generating hardcopy output, so the inverse operation from CMY to RGB gen- erally is of little practical interest According to Fig. 6.4. equal amounts of the pigment primaries. cyan, magen- ta,and yellow should produce black. In practice, combining these colors for print- ing produces a muddy-looking black. So, in order to produce true black (which is the predominant color in printing). a fourth color, black, is added, giving rise to the CMYK color model. Thus. when publishers talk about “four-color printing.” they are referring to the three colors of the CMY color model plus black, 6.2.8 The HSI Color Model As we have seen, creating colors in the RGB and CMY models and changing from one model to the other is a straightforward process As noted earlier, these color systems are ideally suited for hardware implementations. In addition, the RGB system matches nicely with the fact that the human eye is strongly per- ceptive to red, green, and blue primaries. Unfortunately, the RGB, CMY, and other similar color models are not well suited for describing colors in terms that are practical for human interpretation. For example, one does not refer to the color of an automobile by giving the percentage of each of the primaries com- posing its color. Furthermore, we do not think of color images as being com- posed of three primary images that combine to form that single image. When humans view a color object, we describe it by its hue, saturation, and brightness. Recall from the discussion in Section 6.1 that hue is a color attribute that describes a pure color (pure yellow, orange, or red), whereas saturation gives a measure of the degree to which a pure color is diluted by white light. 
Brightness is a subjective descriptor that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation. We do know that intensity (gray level) is a most useful descriptor of monochromatic images. This quantity definitely is measurable and easily interpretable. The model we are about to present, called the HSI (hue, saturation, intensity) color model, decouples the intensity component from the color-carrying information (hue and saturation) in a color image. As a result, the HSI model is an ideal tool for developing image processing algorithms based on color descriptions that are natural and intuitive to humans, who, after all, are the developers and users of these algorithms.

We can summarize by saying that RGB is ideal for image color generation (as in image capture by a color camera or image display in a monitor screen), but its use for color description is much more limited. The material that follows provides a very effective way to do this. As discussed in Example 6.1, an RGB color image can be viewed as three monochrome intensity images (representing red, green, and blue), so it should come as no surprise that we should be able to extract intensity from an RGB image. This becomes rather clear if we take the color cube from Fig. 6.7 and stand it on the black (0, 0, 0) vertex, with the white vertex (1, 1, 1) directly above it, as shown in Fig. 6.12(a).

FIGURE 6.12 Conceptual relationships between the RGB and HSI color models.

As noted in connection with Fig. 6.7, the intensity (gray scale) is along the line joining these two vertices. In the arrangement shown in Fig. 6.12, the line (intensity axis) joining the black and white vertices is vertical. Thus, if we wanted to determine the intensity component of any color point in Fig.
6.12, we would simply pass a plane perpendicular to the intensity axis and containing the color point. The intersection of the plane with the intensity axis would give us a point with intensity value in the range [0, 1]. We also note with a little thought that the saturation (purity) of a color increases as a function of distance from the intensity axis. In fact, the saturation of points on the intensity axis is zero, as evidenced by the fact that all points along this axis are gray.

In order to see how hue can be determined also from a given RGB point, consider Fig. 6.12(b), which shows a plane defined by three points (black, white, and cyan). The fact that the black and white points are contained in the plane tells us that the intensity axis also is contained in the plane. Furthermore, we see that all points contained in the plane segment defined by the intensity axis and the boundaries of the cube have the same hue (cyan in this case). We would arrive at the same conclusion by recalling from Section 6.1 that all colors generated by three colors lie in the triangle defined by those colors. If two of those points are black and white and the third is a color point, all points on the triangle would have the same hue, because the black and white components cannot change the hue (of course, the intensity and saturation of points in this triangle would be different). By rotating the shaded plane about the vertical intensity axis, we would obtain different hues. From these concepts we arrive at the conclusion that the hue, saturation, and intensity values required to form the HSI space can be obtained from the RGB color cube. That is, we can convert any RGB point to a corresponding point in the HSI color model by working out the geometrical formulas describing the reasoning outlined in the preceding discussion.

The key point to keep in mind regarding the cube arrangement in Fig.
6.12 and its corresponding HSI color space is that the HSI space is represented by a vertical intensity axis and the locus of color points that lie on planes perpendicular to this axis. As the planes move up and down the intensity axis, the boundaries defined by the intersection of each plane with the faces of the cube have either a triangular or hexagonal shape. This can be visualized much more readily by looking at the cube down its gray-scale axis, as shown in Fig. 6.13(a). In this plane we see that the primary colors are separated by 120°. The secondary colors are 60° from the primaries, which means that the angle between secondaries also is 120°.

FIGURE 6.13 Hue and saturation in the HSI color model. The dot is an arbitrary color point. The angle from the red axis gives the hue, and the length of the vector is the saturation. The intensity of all colors in any of these planes is given by the position of the plane on the vertical intensity axis.

Figure 6.13(b) shows the same hexagonal shape and an arbitrary color point (shown as a dot). The hue of the point is determined by an angle from some reference point. Usually (but not always) an angle of 0° from the red axis designates 0 hue, and the hue increases counterclockwise from there. The saturation (distance from the vertical axis) is the length of the vector from the origin to the point. Note that the origin is defined by the intersection of the color plane with the vertical intensity axis. The important components of the HSI color space are the vertical intensity axis, the length of the vector to a color point, and the angle this vector makes with the red axis. Therefore, it is not unusual to see the HSI planes defined in terms of the hexagon just discussed, a triangle, or even a circle, as Figs. 6.13(c) and (d) show. The shape chosen really does not matter,
since any one of these shapes can be warped into one of the other two by a geometric transformation. Figure 6.14 shows the HSI model based on color triangles and also on circles.

FIGURE 6.14 The HSI color model based on (a) triangular and (b) circular color planes. The triangles and circles are perpendicular to the vertical intensity axis.

Converting colors from RGB to HSI

Given an image in RGB color format, the H component of each RGB pixel is obtained using the equation

H = θ if B ≤ G;  H = 360° - θ if B > G   (6.2-2)

with

θ = cos^{-1} { (1/2)[(R - G) + (R - B)] / [(R - G)^2 + (R - B)(G - B)]^{1/2} }

The saturation component is given by

S = 1 - [3/(R + G + B)] min(R, G, B)   (6.2-3)

Finally, the intensity component is given by

I = (1/3)(R + G + B)   (6.2-4)

It is assumed that the RGB values have been normalized to the range [0, 1] and that angle θ is measured with respect to the red axis of the HSI space, as indicated in Fig. 6.13. Hue can be normalized to the range [0, 1] by dividing by 360° all values resulting from Eq. (6.2-2). The other two HSI components already are in this range if the given RGB values are in the interval [0, 1].

The results in Eqs. (6.2-2) through (6.2-4) can be derived from the geometry shown in Figs. 6.12 and 6.13. The derivation is tedious and would not add significantly to the present discussion. The interested reader can consult the book's references or web site for a proof of these equations, as well as for the following HSI to RGB conversion results.

Converting colors from HSI to RGB

Given values of HSI in the interval [0, 1], we now want to find the corresponding RGB values in the same range. The applicable equations depend on the values of H. There are three sectors of interest, corresponding to the 120° intervals in the separation of primaries (see Fig. 6.13).
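As a minimal sketch of both conversions for a single pixel (the function names are illustrative, and the three-sector formulas are the ones developed in the discussion that follows), all values are assumed normalized to [0, 1], with hue stored as a fraction of 360°:

```python
import math

def rgb_to_hsi(r, g, b):
    # Eqs. (6.2-2)-(6.2-4); r, g, b in [0, 1]
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den else 0.0
    h = theta if b <= g else 360.0 - theta
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0.0 else 1.0 - min(r, g, b) / i
    return h / 360.0, s, i          # hue normalized to [0, 1]

def hsi_to_rgb(h, s, i):
    # Inverse conversion; one formula serves all three 120-degree
    # sectors, with the roles of R, G, and B rotated.
    h *= 360.0
    def sector(hh):
        x = i * (1.0 - s)
        y = i * (1.0 + s * math.cos(math.radians(hh))
                 / math.cos(math.radians(60.0 - hh)))
        return x, y, 3.0 * i - (x + y)
    if h < 120.0:                   # RG sector
        b, r, g = sector(h)
    elif h < 240.0:                 # GB sector
        r, g, b = sector(h - 120.0)
    else:                           # BR sector
        g, b, r = sector(h - 240.0)
    return r, g, b
```

Round-tripping a pixel through both functions recovers it to within floating-point error.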
We begin by multiplying H by 360°, which returns the hue to its original range of [0°, 360°].

RG sector (0° ≤ H < 120°): When H is in this sector, the RGB components are given by the equations

B = I(1 - S)   (6.2-5)

R = I[1 + S cos H / cos(60° - H)]   (6.2-6)

and

G = 3I - (R + B)   (6.2-7)

GB sector (120° ≤ H < 240°): If the given value of H is in this sector, we first subtract 120° from it:

H = H - 120°   (6.2-8)

Then the RGB components are

R = I(1 - S)   (6.2-9)

G = I[1 + S cos H / cos(60° - H)]   (6.2-10)

and

B = 3I - (R + G)   (6.2-11)

BR sector (240° ≤ H ≤ 360°): Finally, if H is in this range, we subtract 240° from it:

H = H - 240°   (6.2-12)

Then the RGB components are

G = I(1 - S)   (6.2-13)

B = I[1 + S cos H / cos(60° - H)]   (6.2-14)

and

R = 3I - (G + B)   (6.2-15)

Uses of these equations for image processing are discussed in several of the following sections.

EXAMPLE 6.2: The HSI values corresponding to the image of the RGB color cube.

Figure 6.15 shows the hue, saturation, and intensity images for the RGB values shown in Fig. 6.8. Figure 6.15(a) is the hue image. Its most distinguishing feature is the discontinuity in value along a 45° line in the front (red) plane of the cube. To understand the reason for this discontinuity, refer to Fig. 6.8, draw a line from the red to the white vertices of the cube, and select a point in the middle of this line. Starting at that point, draw a path to the right, following the cube around until you return to the starting point. The major colors encountered in this path are yellow, green, cyan, blue, magenta, and back to red. According to Fig. 6.13, the values of hue along this path should increase from 0° to 360° (i.e., from the lowest to highest possible values of hue). This is precisely what Fig. 6.15(a) shows, because the lowest value is represented as black and the highest value as white in the gray scale. In fact, the hue image was originally normalized to the range [0, 1] and then scaled to 8 bits; that is, it was converted to the range [0, 255]
for display.

FIGURE 6.15 HSI components of the image in Fig. 6.8: (a) hue, (b) saturation, and (c) intensity images.

The saturation image in Fig. 6.15(b) shows progressively darker values toward the white vertex of the RGB cube, indicating that colors become less and less saturated as they approach white. Finally, every pixel in the intensity image shown in Fig. 6.15(c) is the average of the RGB values at the corresponding pixel in Fig. 6.8.

Manipulating HSI component images

In the following discussion, we take a look at some simple techniques for manipulating HSI component images. This will help develop familiarity with these components and also help deepen our understanding of the HSI color model. Figure 6.16(a) shows an image composed of the primary and secondary RGB colors. Figures 6.16(b) through (d) show the H, S, and I components of this image. These images were generated using Eqs. (6.2-2) through (6.2-4). Recall from the discussion earlier in this section that the gray-level values in Fig. 6.16(b) correspond to angles; thus, for example, because red corresponds to 0°, the red region in Fig. 6.16(a) mapped to a black region in the hue image. Similarly, the gray levels in Fig. 6.16(c) correspond to saturation (they were scaled to [0, 255] for display), and the gray levels in Fig. 6.16(d) are average intensities.

To change the individual color of any region in the RGB image, we change the values of the corresponding region in the hue image of Fig. 6.16(b). Then we convert the new H image, along with the unchanged S and I images, back to RGB using the procedure explained in connection with Eqs. (6.2-5) through (6.2-15). To change the saturation (purity) of the color in any region, we follow the same procedure, except that we make the changes in the saturation image in HSI space. Similar comments apply to changing the average intensity of any region. Of course, these changes can be made simultaneously.
For example, the image in Fig. 6.17(a) was obtained by changing to 0 the pixels corresponding to the blue and green regions in Fig. 6.16(b). In Fig. 6.17(b) we reduced by half the saturation of the cyan region in component image S from Fig. 6.16(c). In Fig. 6.17(c) we reduced by half the intensity of the central white region in the intensity image of Fig. 6.16(d). The result of converting this modified HSI image back to RGB is shown in Fig. 6.17(d). As expected, we see in this figure that

FIGURE 6.16 (a) RGB image and the components of its corresponding HSI image: (b) hue, (c) saturation, and (d) intensity.

the outer portions of all circles are now red; the purity of the cyan region was diminished, and the central region became gray rather than white. Although these results are simple, they illustrate clearly the power of the HSI color model in allowing independent control over hue, saturation, and intensity, quantities with which we are quite familiar when describing colors.

6.3 Pseudocolor Image Processing

Pseudocolor (also called false color) image processing consists of assigning colors to gray values based on a specified criterion. The term pseudo or false color is used to differentiate the process of assigning colors to monochrome images from the processes associated with true color images, a topic discussed starting in Section 6.4. The principal use of pseudocolor is for human visualization and interpretation of gray-scale events in an image or sequence of images.

FIGURE 6.17 (a)-(c) Modified HSI component images. (d) Resulting RGB image. (See Fig. 6.16 for the original HSI images.)

As noted at the beginning of this chapter, one of the principal motivations for using color is the fact that humans can discern thousands of color shades and intensities, compared to only two dozen or so shades of gray.
6.3.1 Intensity Slicing

The technique of intensity (sometimes called density) slicing and color coding is one of the simplest examples of pseudocolor image processing. If an image is interpreted as a 3-D function (intensity versus spatial coordinates), the method can be viewed as one of placing planes parallel to the coordinate plane of the image; each plane then "slices" the function in the area of intersection. Figure 6.18 shows an example of using a plane at f(x, y) = l_1 to slice the image function into two levels.

If a different color is assigned to each side of the plane shown in Fig. 6.18, any pixel whose gray level is above the plane will be coded with one color, and any pixel below the plane will be coded with the other. Levels that lie on the plane itself may be arbitrarily assigned one of the two colors.

FIGURE 6.18 Geometric interpretation of the intensity-slicing technique.

The result is a two-color image whose relative appearance can be controlled by moving the slicing plane up and down the gray-level axis.

In general, the technique may be summarized as follows. Let [0, L - 1] represent the gray scale (see Section 2.3.4), let level l_0 represent black [f(x, y) = 0], and let level l_{L-1} represent white [f(x, y) = L - 1]. Suppose that P planes perpendicular to the intensity axis are defined at levels l_1, l_2, ..., l_P. Then, assuming that 0 < P < L - 1, the P planes partition the gray scale into P + 1 intervals, V_1, V_2, ..., V_{P+1}. Gray-level to color assignments are made according to the relation

f(x, y) = c_k   if f(x, y) ∈ V_k   (6.3-1)

where c_k is the color associated with the kth intensity interval V_k, defined by the partitioning planes at l = k - 1 and l = k.

The idea of planes is useful primarily for a geometric interpretation of the intensity-slicing technique. Figure 6.19 shows an alternative representation that defines the same mapping as in Fig.
6.18. According to the mapping function shown in Fig. 6.19, any input gray level is assigned one of two colors, depending on whether it is above or below the value of l_1. When more levels are used, the mapping function takes on a staircase form.

FIGURE 6.19 An alternative representation of the intensity-slicing technique.

EXAMPLE 6.3: Intensity slicing.

A simple, but practical, use of intensity slicing is shown in Fig. 6.20. Figure 6.20(a) is a monochrome image of the Picker Thyroid Phantom (a radiation test pattern), and Fig. 6.20(b) is the result of intensity slicing this image into eight colors. Regions that appear of constant intensity in the monochrome image are really quite variable, as shown by the various colors in the sliced image. The left lobe, for instance, is a dull gray in the monochrome image, and picking out variations in intensity is difficult. By contrast, the color image clearly shows eight different regions of constant intensity, one for each of the colors used.

FIGURE 6.20 (a) Monochrome image of the Picker Thyroid Phantom. (b) Result of density slicing into eight colors. (Courtesy of Dr. J. L. Blankenship, Instrumentation and Controls Division, Oak Ridge National Laboratory.)

In the preceding simple example, the gray scale was divided into intervals, and a different color was assigned to each region, without regard for the meaning of the gray levels in the image. Interest in that case was simply to view the different gray levels constituting the image. Intensity slicing assumes a much more meaningful and useful role when subdivision of the gray scale is based on physical characteristics of the image. For instance, Fig. 6.21(a) shows an X-ray image of a weld (the horizontal dark region) containing several cracks and porosities (the bright,

FIGURE 6.21 (a) Monochrome X-ray image of a weld. (b) Result of color coding. (Original image courtesy of X-TEK Systems, Ltd.)
white streaks running horizontally through the middle of the image). It is known that when there is a porosity or crack in a weld, the full strength of the X-rays going through the object saturates the imaging sensor on the other side of the object (see Section 2.3). Thus, gray levels of value 255 in an 8-bit image coming from such a system automatically imply a problem with the weld. If a human were to be the ultimate judge of the analysis, and manual processes were employed to inspect welds (still a common procedure today), a simple color coding that assigns one color to level 255 and another to all other gray levels would simplify the inspector's job considerably. Figure 6.21(b) shows the result. No explanation is required to arrive at the conclusion that human error rates would be lower if images were displayed in the form of Fig. 6.21(b), instead of the form shown in Fig. 6.21(a). In other words, if the exact values of gray levels one is looking for are known, intensity slicing is a simple but powerful aid in visualization, especially if numerous images are involved. The following is a significantly more complex example.

Measurement of rainfall levels, especially in the tropical regions of the Earth, is of interest in diverse applications dealing with the environment. Accurate measurements using ground-based sensors are difficult and expensive to acquire, and total rainfall figures are even more difficult to obtain because a significant portion of precipitation occurs over the ocean. One approach for obtaining rainfall figures is to use a satellite.
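The gray-level-to-color assignment of Eq. (6.3-1) can be sketched in a few lines of NumPy; `intensity_slice` is an illustrative name, and the thresholds and colors are caller-supplied:

```python
import numpy as np

def intensity_slice(gray, levels, colors):
    # levels: the P partitioning values l_1 < ... < l_P of Eq. (6.3-1)
    # colors: P + 1 RGB triples c_k, one per interval V_k
    out = np.zeros(gray.shape + (3,), dtype=np.uint8)
    idx = np.digitize(gray, levels)      # which interval each pixel falls in
    for k, c in enumerate(colors):
        out[idx == k] = c
    return out
```

With a single level this reproduces the two-color slicing of Fig. 6.18; with seven levels and eight colors it mimics the phantom example above.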
The TRMM (Tropical Rainfall Measuring Mission) satellite utilizes, among others, three sensors specially designed to detect rain: a precipitation radar, a microwave imager, and a visible and infrared scanner (see Sections 1.3 and 2.3 regarding image sensing modalities). The results from the various rain sensors are processed, resulting in estimates of average rainfall over a given time period in the area monitored by the sensors. From these estimates, it is not difficult to generate gray-scale images whose intensity values correspond directly to rainfall, with each pixel representing a physical land area whose size depends on the resolution of the sensors.

EXAMPLE 6.4: Use of color to highlight rainfall levels.

FIGURE 6.22 (a) Gray-scale image in which intensity (in the lighter horizontal band shown) corresponds to average monthly rainfall. (b) Colors assigned to intensity values. (c) Color-coded image. (d) Zoom of the South America region. (Courtesy of NASA.)

Such an intensity image is shown in Fig. 6.22(a), where the area monitored by the satellite is the slightly lighter horizontal band in the middle one-third of the picture (these are the tropical regions). In this particular example, the rainfall values are average monthly values (in inches) over a three-year period. Visual examination of this picture for rainfall patterns is quite difficult, if not impossible. However, suppose that we code gray levels from 0 to 255 using the colors shown in Fig. 6.22(b). Values toward the blues signify low values of rainfall, with the opposite being true for red. Note that the scale tops out at pure red for values of rainfall greater than 20 inches. Figure 6.22(c) shows the result of color coding the gray image with the color map just discussed. The results are much easier to interpret, as shown in this figure and in the zoomed area of Fig. 6.22(d). In addition to providing global coverage,
this type of data allows meteorologists to calibrate ground-based rain monitoring systems with greater precision than ever before.

6.3.2 Gray Level to Color Transformations

Other types of transformations are more general, and thus are capable of achieving a wider range of pseudocolor enhancement results than the simple slicing technique discussed in the preceding section. An approach that is particularly attractive is shown in Fig. 6.23. Basically, the idea underlying this approach is to perform three independent transformations on the gray level of any input pixel. The three results are then fed separately into the red, green, and blue channels of a color television monitor. This method produces a composite image whose color content is modulated by the nature of the transformation functions. Note that these are transformations on the gray-level values of an image and are not functions of position.

The method discussed in the previous section is a special case of the technique just described. There, piecewise linear functions of the gray levels (Fig. 6.19) are used to generate colors. The method discussed in this section, on the other hand, can be based on smooth, nonlinear functions, which, as might be expected, gives the technique considerable flexibility.

FIGURE 6.23 Functional block diagram for pseudocolor image processing. The three transformed images are fed into the corresponding red, green, and blue inputs of an RGB color monitor.

Figure 6.24(a) shows two monochrome images of luggage obtained from an airport X-ray scanning system. The image on the left contains ordinary articles. The image on the right contains the same articles, as well as a block of simulated plastic explosives. The purpose of this example is to illustrate the use of gray level to color transformations to obtain various degrees of enhancement. Figure 6.25 shows the transformation functions used.
These sinusoidal functions contain regions of relatively constant value around the peaks as well as regions that change rapidly near the valleys. Changing the phase and frequency of each sinusoid can emphasize (in color) ranges in the gray scale. For instance, if all three transformations have the same phase and frequency, the output image will be monochrome. A small change in the phase between the three transformations produces little change in pixels whose gray levels correspond to peaks in the sinusoids, especially if the sinusoids have broad profiles (low frequencies). Pixels with gray-level values in the steep section of the sinusoids are assigned a much stronger color content as a result of significant differences between the amplitudes of the three sinusoids caused by the phase displacement between them.

EXAMPLE 6.5: Use of pseudocolor for highlighting explosives contained in luggage.

FIGURE 6.24 Pseudocolor enhancement by using the gray-level to color transformations in Fig. 6.25. (Original image courtesy of Dr. Mike Hurwitz, Westinghouse.)

The image shown in Fig. 6.24(b) was obtained with the transformation functions in Fig. 6.25(a), which shows the gray-level bands corresponding to the explosive, garment bag, and background, respectively. Note that the explosive and background have quite different gray levels, but they were both coded with approximately the same color as a result of the periodicity of the sine waves. The image shown in Fig. 6.24(c) was obtained with the transformation functions in Fig. 6.25(b). In this case the explosives and garment bag intensity bands were mapped by similar transformations and thus received essentially the same color assignments. Note that this mapping allows an observer to "see" through the explosives. The background mappings were about the same as those used for Fig. 6.24(b), producing almost identical color assignments.
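A minimal sketch of this sinusoid-based coding (the function name is illustrative, and the frequencies and phases are free parameters, not values taken from the figure):

```python
import numpy as np

def sinusoid_pseudocolor(gray, freqs, phases):
    # One sinusoidal transformation per R, G, B channel, applied to the
    # normalized gray level; outputs are rescaled to [0, 255].
    t = gray.astype(np.float64) / 255.0
    chans = [0.5 * (1.0 + np.sin(2.0 * np.pi * f * t + p))
             for f, p in zip(freqs, phases)]
    return np.clip(np.stack(chans, axis=-1) * 255.0, 0, 255).astype(np.uint8)
```

As the text observes, identical frequencies and phases in all three channels yield a monochrome result, while a small phase offset colors mainly the steep regions of the sinusoids.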
FIGURE 6.25 Transformation functions used to obtain the images in Fig. 6.24. The gray-level bands corresponding to the explosive, garment bag, and background are marked along the gray-level axis.

FIGURE 6.26 A pseudocolor coding approach used when several monochrome images are available.

The approach shown in Fig. 6.23 is based on a single monochrome image. Often, it is of interest to combine several monochrome images into a single color composite, as shown in Fig. 6.26. A frequent use of this approach (illustrated in Example 6.6) is in multispectral image processing, where different sensors produce individual monochrome images, each in a different spectral band. The types of additional processes shown in Fig. 6.26 can be techniques such as color balancing (see Section 6.5.4), combining images, and selecting the three images for display based on knowledge about the response characteristics of the sensors used to generate the images.

Figures 6.27(a) through (d) show four spectral satellite images of Washington, D.C., including part of the Potomac River. The first three images are in the visible red, green, and blue, and the fourth is in the near infrared (see Table 1.1 and Fig. 1.10). Figure 6.27(e) is the full-color image obtained by combining the first three images into an RGB image. Full-color images of dense areas are difficult to interpret, but one notable feature of this image is the difference in color in various parts of the Potomac River. Figure 6.27(f) is a little more interesting. This image was formed by replacing the red component of Fig. 6.27(e) with the near-infrared image. From Table 1.1, we know that this band is strongly responsive to the biomass components of a scene.
Figure 6.27(f) shows quite clearly the difference between biomass (in red) and the human-made features in the scene, composed primarily of concrete and asphalt, which appear bluish in the image.

EXAMPLE 6.6: Color coding of multispectral images.

FIGURE 6.27 (a)-(d) Images in the visible red, green, and blue, and the near-infrared band in Fig. 1.10 (see Table 1.1). (e) Color composite obtained by combining the first three images into an RGB image. (f) Image obtained in the same manner, but using in the red channel the near-infrared image in (d). (Original multispectral images courtesy of NASA.)

The type of processing just illustrated is quite powerful in helping visualize events of interest in complex images, especially when those events are beyond our normal sensing capabilities. Figure 6.28 is an excellent illustration of this. These are images of the Jupiter moon Io, shown in pseudocolor by combining several of the sensor images from the Galileo spacecraft, some of which are in spectral regions not visible to the eye. However, by understanding the physical and chemical processes likely to affect sensor response, it is possible to combine the sensed images into a meaningful pseudocolor map. One way to combine the sensed image data is by how they show either differences in surface chemical composition or changes in the way the surface reflects sunlight. For example, in the pseudocolor image in Fig. 6.28(b), bright red depicts material newly ejected from an active volcano on Io, and the surrounding yellow materials are older sulfur deposits. This image conveys these characteristics much more readily than would be possible by analyzing the component images individually.
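The channel substitution used for Fig. 6.27(f) amounts to routing a chosen band into each display channel; a hedged sketch (the function name and band names are illustrative):

```python
import numpy as np

def composite(bands, order):
    # Route three named monochrome bands into the R, G, B display channels.
    return np.stack([bands[name] for name in order], axis=-1)
```

With bands stored in a dict, `composite(b, ('red', 'green', 'blue'))` gives a natural-color image as in Fig. 6.27(e), while `composite(b, ('nir', 'green', 'blue'))` replaces the red channel with the near-infrared band as in Fig. 6.27(f).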
6.4 Basics of Full-Color Image Processing

In this section we begin the study of processing techniques applicable to full-color images. Although they are far from being exhaustive, the techniques developed in the sections that follow are illustrative of how full-color images are handled for a variety of image processing tasks. Full-color image processing approaches fall into two major categories. In the first category, we process each component image individually and then form a composite processed color image from the individually processed components. In the second category, we work with color pixels directly. Because full-color images have at least three components, color pixels really are vectors.

FIGURE 6.28 (a) Pseudocolor rendition of the Jupiter moon Io. (b) A close-up. (Courtesy of NASA.)

FIGURE 6.29 Spatial masks for gray-scale and RGB color images.

For example, in the RGB system, each color point can be interpreted as a vector extending from the origin to that point in the RGB coordinate system (see Fig. 6.7). Let c represent an arbitrary vector in RGB color space:

c = [c_R, c_G, c_B]^T = [R, G, B]^T   (6.4-1)

This equation indicates that the components of c are simply the RGB components of a color image at a point. We take into account the fact that the color components are a function of coordinates (x, y) by using the notation

c(x, y) = [c_R(x, y), c_G(x, y), c_B(x, y)]^T = [R(x, y), G(x, y), B(x, y)]^T   (6.4-2)

For an image of size M × N, there are MN such vectors, c(x, y), for x = 0, 1, 2, ..., M - 1 and y = 0, 1, 2, ..., N - 1.

It is important to keep clearly in mind that Eq. (6.4-2) depicts a vector whose components are spatial variables in x and y. This is a frequent source of confusion that can be avoided by focusing on the fact that our interest lies in spatial processes. That is, we are interested in image processing techniques formulated in x and y.
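In array terms, Eq. (6.4-2) is simply an M × N grid of 3-vectors. A short sketch (all names illustrative) also demonstrates the fact, used repeatedly below, that averaging a neighborhood of color vectors is the same as averaging each component separately:

```python
import numpy as np

# c(x, y) as an M x N x 3 array: one RGB vector per pixel
img = np.zeros((4, 4, 3))
img[1, 2] = (0.9, 0.2, 0.1)          # set the vector c(1, 2)

# The mean of a set of vectors is the vector of per-component means,
# so vector-based and per-channel neighborhood averaging agree.
rng = np.random.default_rng(1)
nbhd = rng.random((9, 3))            # a 3 x 3 neighborhood of RGB vectors
vector_mean = nbhd.sum(axis=0) / 9.0
per_channel = np.array([nbhd[:, i].mean() for i in range(3)])
assert np.allclose(vector_mean, per_channel)
```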
The fact that the pixels are now color pixels introduces a factor that, in its easiest formulation, allows us to process a color image by processing each of its component images separately, using standard gray-scale image processing methods. However, the results of individual color component processing are not always equivalent to direct processing in color vector space, in which case we must formulate new approaches.

In order for per-color-component and vector-based processing to be equivalent, two conditions have to be satisfied: First, the process has to be applicable to both vectors and scalars. Second, the operation on each component of a vector must be independent of the other components. As an illustration, Fig. 6.29 shows neighborhood spatial processing of gray-scale and full-color images. Suppose that the process is neighborhood averaging. In Fig. 6.29(a), averaging would be accomplished by summing the gray levels of all the pixels in the neighborhood and dividing by the total number of pixels in the neighborhood. In Fig. 6.29(b), averaging would be done by summing all the vectors in the neighborhood and dividing each component by the total number of vectors in the neighborhood. But each component of the average vector is the average of the pixels in the neighborhood corresponding to that component, which is the same as the result that would be obtained if the averaging were done on a per-color-component basis and then the vector was formed. We show this in more detail in the following sections. We also show methods in which the results of the two approaches are not the same.

6.5 Color Transformations

The techniques described in this section, collectively called color transformations, deal with processing the components of a color image within the context of a single color model,
as opposed to the conversion of those components between color models (like the RGB-to-HSI and HSI-to-RGB conversion transformations of Section 6.2.3).

6.5.1 Formulation

As with the gray-level transformation techniques of Chapter 3, we model color transformations using the expression

g(x, y) = T[f(x, y)]   (6.5-1)

where f(x, y) is a color input image, g(x, y) is the transformed or processed color output image, and T is an operator on f over a spatial neighborhood of (x, y). The principal difference between this equation and Eq. (3.1-1) is in its interpretation. The pixel values here are triplets or quartets (i.e., groups of three or four values) from the color space chosen to represent the images, as illustrated in Fig. 6.29(b).

Analogous to the approach we used to introduce the basic gray-level transformations in Section 3.2, we will restrict attention in this section to color transformations of the form

s_i = T_i(r_1, r_2, ..., r_n),   i = 1, 2, ..., n   (6.5-2)

where, for notational simplicity, r_i and s_i are variables denoting the color components of f(x, y) and g(x, y) at any point (x, y), n is the number of color components, and {T_1, T_2, ..., T_n} is a set of transformation or color mapping functions that operate on r_i to produce s_i. Note that n transformations, T_i, combine to implement the single transformation function, T, in Eq. (6.5-1). The color space chosen to describe the pixels of f and g determines the value of n. If the RGB color space is selected, for example, n = 3, and r_1, r_2, and r_3 denote the red, green, and blue components of the input image, respectively. If the CMYK or HSI color spaces are chosen, n = 4 or n = 3, respectively.

Figure 6.30(a) shows a high-resolution color image of a bowl of strawberries and a cup of coffee that was digitized from a large format (4" × 5") color negative. The second row of the figure contains the components of the initial CMYK scan.
In these images, black represents 0 and white represents 1 in each CMYK color component. Thus, we see that the strawberries are composed of large amounts of magenta and yellow because the images corresponding to these two CMYK components are the brightest. Black is used sparingly and is generally confined to the coffee and shadows within the bowl of strawberries. When the CMYK image is converted to RGB, as shown in the third row of the figure, the strawberries are seen to contain a large amount of red and very little (although some) green and blue. The last row of Fig. 6.30 shows the HSI components of Fig. 6.30(a), computed using Eqs. (6.2-2) through (6.2-4).

As expected, the intensity component is a monochrome rendition of the full-color original. In addition, the strawberries are relatively pure in color; they possess the highest saturation, or least dilution by white light, of any of the hues in the image. Finally, we note some difficulty in interpreting the hue component. The problem is compounded by the fact that (1) there is a discontinuity in the HSI model where 0° and 360° meet, and (2) hue is undefined for a saturation of 0 (i.e., for white, black, and pure grays). The discontinuity of the model is most apparent around the strawberries, which are depicted in gray-level values near both black (0) and white (1). The result is an unexpected mixture of highly contrasting gray levels to represent a single color: red.

Any of the color space components in Fig. 6.30 can be used in conjunction with Eq. (6.5-2). In theory, any transformation can be performed in any color model. In practice, however, some operations are better suited to specific models. For a given transformation, the cost of converting between representations must be factored into the decision regarding the color space in which to implement it. Suppose, for example, that we wish to modify the intensity of the image in Fig. 6.30(a) using

g(x, y) = k f(x, y)   (6.5-3)

where 0 < k < 1.
In the HSI color space, this can be done with the simple transformation

s_3 = k r_3   (6.5-4)

where s_1 = r_1 and s_2 = r_2. Only the HSI intensity component r_3 is modified. In the RGB color space, three components must be transformed:

s_i = k r_i,   i = 1, 2, 3   (6.5-5)

The CMY space requires a similar set of linear transformations:

s_i = k r_i + (1 - k),   i = 1, 2, 3   (6.5-6)

Although the HSI transformation involves the fewest number of operations, the computations required to convert an RGB or CMY(K) image to the HSI space more than offset (in this case) the advantages of the simpler transformation; the conversion calculations are more computationally intense than the intensity transformation itself. Regardless of the color space selected, however, the output is the same. Figure 6.31(b) shows the result of applying any of the transformations in Eqs. (6.5-4) through (6.5-6) to the image of Fig. 6.30(a) using k = 0.7. The mapping functions themselves are depicted graphically in Figs. 6.31(c) through (e).

FIGURE 6.30 A full-color image and its various color-space components. (Original image courtesy of MedData Interactive.)

FIGURE 6.31 Adjusting the intensity of an image using color transformations. (a) Original image. (b) Result of decreasing its intensity (i.e., setting k = 0.7). (c)-(e) The required RGB, CMY, and HSI transformation functions. (Original image courtesy of MedData Interactive.)

It is important to note that each transformation defined in Eqs. (6.5-4) through (6.5-6) depends only on one component within its color space. For example, the red output component, s_1, in Eq. (6.5-5) is independent of the green (r_2) and blue (r_3) inputs; it depends only on the red (r_1) input. Transformations of this type are among the simplest and most used color processing tools and can be carried out on a per-color-component basis, as mentioned at the beginning of our discussion.
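A quick numerical check (a sketch, with k and the test image chosen arbitrarily) that the RGB form of Eq. (6.5-5) and the CMY form of Eq. (6.5-6) produce the same output once converted back to RGB:

```python
import numpy as np

k = 0.7
rgb = np.random.default_rng(0).random((4, 4, 3))   # image in [0, 1]

rgb_scaled = k * rgb                           # Eq. (6.5-5) in RGB
cmy_scaled = k * (1.0 - rgb) + (1.0 - k)       # Eq. (6.5-6) in CMY, where C = 1 - R, etc.
assert np.allclose(1.0 - cmy_scaled, rgb_scaled)   # same result either way
```

Algebraically, 1 - [k(1 - r) + (1 - k)] = kr, which is why the offset term (1 - k) is needed in the CMY version.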
In the remainder of this section we examine several such transformations and discuss a case in which the component transformation functions are dependent on all the color components of the input image and, therefore, cannot be done on an individual color component basis.

6.5.2 Color Complements

The hues directly opposite one another on the color circle of Fig. 6.32 are called complements. Our interest in complements stems from the fact that they are analogous to the gray-scale negatives of Section 3.2. As in the gray-scale case, color complements are useful for enhancing detail that is embedded in dark regions of a color image, particularly when the regions are dominant in size.

EXAMPLE 6.7: Computing color image complements.

Figures 6.33(a) and (c) show the image from Fig. 6.30(a) and its color complement. The RGB transformations used to compute the complement are plotted in Fig. 6.33(b).

FIGURE 6.32 Complements on the color circle.

FIGURE 6.33 Color complement transformations. (a) Original image. (b) Complement transformation functions. (c) Complement of (a) based on the RGB mapping functions. (d) An approximation of the RGB complement using HSI transformations.

They are identical to the gray-scale negative transformation defined in Section 3.2.1. Note that the computed complement is reminiscent of conventional photographic color film negatives. Reds of the original image are replaced by cyans in the complement. When the original image is black, the complement is white, and so on. Each of the hues in the complement image can be predicted from the original image using the color circle of Fig. 6.32. And each of the RGB component transforms involved in the computation of the complement is a function of only the corresponding input color component.

Unlike the intensity transformations of Fig. 6.31, the RGB complement transformation functions used in this example do not have a straightforward HSI space equivalent.
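For RGB values normalized to [0, 1], the complement transformation is simply the negative of each component; a one-line sketch (the function name is illustrative):

```python
import numpy as np

def complement(rgb):
    # RGB complement: each component is replaced by 1 minus itself,
    # mapping red to cyan, black to white, and so on.
    return 1.0 - np.asarray(rgb)
```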
It is left as an exercise for the reader (see Problem 6.18) to show that the saturation component of the complement cannot be computed from the saturation component of the input image alone. Figure 6.33(d) provides an approximation of the complement using the hue, saturation, and intensity transformations given in Fig. 6.33(b). Note that the saturation component of the input image is unaltered; it is responsible for the visual differences between Figs. 6.33(c) and (d).

6.5.3 Color Slicing

Highlighting a specific range of colors in an image is useful for separating objects from their surroundings. The basic idea is either to (1) display the colors of interest so that they stand out from the background or (2) use the region defined by the colors as a mask for further processing. The most straightforward approach is to extend the gray-level slicing techniques of Section 3.2.4. Because a color pixel is an n-dimensional quantity, however, the resulting color transformation functions are more complicated than their gray-scale counterparts in Fig. 3.11. In fact, the required transformations are more complex than the color component transforms considered thus far. This is because all practical color-slicing approaches require each pixel's transformed color components to be a function of all n of the original pixel's color components.

One of the simplest ways to "slice" a color image is to map the colors outside some range of interest to a nonprominent neutral color. If the colors of interest are enclosed by a cube (or hypercube for n > 3) of width W and centered at a prototypical (e.g., average) color with components (a_1, a_2, ..., a_n), the necessary set of transformations is

  s_i = 0.5   if |r_j - a_j| > W/2 for any 1 <= j <= n
  s_i = r_i   otherwise,                i = 1, 2, ..., n.   (6.5-7)

These transformations highlight the colors around the prototype by forcing all other colors to the midpoint of the reference color space (an arbitrarily chosen neutral point).
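A minimal NumPy sketch of the cube version, Eq. (6.5-7), is shown below. The prototype color and width W are the values used later in Example 6.8; the 1 × 2 test image is illustrative.

```python
import numpy as np

def slice_cube(img, proto, W):
    """Eq. (6.5-7): keep a pixel only if every component lies within W/2
    of the prototype; force all other pixels to the neutral value 0.5."""
    outside = np.any(np.abs(img - proto) > W / 2.0, axis=-1)
    out = img.copy()
    out[outside] = 0.5
    return out

img = np.array([[[0.70, 0.15, 0.20],     # close to the prototype red
                 [0.10, 0.80, 0.30]]])   # far from it
proto = np.array([0.6863, 0.1608, 0.1922])
sliced = slice_cube(img, proto, W=0.2549)
# The first pixel is preserved; the second is mapped to mid-gray (0.5).
```

Note that the decision for each output component depends on *all* input components (through the `np.any` over the last axis), which is exactly why slicing cannot be done on a per-color-component basis.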
For the RGB color space, for example, a suitable neutral point is middle gray, or color (0.5, 0.5, 0.5).

If a sphere is used to specify the colors of interest, Eq. (6.5-7) becomes

  s_i = 0.5   if Σ_{j=1}^{n} (r_j - a_j)^2 > R_0^2
  s_i = r_i   otherwise,                i = 1, 2, ..., n.   (6.5-8)

Here, R_0 is the radius of the enclosing sphere (or hypersphere for n > 3) and (a_1, a_2, ..., a_n) are the components of its center (i.e., the prototypical color). Other useful variations of Eqs. (6.5-7) and (6.5-8) include implementing multiple color prototypes and reducing the intensity of the colors outside the region of interest rather than setting them to a neutral constant.

EXAMPLE 6.8: An illustration of color slicing.

Equations (6.5-7) and (6.5-8) can be used to separate the edible part of the strawberries in Fig. 6.30(a) from the background cups, bowl, coffee, and table. Figures 6.34(a) and (b) show the results of applying both transformations. In each case, a prototype red with RGB color coordinates (0.6863, 0.1608, 0.1922) was selected from the most prominent strawberry; W and R_0 were chosen so that the highlighted region would not expand to undesirable portions of the image. The actual values, W = 0.2549 and R_0 = 0.1765, were determined interactively. Note that the sphere-based transformation performed slightly better, in the sense that it includes more of the strawberries' red areas. A sphere of radius 0.1765 does not completely enclose a cube of width 0.2549, but is itself not completely enclosed by the cube.

FIGURE 6.34 Color slicing transformations that detect (a) reds within an RGB cube of width W = 0.2549 centered at (0.6863, 0.1608, 0.1922), and (b) reds within an RGB sphere of radius 0.1765 centered at the same point. Pixels outside the cube and sphere were replaced by color (0.5, 0.5, 0.5).

6.5.4 Tone and Color Corrections

Color transformations can be performed on most desktop computers.
In conjunction with digital cameras, flatbed scanners, and inkjet printers, they turn a personal computer into a digital darkroom, allowing tonal adjustments and color corrections, the mainstays of high-end color reproduction systems, to be performed without the need for traditionally outfitted wet processing (i.e., darkroom) facilities. Although tone and color corrections are useful in other areas of imaging, the focus of the current discussion is on the most common uses: photo enhancement and color reproduction.

The effectiveness of the transformations examined in this section is judged ultimately in print. Since these transformations are developed, refined, and evaluated on monitors, it is necessary to maintain a high degree of color consistency between the monitors used and the eventual output devices. In fact, the colors of the monitor should represent accurately any digitally scanned source images, as well as the final printed output. This is best accomplished with a device-independent color model that relates the color gamuts (see Section 6.1) of the monitors and output devices, as well as any other devices being used, to one another. The success of this approach is a function of the quality of the color profiles used to map each device to the model, and of the model itself. The model of choice for many color management systems (CMS) is the CIE L*a*b* model, also called CIELAB (CIE [1978], Robertson [1977]). The L*a*b* color components are given by the following equations:

  L* = 116 · h(Y/Y_W) - 16                  (6.5-9)

  a* = 500 [ h(X/X_W) - h(Y/Y_W) ]          (6.5-10)

  b* = 200 [ h(Y/Y_W) - h(Z/Z_W) ]          (6.5-11)

where

  h(q) = q^{1/3}             if q > 0.008856
  h(q) = 7.787 q + 16/116    if q <= 0.008856   (6.5-12)

and X_W, Y_W, and Z_W are reference white tristimulus values, typically the white of a perfectly reflecting diffuser under CIE standard D65 illumination (defined by x = 0.3127 and y = 0.3290 in the CIE chromaticity diagram of Fig. 6.5). The L*a*b*
color space is colorimetric (i.e., colors perceived as matching are encoded identically), perceptually uniform (i.e., color differences among various hues are perceived uniformly; see the classic paper by MacAdam [1942]), and device independent. While it is not a directly displayable format (conversion to another color space is required), its gamut encompasses the entire visible spectrum and can represent accurately the colors of any display, print, or input device. Like the HSI system, the L*a*b* system is an excellent decoupler of intensity (represented by lightness L*) and color (represented by a* for red minus green and b* for green minus blue), making it useful in both image manipulation (tone and contrast editing) and image compression applications.*

The principal benefit of calibrated imaging systems is that they allow tonal and color imbalances to be corrected interactively and independently, that is, in two sequential operations. Before color irregularities, like over- and undersaturated colors, are resolved, problems involving the image's tonal range are corrected. The tonal range of an image, also called its key type, refers to its general distribution of color intensities. Most of the information in high-key images is concentrated at high (or light) intensities; the colors of low-key images are located predominantly at low intensities; middle-key images lie in between. As in the monochrome case, it is often desirable to distribute the intensities of a color image equally between the highlights and the shadows. The following examples demonstrate a variety of color transformations for the correction of tonal and color imbalances.

EXAMPLE 6.9: Tonal transformations.

Transformations for modifying image tones normally are selected interactively. The idea is to adjust experimentally the image's brightness and contrast to provide maximum detail over a suitable range of intensities. The colors themselves are not changed.
In the RGB and CMY(K) spaces, this means mapping all three (or four) color components with the same transformation function; in the HSI color space, only the intensity component is modified.

Figure 6.35 shows typical transformations used for correcting three common tonal imbalances: flat, light, and dark images. The S-shaped curve in the first row of the figure is ideal for boosting contrast [see Fig. 3.2(a)]. Its midpoint is anchored so that highlight and shadow areas can be lightened and darkened, respectively. (The inverse of this curve can be used to correct excessive contrast.) The transformations in the second and third rows of the figure correct light and dark images and are reminiscent of the power-law transformations in Fig. 3.6. Although the color components are discrete, as are the actual transformation functions, the transformation functions themselves are displayed and manipulated as continuous quantities, typically constructed from piecewise linear or higher-order (for smoother mappings) polynomials. Note that the keys of the images in Fig. 6.35 are directly observable; they could also be determined using the histograms of the images' color components.

EXAMPLE 6.10: Color balancing.

After the tonal characteristics of an image have been properly established, any color imbalances can be addressed. Although color imbalances can be determined objectively by analyzing, with a color spectrometer, a known color in an image, accurate visual assessments are possible when white areas, where the RGB or CMY(K) components should be equal, are present. As can be seen in Fig. 6.36, skin tones also are excellent subjects for visual color assessments, because humans are highly perceptive of proper skin color. Vivid colors, such as bright red objects, are of little value when it comes to visual color assessment.
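As a sketch (not code from the text), the CIELAB conversion of Eqs. (6.5-9) through (6.5-12) translates directly into code. The text specifies the D65 reference white only by its chromaticity coordinates; the tristimulus values below are derived from them (X_W = x/y, Z_W = (1 - x - y)/y, with Y_W = 1) and are an assumption of this example.

```python
import numpy as np

def h(q):
    """Eq. (6.5-12): the cube-root function with its linear toe."""
    q = np.asarray(q, dtype=float)
    return np.where(q > 0.008856, np.cbrt(q), 7.787 * q + 16.0 / 116.0)

def xyz_to_lab(X, Y, Z, white=(0.9505, 1.0, 1.0890)):
    """Eqs. (6.5-9) through (6.5-11), relative to the reference white."""
    Xw, Yw, Zw = white
    L = 116.0 * h(Y / Yw) - 16.0
    a = 500.0 * (h(X / Xw) - h(Y / Yw))
    b = 200.0 * (h(Y / Yw) - h(Z / Zw))
    return L, a, b

# Sanity check: the reference white itself maps to L* = 100, a* = b* = 0.
L, a, b = xyz_to_lab(0.9505, 1.0, 1.0890)
```

The linear segment of h(q) below 0.008856 avoids the infinite slope of the cube root near zero, which is what makes the mapping numerically well behaved for very dark colors.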
* Studies indicate that the degree to which the luminance (lightness) information is separated from the color information in L*a*b* is greater than in other color models, such as CIELUV, YIQ, YUV, YCC, and XYZ (Kasson and Plouffe [1992]).

FIGURE 6.35 Tonal corrections for flat, light (high-key), and dark (low-key) color images. Adjusting the red, green, and blue components equally does not alter the image hues.

FIGURE 6.36 Color balancing corrections for CMYK color images.

When a color imbalance is noted, there are a variety of ways to correct it. When adjusting the color components of an image, it is important to realize that every action affects the overall color balance of the image. That is, the perception of one color is affected by its surrounding colors. Nevertheless, the color wheel of Fig. 6.32 can be used to predict how one color component will affect others. Based on the color wheel, for example, the proportion of any color can be increased by decreasing the amount of the opposite (or complementary) color in the image. Similarly, it can be increased by raising the proportion of the two immediately adjacent colors or decreasing the percentage of the two colors adjacent to the complement. Suppose, for instance, that there is an abundance of magenta in an RGB image. It can be decreased by (1) removing both red and blue or (2) adding green.

Figure 6.36 shows the transformations used to correct simple CMYK output imbalances. Note that the transformations depicted are the functions required for correcting the images; the inverses of these functions were used to generate the associated color imbalances. Together, the images are analogous to a color ring-around print of a darkroom environment and are useful as a reference tool for identifying color printing problems. Note, for example, that too much red can be due to excessive magenta (per the bottom left image) or too little cyan (as shown in the rightmost image of the second row).
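As a toy illustration of the color-wheel reasoning above (not code from the text), an excess of magenta in an RGB image can be reduced either by removing equal amounts of red and blue or by adding green; both moves shrink the red/blue excess over green by the same amount. The pixel values and correction amount are illustrative.

```python
import numpy as np

def reduce_magenta_remove_rb(rgb, amount):
    """Decrease magenta by removing equal amounts of red and blue."""
    out = rgb.copy()
    out[..., 0] -= amount   # red channel
    out[..., 2] -= amount   # blue channel
    return np.clip(out, 0.0, 1.0)

def reduce_magenta_add_green(rgb, amount):
    """Decrease magenta by adding green."""
    out = rgb.copy()
    out[..., 1] += amount   # green channel
    return np.clip(out, 0.0, 1.0)

pixel = np.array([[0.8, 0.4, 0.8]])   # a magenta-heavy pixel
corrected1 = reduce_magenta_remove_rb(pixel, 0.1)
corrected2 = reduce_magenta_add_green(pixel, 0.1)
# In both cases the magenta excess, (R + B)/2 - G, drops from 0.4 to 0.3,
# though the two corrections differ in overall brightness.
```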
6.5.5 Histogram Processing

Unlike the interactive enhancement approaches of the previous section, the gray-level histogram processing transformations of Section 3.3 can be applied to color images in an automated way. Recall that histogram equalization automatically determines a transformation that seeks to produce an image with a uniform histogram of intensity values. In the case of monochrome images, it was shown (see Fig. 3.17) to be reasonably successful at handling low-, high-, and middle-key images. Since color images are composed of multiple components, however, consideration must be given to adapting the gray-scale technique to more than one component and/or histogram. As might be expected, it is generally unwise to histogram equalize the components of a color image independently. This results in erroneous color. A more logical approach is to spread the color intensities uniformly, leaving the colors themselves (e.g., hues) unchanged. The following example shows that the HSI color space is ideally suited to this type of approach.

EXAMPLE 6.11: Histogram equalization in the HSI color space.

Figure 6.37(a) shows a color image of a caster stand containing cruets and shakers whose intensity component spans the entire (normalized) range of possible values, [0, 1]. As can be seen in the histogram of its intensity component prior to processing [Fig. 6.37(b)], the image contains a large number of dark colors that reduce the median intensity to 0.36. Histogram equalizing the intensity component, without altering the hue and saturation, resulted in the image shown in Fig. 6.37(c). Note that the overall image is significantly brighter and that several moldings and the grain of the wooden table on which the caster is sitting are now visible. Figure 6.37(b) shows the intensity histogram of the new image, as well as the intensity transformation used to equalize the intensity component [see Eq.
(3.3-8)].

FIGURE 6.37 Histogram equalization (followed by saturation adjustment) in the HSI color space.

Although the intensity equalization process did not alter the values of hue and saturation of the image, it did impact the overall color perception. Note, in particular, the loss of vibrancy in the oil and vinegar in the cruets. Figure 6.37(d) shows the result of correcting this partially by increasing the image's saturation component, subsequent to histogram equalization, using the transformation in Fig. 6.37(b). This type of adjustment is common when working with the intensity component in HSI space, because changes in intensity usually affect the relative appearance of colors in an image.

6.6 Smoothing and Sharpening

The next step beyond transforming each pixel of a color image without regard to its neighbors (as in the previous section) is to modify its value based on the characteristics of the surrounding pixels. In this section, the basics of this type of neighborhood processing are illustrated within the context of color image smoothing and sharpening. (Consult the book web site for a brief review of vectors and matrices.)

6.6.1 Color Image Smoothing

With reference to Fig. 6.29(a) and the discussion in Section 3.6, gray-scale image smoothing can be viewed as a spatial filtering operation in which the coefficients of the filtering mask are all 1's. As the mask is slid across the image to be smoothed, each pixel is replaced by the average of the pixels in the neighborhood defined by the mask. As can be seen in Fig. 6.29(b), this concept is easily extended to the processing of full-color images. The principal difference is that instead of scalar gray-level values we must deal with component vectors of the form given in Eq. (6.4-2). Let S_xy denote the set of coordinates defining a neighborhood centered at (x, y) in
an RGB color image. The average of the RGB component vectors in this K-pixel neighborhood is

  c̄(x, y) = (1/K) Σ_{(x,y) ∈ S_xy} c(x, y).   (6.6-1)

It follows from Eq. (6.4-2) and the properties of vector addition that

  c̄(x, y) = [ (1/K) Σ_{(x,y) ∈ S_xy} R(x, y),  (1/K) Σ_{(x,y) ∈ S_xy} G(x, y),  (1/K) Σ_{(x,y) ∈ S_xy} B(x, y) ]ᵀ.   (6.6-2)

We recognize the components of this vector as the scalar images that would be obtained by independently smoothing each plane of the starting RGB image using conventional gray-scale neighborhood processing. Thus, we conclude that smoothing by neighborhood averaging can be carried out on a per-color-plane basis. The result is the same as when the averaging is performed using RGB color vectors.

EXAMPLE 6.12: Color image smoothing by neighborhood averaging.

Consider the color image shown in Fig. 6.38(a). The red, green, and blue planes of this image are depicted in Figs. 6.38(b) through (d). Figures 6.39(a) through (c) show the image's HSI components. In accordance with the discussion in the preceding paragraph, we can smooth the RGB image in Fig. 6.38 using the 5 × 5 gray-level averaging mask of Section 3.6. We simply smooth independently each of the RGB color planes and then combine the processed planes to form a smoothed full-color result. The image so computed is shown in Fig. 6.40(a). Note that it appears as we would expect from the discussion and examples in Section 3.6.

In Section 6.2 it was noted that an important advantage of the HSI color model is that it decouples intensity (closely related to gray scale) and color information. This makes it suitable for many gray-scale processing techniques and suggests that it might be more efficient to smooth only the intensity component of the HSI representation in Fig. 6.39. To illustrate the merits and/or consequences of this approach, we next smooth only the intensity component

FIGURE 6.38 (a) RGB image. (b) Red component image. (c) Green component image. (d) Blue component image.

FIGURE 6.39 HSI components of the RGB color image in Fig. 6.38(a). (a) Hue.
(b) Saturation. (c) Intensity.

FIGURE 6.40 Image smoothing with a 5 × 5 averaging mask. (a) Result of processing each RGB component image and converting to RGB. (b) Result of processing the intensity component of the HSI image and converting to RGB. (c) Difference between the two results.

(leaving the hue and saturation components unmodified) and convert the processed result to an RGB image for display. The smoothed color image is shown in Fig. 6.40(b). Note that it is similar to Fig. 6.40(a), but, as can be seen from the difference image in Fig. 6.40(c), the two are not identical. This is due to the fact that the average of two pixels of differing color is a mixture of the two colors, not either of the original colors. By smoothing only the intensity image, the pixels in Fig. 6.40(b) maintain their original hue and saturation, and thus their original color. Finally, we note that the difference between the smoothed results in this example would increase as the size of the smoothing mask increases.

6.6.2 Color Image Sharpening

In this section we consider image sharpening using the Laplacian (see Section 3.7.2). From vector analysis, we know that the Laplacian of a vector is defined as a vector whose components are equal to the Laplacian of the individual scalar components of the input vector. In the RGB color system, the Laplacian of vector c in Eq. (6.4-2) is

  ∇²[c(x, y)] = [ ∇²R(x, y),  ∇²G(x, y),  ∇²B(x, y) ]ᵀ   (6.6-3)

which, as in the previous section, tells us that we can compute the Laplacian of a full-color image by computing the Laplacian of each component image separately.

EXAMPLE 6.13: Sharpening with the Laplacian.

Figure 6.41(a) was obtained using Eq. (3.7-6) to compute the Laplacians of the RGB component images in Fig.
6.38 and combining them to produce the sharpened full-color result. Figure 6.41(b) shows a similarly sharpened image based on the HSI components in Fig. 6.39. This result was generated by combining the Laplacian of the intensity component with the unchanged hue and saturation components.

FIGURE 6.41 Image sharpening with the Laplacian. (a) Result of processing each RGB channel. (b) Result of processing the intensity component and converting to RGB. (c) Difference between the two results.

The difference between the RGB- and HSI-based results is shown in Fig. 6.41(c); it results from the same factors explained in Example 6.12.

6.7 Color Segmentation

Segmentation is a process that partitions an image into regions. Although segmentation is the topic of Chapter 10, we consider color segmentation briefly here for the sake of continuity. The reader will have no difficulty in following the discussion.

6.7.1 Segmentation in HSI Color Space

If we wish to segment an image based on color, and, in addition, we want to carry out the process on individual planes, it is natural to think first of the HSI space, because color is conveniently represented in the hue image. Typically, saturation is used as a masking image in order to isolate further regions of interest in the hue image. The intensity image is used less frequently for segmentation of color images because it carries no color information. The following example is typical of how segmentation is performed in the HSI system.

EXAMPLE 6.14: Segmentation in HSI space.

Suppose that it is of interest to segment the reddish region in the lower left of the image in Fig. 6.42(a). Although it was generated by pseudocolor methods, this image can be processed (segmented) as a full-color image without loss of generality. Figures 6.42(b) through (d) are its HSI component images. Note by comparing Figs. 6.42(a) and (b) that the region in which we are interested has relatively high values of hue,
indicating that the colors are on the blue-magenta side of red (see Fig. 6.13). Figure 6.42(e) shows a binary mask generated by thresholding the saturation image with a threshold equal to 10% of the maximum value in the saturation image. Any pixel value greater than the threshold was set to 1 (white); all others were set to 0 (black). Figure 6.42(f) is the product of the mask with the hue image, and Fig. 6.42(g) is the histogram of the product image (note that the gray scale is in the range [0, 1]).

FIGURE 6.42 Image segmentation in HSI space. (a) Original. (b) Hue. (c) Saturation. (d) Intensity. (e) Binary saturation mask (black = 0). (f) Product of (b) and (e). (g) Histogram of (f). (h) Segmentation of red components in (a).

We see in the histogram that high values (which are the values of interest) are grouped at the very high end of the gray scale, near 1.0. Thresholding the product image with a threshold value of 0.9 resulted in the binary image shown in Fig. 6.42(h). The spatial location of the white points in this image identifies the points in the original image that have the reddish hue of interest. This was far from a perfect segmentation, because there are points in the original image that we certainly would say have a reddish hue but that were not identified by this segmentation method. However, it can be shown by experimentation that the regions shown in white in Fig. 6.42(h) are about the best this method can do in identifying the reddish components of the original image. The segmentation method discussed in the following section is capable of yielding considerably better results.

6.7.2 Segmentation in RGB Vector Space

Although, as mentioned numerous times in this chapter, working in HSI space is more intuitive, segmentation is one area in which better results generally are obtained by using RGB color vectors. The approach is straightforward.
Suppose that the objective is to segment objects of a specified color range in an RGB image. Given a set of sample color points representative of the colors of interest, we obtain an estimate of the "average" color that we wish to segment. Let this average color be denoted by the RGB vector a. The objective of segmentation is to classify each RGB pixel in a given image as having a color in the specified range or not. In order to perform this comparison, it is necessary to have a measure of similarity. One of the simplest measures is the Euclidean distance. Let z denote an arbitrary point in RGB space. We say that z is similar to a if the distance between them is less than a specified threshold, D_0. The Euclidean distance between z and a is given by

  D(z, a) = ||z - a||
          = [(z - a)ᵀ(z - a)]^{1/2}                                  (6.7-1)
          = [(z_R - a_R)² + (z_G - a_G)² + (z_B - a_B)²]^{1/2}

where the subscripts R, G, and B denote the RGB components of vectors a and z. The locus of points such that D(z, a) <= D_0 is a solid sphere of radius D_0, as illustrated in Fig. 6.43(a). Points contained within or on the surface of the sphere satisfy the specified color criterion; points outside the sphere do not. Coding these two sets of points in the image with, say, black and white produces a binary segmented image.

FIGURE 6.43 Three approaches for enclosing data regions for RGB vector segmentation.

A useful generalization of Eq. (6.7-1) is a distance measure of the form

  D(z, a) = [(z - a)ᵀ C⁻¹ (z - a)]^{1/2}   (6.7-2)

where C is the covariance matrix† of the samples representative of the color we wish to segment. The locus of points such that D(z, a) <= D_0 describes a solid 3-D elliptical body [Fig. 6.43(b)] with the important property that its principal axes are oriented in the direction of maximum data spread. When C = I, the 3 × 3 identity matrix, Eq. (6.7-2) reduces to Eq. (6.7-1). Segmentation is as described in the preceding paragraph.
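Both similarity tests can be sketched in a few lines of NumPy (not code from the text); passing C = None gives the Euclidean case of Eq. (6.7-1), and any sample covariance matrix gives the generalized form of Eq. (6.7-2). The image, mean color, and threshold below are illustrative.

```python
import numpy as np

def segment_by_distance(img, a, D0, C=None):
    """Mark pixels whose distance from the mean color a, per Eq. (6.7-1)
    or (6.7-2), is at most D0.  Works on squared distances to avoid the
    square root."""
    d = img.reshape(-1, 3) - a
    Cinv = np.eye(3) if C is None else np.linalg.inv(C)
    dist2 = np.einsum('ij,jk,ik->i', d, Cinv, d)   # squared distances
    return (dist2 <= D0 ** 2).reshape(img.shape[:2])

img = np.array([[[0.70, 0.16, 0.19],     # reddish pixel near a
                 [0.10, 0.80, 0.30]]])   # greenish pixel far from a
a = np.array([0.6863, 0.1608, 0.1922])
mask = segment_by_distance(img, a, D0=0.2)
# Only the first (reddish) pixel lies within the sphere of radius 0.2.
```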
Because distances are positive and monotonic, we can work with the distance squared instead, thus avoiding root computations. However, implementing Eq. (6.7-1) or (6.7-2) is computationally expensive for images of practical size, even if the square roots are not computed. A compromise is to use a bounding box, as illustrated in Fig. 6.43(c). In this approach, the box is centered on a, and its dimensions along each of the color axes are chosen proportional to the standard deviation of the samples along each axis. Computation of the standard deviations is done only once, using sample color data. Given an arbitrary color point, we segment it by determining whether or not it is on the surface of or inside the box, as with the distance formulations. However, determining whether a color point is inside or outside a box is much simpler computationally than doing so for a spherical or elliptical enclosure. Note that the preceding discussion is a generalization of the method introduced in Section 6.5.3 in connection with color slicing.

EXAMPLE 6.15: Color image segmentation in RGB space.

The rectangular region shown in Fig. 6.44(a) contains samples of reddish colors we wish to segment out of the color image. This is the same problem we considered in Example 6.14 using hue, but here we approach the problem using RGB color vectors. The approach followed was to compute the mean vector a using the color points contained within the rectangle in Fig. 6.44(a), and then to compute the standard deviation of the red, green, and blue values of those samples. A box was centered at a, and its dimensions along each of the RGB axes were selected as 1.25 times the standard deviation of the data along the corresponding axis. For example, let σ_R denote the standard deviation of the red components of the sample points. Then the dimensions of the box along the R-axis extended from (a_R - 1.25σ_R) to (a_R + 1.25σ_R), where a_R denotes the red component of average vector a.
The result of coding each point in the entire color image as white if it was on the surface of or inside the box, and as black otherwise, is shown in Fig. 6.44(b). Note how the segmented region was generalized from the color samples enclosed by the rectangle. In fact, by comparing Figs. 6.44(b) and 6.42(h), we see that segmentation in the RGB vector space yielded results that are much more accurate, in the sense that they correspond much more closely with what we would define as "reddish" points in the original color image.

† Computation of the covariance matrix of a set of vector samples is discussed in detail in Section 11.4.

FIGURE 6.44 Segmentation in RGB space. (a) Original image with colors of interest shown enclosed by a rectangle. (b) Result of segmentation in RGB vector space. Compare with Fig. 6.42(h).

6.7.3 Color Edge Detection

As discussed in Chapter 10, edge detection is an important tool for image segmentation. In this section, we are interested in the issue of computing edges on an individual-image basis versus computing edges directly in color vector space. The details of edge-based segmentation are given in Section 10.1.3.

Edge detection by gradient operators was introduced in Section 3.7.3 in connection with edge enhancement. Unfortunately, the gradient discussed in Section 3.7.3 is not defined for vector quantities. Thus, we know immediately that computing the gradient on individual images and then using the results to form a color image will lead to erroneous results. A simple example will help illustrate the reason why.

Consider the two M × M color images (M odd) in Figs. 6.45(d) and (h), composed of the three component images in Figs. 6.45(a) through (c) and 6.45(e) through (g), respectively. If, for example, we compute the gradient image of each of the component images [see Eq.
(3.7-13)] and add the results to form the two corresponding RGB gradient images, the value of the gradient at point [(M + 1)/2, (M + 1)/2] would be the same in both cases. Intuitively, we would expect the gradient at that point to be stronger for the image in Fig. 6.45(d), because the edges of the R, G, and B images are in the same direction in that image, as opposed to the image in Fig. 6.45(h), in which only two of the edges are in the same direction. Thus we see from this simple example that processing the three individual planes to form a composite gradient image can yield erroneous results. If the problem is one of just detecting edges, then the individual-component approach usually yields acceptable results. If accuracy is an issue, however, then obviously we need a new definition of the gradient applicable to vector quantities. We discuss next a method proposed by Di Zenzo [1986] for doing this.

The problem at hand is to define the gradient (magnitude and direction) of the vector c in Eq. (6.4-2) at any point (x, y). As was just mentioned, the gradient we studied in Section 3.7.3 is applicable to a scalar function f(x, y); it is

FIGURE 6.45 (a)-(c) R, G, and B component images and (d) resulting RGB color image. (e)-(g) R, G, and B component images and (h) resulting RGB color image.

not applicable to vector functions. The following is one of the various ways in which we can extend the concept of a gradient to vector functions. Recall that for a scalar function f(x, y), the gradient is a vector pointing in the direction of maximum rate of change of f at coordinates (x, y).

Let r, g, and b be unit vectors along the R, G, and B axes of RGB color space (Fig. 6.7), and define the vectors

    u = (∂R/∂x) r + (∂G/∂x) g + (∂B/∂x) b     (6.7-3)

and

    v = (∂R/∂y) r + (∂G/∂y) g + (∂B/∂y) b     (6.7-4)

Let the quantities g_xx, g_yy, and g_xy be defined in terms of the dot products of these vectors, as follows:

    g_xx = u · u = (∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)²     (6.7-5)

    g_yy = v · v = (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)²     (6.7-6)

    g_xy = u · v = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y)     (6.7-7)

Keep in mind that R, G, and B, and consequently the g's, are functions of x and y.
Using this notation, it can be shown (Di Zenzo [1986]) that the direction of maximum rate of change of c(x, y) is given by the angle

    θ(x, y) = (1/2) tan⁻¹[ 2g_xy / (g_xx − g_yy) ]     (6.7-8)

and that the value of the rate of change at (x, y), in the direction of θ, is given by

    F(θ) = { (1/2)[ (g_xx + g_yy) + (g_xx − g_yy) cos 2θ + 2g_xy sin 2θ ] }^(1/2)     (6.7-9)

Because tan(α) = tan(α + π), if θ is a solution to Eq. (6.7-8), so is θ + π/2. Furthermore, F(θ) = F(θ + π), so F needs to be computed only for values of θ in the half-open interval [0, π). The fact that Eq. (6.7-8) provides two values 90° apart means that this equation associates with each point (x, y) a pair of orthogonal directions. Along one of those directions F is maximum, and it is minimum along the other. The derivation of these results is rather lengthy, and we would gain little in terms of the fundamental objective of our current discussion by detailing it here. The interested reader should consult the paper by Di Zenzo [1986] for details. The partial derivatives required for implementing Eqs. (6.7-5) through (6.7-7) can be computed using, for example, the Sobel operators discussed in Section 3.7.3.

FIGURE 6.46 (a) RGB image. (b) Gradient computed in RGB color vector space. (c) Gradients computed on a per-image basis and then added. (d) Difference between (b) and (c).

Figure 6.46(b) is the gradient of the image in Fig. 6.46(a), obtained using the vector method just discussed. Figure 6.46(c) shows the image obtained by computing the gradient of each RGB component image and forming a composite gradient image by adding the corresponding values of the three component images at each coordinate (x, y). The edge detail of the vector gradient image is more complete than the detail in the individual-plane gradient image in Fig. 6.46(c); for example, see the detail around the subject's right eye. The image in Fig. 6.46(d) shows the difference between the two gradient images at each point (x, y).
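Eqs. (6.7-3) through (6.7-9) translate directly into array code. The sketch below is ours, not from the text: it substitutes np.gradient (simple finite differences) for the Sobel operators suggested above, and uses arctan2 so that θ lands on the branch that maximizes F:

```python
import numpy as np

def dizenzo_gradient(rgb):
    """Di Zenzo color gradient for an (H, W, 3) RGB array.

    Returns (F, theta): the per-pixel rate of change [Eq. (6.7-9)] and
    the direction of maximum rate of change [Eq. (6.7-8)].
    """
    dy = np.gradient(rgb, axis=0)            # d/dy of the R, G, B planes
    dx = np.gradient(rgb, axis=1)            # d/dx of the R, G, B planes
    gxx = np.sum(dx * dx, axis=-1)           # Eq. (6.7-5)
    gyy = np.sum(dy * dy, axis=-1)           # Eq. (6.7-6)
    gxy = np.sum(dx * dy, axis=-1)           # Eq. (6.7-7)
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)       # Eq. (6.7-8)
    F = np.sqrt(np.maximum(0.0,
        0.5 * (gxx + gyy
               + (gxx - gyy) * np.cos(2.0 * theta)
               + 2.0 * gxy * np.sin(2.0 * theta))))      # Eq. (6.7-9)
    return F, theta
```

The np.maximum guard only absorbs tiny negative values from floating-point roundoff; at the maximizing θ the bracketed quantity is mathematically nonnegative.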
It is important to note that both approaches yielded reasonable results. Whether the extra detail in Fig. 6.46(b) is worth the added computational burden (as opposed to implementation of the Sobel operators, which were used to generate the gradient of the individual planes) can only be determined by the requirements of a given problem. Figure 6.47 shows the three component gradient images, which, when added and scaled, were used to obtain Fig. 6.46(c).

FIGURE 6.47 Component gradient images of the color image in Fig. 6.46. (a) Red component, (b) green component, and (c) blue component. These three images were added and scaled to produce the image in Fig. 6.46(c).

6.8 Noise in Color Images

The noise models discussed in Section 5.2 are applicable to color images. Usually, the noise content of a color image has the same characteristics in each color channel, but it is possible for color channels to be affected differently by noise. One possibility is for the electronics of a particular channel to malfunction. However, different noise levels are more likely to be caused by differences in the relative strength of illumination available to each of the color channels. For example, use of a red (reject) filter in a CCD camera will reduce the strength of illumination available to the red sensor. CCD sensors are noisier at lower levels of illumination, so the resulting red component of an RGB image would tend to be noisier than the other two component images in this situation.

EXAMPLE 6.17: Illustration of the effects of converting noisy RGB images to HSI.

In this example we take a brief look at noise in color images and how noise carries over when converting from one color model to another. Figures 6.48(a) through (c) show the three color planes of an RGB image corrupted by Gaussian noise, and Fig. 6.48(d) is the composite RGB image. Note that fine-grain noise such as this tends to be less visually noticeable in a color image than it is in a monochrome image. Figures 6.49(a) through (c) show the result of converting the RGB image in Fig. 6.48(d) to HSI. Compare these results with the HSI components of the original image (Fig. 6.39) and note how significantly degraded the hue and saturation components of the noisy image are. This is due to the nonlinearity of the cos and min operations in Eqs. (6.2-2) and (6.2-3), respectively. On the other hand, the intensity component in Fig. 6.49(c) is slightly smoother than any of the three noisy RGB component images. This is due to the fact that the intensity image is the average of the RGB images, as indicated in Eq. (6.2-4). (Recall the discussion in Section 3.4.2 regarding the fact that image averaging reduces random noise.)

FIGURE 6.48 (a)-(c) Red, green, and blue component images corrupted by additive Gaussian noise of mean 0 and variance 800. (d) Resulting RGB image. [Compare (d) with Fig. 6.46(a).]

FIGURE 6.49 HSI components of the noisy color image in Fig. 6.48(d). (a) Hue. (b) Saturation. (c) Intensity.

In cases when, say, only one RGB channel is affected by noise, conversion to HSI spreads the noise to all HSI component images. Figure 6.50 shows an example. Figure 6.50(a) shows an RGB image whose green image is corrupted by salt-and-pepper noise, in which the probability of either salt or pepper is 0.08. The HSI component images in Figs. 6.50(b) through (d) show clearly how the noise spread from the green RGB channel to all the HSI images. Of course, this is not unexpected because computation of the HSI components makes use of all RGB components, as shown in Section 6.2.3.

FIGURE 6.50 (a) RGB image with green plane corrupted by salt-and-pepper noise. (b) Hue component of HSI image. (c) Saturation component.
(d) Intensity component.

As is true of the processes we have discussed thus far, filtering of full-color images can be carried out on a per-image basis or directly in color vector space, depending on the process. For example, noise reduction by using an averaging filter is the process discussed in Section 6.6.1, which we know gives the same result in vector space as it does if the component images are processed independently. Other filters, however, cannot be formulated in this manner. Examples include the class of order-statistics filters discussed in Section 5.3.2. For instance, to implement a median filter in color vector space it is necessary to find a scheme for ordering vectors in a way that the median makes sense. While this was a simple process when dealing with scalars, the process is considerably more complex when dealing with vectors. A discussion of vector ordering is beyond the scope of our discussion here, but the book by Plataniotis and Venetsanopoulos [2000] is a good reference on vector ordering and some of the filters based on the ordering concept.

6.9 Color Image Compression

Since the number of bits required to represent color is typically three to four times greater than the number employed in the representation of gray levels, data compression plays a central role in the storage and transmission of color images. With respect to the RGB, CMY(K), and HSI images of the previous sections, the data that are the object of any compression are the components of each color pixel (e.g., the red, green, and blue components of the pixels in an RGB image); they are the means by which the color information is conveyed.

EXAMPLE 6.18: A color image compression example.
Compression is the process of reducing or eliminating redundant and/or irrelevant data. Although compression is the topic of Chapter 8, we illustrate the concept briefly in the following example using a color image.

Figure 6.51(a) shows a 24-bit RGB full-color image of an iris, in which 8 bits each are used to represent the red, green, and blue components. Figure 6.51(b) was reconstructed from a compressed version of the image in (a) and is, in fact, a compressed and subsequently decompressed approximation of it. Although the compressed image is not directly displayable (it must be decompressed before input to a color monitor), it contains only 1 data bit (and thus 1 storage bit) for every 230 bits of data in the original image. Assuming that the compressed image could be transmitted over, say, the Internet in 1 minute, transmission of the original image would require almost 4 hours. Of course, the transmitted data would have to be decompressed for viewing, but the decompression could be done in a matter of seconds. The JPEG 2000 compression algorithm used to generate Fig. 6.51(b) is a recently introduced standard that is described in detail in Section 8.6.2. Note that the reconstructed approximation image is slightly blurred. This is a characteristic of many lossy compression techniques; it can be reduced or eliminated by altering the level of compression.

Summary

The material in this chapter is an introduction to color image processing and covers topics selected to give the reader a solid background in the techniques used in this branch of image processing. Our treatment of color fundamentals and color models was prepared as foundation material for a field that is in its own right wide in technical scope and areas of application. In particular,
we focused on color models that we felt are not only useful in digital image processing but would also provide the tools necessary for further study in this area of color image processing. The discussion of pseudocolor and full-color processing on an individual-image basis provides a tie to techniques that were covered in some detail in Chapters 3 through 5.

The material on color vector spaces is a departure from methods that we had studied before and highlights some important differences between gray-scale and full-color

FIGURE 6.51 Color image compression. (a) Original RGB image. (b) Result of compressing and decompressing the image in (a).