Quiz DIP
A system for describing color numerically. Also known as a "color model," the most widely used color
spaces are RGB for scanners, cameras and displays, CMYK for color printing and YUV for TV/video. Prior
to the proliferation of electronic displays, which are all RGB, color spaces were developed that were
closer to the way people think about color. For example, the CIE Lab model uses lightness (L) and values
on red-green (a) and blue-yellow (b) axes, while HSB uses hue, saturation and brightness. See CIE Lab,
HSB, HSL, RGB, sRGB, YUV, xvYCC and color space conversion.
A ColorSpace is used to identify a specific organization of colors. Each color space is characterized by a
color model that defines how a color value is represented (for instance the RGB color model defines a
color value as a triplet of numbers).
Each component of a color must fall within a valid range, specific to each color space, defined by
getMinValue(int) and getMaxValue(int). This range is commonly [0..1]. While it is recommended to use
values in the valid range, a color space always clamps input and output values when performing
operations such as converting to a different color space.
Explain the relationship between RGB and HSL. Explain the conversion from RGB to HSL and vice
versa.
RGB is a way to describe a color in a cube, where Red, Green and Blue are on different axes.
HSL is another way to describe a color. Here we use Hue (the angle on the color wheel), Saturation
(the amount of color/chroma) and Lightness (how bright the color is).
RGB is used in many cases, but it is sometimes difficult to tell what color you are describing just by
looking at the numbers: R = 24, G = 98, B = 118 is a deep cyan color.
In HSL the same color is described as H = 193°, S = 67%, L = 28%. This tells us it is quite saturated
and dark, and on the color wheel 193° is very close to cyan.
RGB to HSL
1. Convert the RGB values to the range 0-1 by dividing each value by 255 (for 8-bit color depth):
R = 24 / 255 = 0.09
G = 98 / 255 = 0.38
B = 118 / 255 = 0.46
2. Find the minimum and maximum values of R, G and B:
min = 0.09 (the R value)
max = 0.46 (the B value)
3. Now calculate the Luminance value by adding the max and min values and dividing by 2:
L = (0.46 + 0.09) / 2 = 0.275, which rounds to 28%
4. The next step is to find the Saturation.
If the min and max values are the same, there is no saturation. If all RGB values are equal, you have a
shade of grey; depending on how bright it is, it lies somewhere between black and white.
If there is no Saturation, we don't need to calculate the Hue, so we set it to 0 degrees.
But in our case min and max are not equal, which means there is Saturation.
5. Now that we know there is Saturation, we need to check the level of the Luminance in order to
select the correct formula:
If Luminance is less than or equal to 0.5, then Saturation = (max - min) / (max + min)
If Luminance is greater than 0.5, then Saturation = (max - min) / (2.0 - max - min)
In our case Luminance is smaller than 0.5, so we use the first formula:
S = (0.46 - 0.09) / (0.46 + 0.09) = 0.37 / 0.55 = 0.672, which rounds down to 67%
6
Two done, one to go. We still need to calculate the Hue.
The Hue formula is depending on what RGB color channel is the max value. The three different
formulas are:
If Red is max, then Hue = (G-B)/(max-min)
If Green is max, then Hue = 2.0 + (B-R)/(max-min)
If Blue is max, then Hue = 4.0 + (R-G)/(max-min)
The Hue value you get needs to be multiplied by 60 to convert it to degrees on the color circle
If Hue becomes negative you need to add 360 to, because a circle has 360 degrees.
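The six steps above can be collected into a small function. The following is a minimal Python sketch of the procedure as described (the function name and the use of 8-bit input values are illustrative choices, not part of the quiz):

import colorsys  # only used below to sanity-check the result

def rgb_to_hsl(r, g, b):
    # Step 1: scale 8-bit values to the range 0-1.
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    # Step 2: find the minimum and maximum channel values.
    mn, mx = min(r, g, b), max(r, g, b)
    # Step 3: Luminance is the average of max and min.
    l = (mx + mn) / 2.0
    # Step 4: equal min and max means a shade of grey (no Saturation, Hue set to 0).
    if mx == mn:
        return 0.0, 0.0, l
    # Step 5: the Saturation formula depends on the Luminance level.
    if l <= 0.5:
        s = (mx - mn) / (mx + mn)
    else:
        s = (mx - mn) / (2.0 - mx - mn)
    # Step 6: the Hue formula depends on which channel is the maximum.
    if mx == r:
        h = (g - b) / (mx - mn)
    elif mx == g:
        h = 2.0 + (b - r) / (mx - mn)
    else:
        h = 4.0 + (r - g) / (mx - mn)
    h *= 60.0          # convert to degrees on the color circle
    if h < 0:
        h += 360.0     # a circle has 360 degrees
    return h, s, l

print(rgb_to_hsl(24, 98, 118))                    # roughly (193, 0.66, 0.28)
print(colorsys.rgb_to_hls(24/255, 98/255, 118/255))  # cross-check (returns H in 0-1, then L, S)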
An intuitive way to convert a color image 3D array to a grayscale 2D array is, for each
pixel, to take the average of the red, green, and blue pixel values as the grayscale
value. This combines the lightness or luminance contributed by each color band into a
reasonable gray approximation.
img = numpy.mean(color_img, axis=2)
The axis=2 argument tells numpy.mean() to average values across all three color
channels. (axis=0 would average across pixel rows and axis=1 would average across
pixel columns.)
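As a quick illustration (the 2x2 array below is made-up data, not from the text), averaging over axis=2 collapses the three channels into one grayscale value per pixel:

import numpy

color_img = numpy.array([[[24,  98, 118], [255, 255, 255]],
                         [[ 0,   0,   0], [128,  64,  32]]], dtype=float)

gray = numpy.mean(color_img, axis=2)
print(gray.shape)   # (2, 2): one value per pixel
print(gray)         # roughly [[80. 255.] [0. 74.67]]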
Indexed Images
An indexed image does not explicitly contain any color information. Its pixel values
represent indices into a color Look-Up Table (LUT). Colors are applied by using these
indices to look up the corresponding RGB triplet in the LUT. In some cases, the pixel
values of an indexed image reflect the relative intensity of each pixel. In other cases,
each pixel value is simply an index, in which case the image is usually intended to be
associated with a specific LUT. In this case, the LUT is typically stored with the image
when it is saved to a file. For information on the LUTs provided with IDL, see Loading a
Default Color Table.
Uses
An indexed image uses direct mapping of pixel values to colormap values. The color
of each image pixel is determined by using the corresponding pixel value as an index
into the colormap.
A colormap is often stored with an indexed image and is automatically loaded with
the image when you use the imread function.
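As a small sketch of this mapping (the arrays below are made up for illustration and not tied to IDL's or imread's actual API), the pixel values of an indexed image can be expanded to RGB by using them as row indices into the LUT:

import numpy

# Hypothetical 4-entry LUT: each row is one RGB triplet.
lut = numpy.array([[  0,   0,   0],    # index 0 -> black
                   [255,   0,   0],    # index 1 -> red
                   [  0, 255,   0],    # index 2 -> green
                   [  0,   0, 255]])   # index 3 -> blue

# Hypothetical 2x3 indexed image: each pixel stores an index, not a color.
indexed = numpy.array([[0, 1, 2],
                       [3, 2, 1]])

rgb = lut[indexed]      # each index is replaced by its RGB triplet
print(rgb.shape)        # (2, 3, 3)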
CIE
The CIE color model is a color space model created by the International
Commission on Illumination, known as the Commission Internationale de
l'Éclairage (CIE). It is also known as the CIE XYZ color space or the CIE
1931 XYZ color space.
Color Histogram
A representation of the distribution of colors in an image, derived by counting the number of pixels in
each of a given set of color ranges in a typically two-dimensional (2D) or three-dimensional
(3D) color space.
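For instance, a 3D RGB histogram can be built by dividing each channel into a handful of ranges and counting the pixels that fall into each bin. The sketch below uses numpy.histogramdd on random data purely for illustration; the 8x8x8 binning is an arbitrary choice:

import numpy

img = numpy.random.randint(0, 256, size=(64, 64, 3))   # made-up RGB image
pixels = img.reshape(-1, 3)                             # one (R, G, B) sample per pixel

hist, edges = numpy.histogramdd(pixels, bins=(8, 8, 8),
                                range=((0, 256), (0, 256), (0, 256)))
print(hist.shape)        # (8, 8, 8): one count per color bin
print(hist.sum())        # equals the number of pixels, 64 * 64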
Phase 4 is a trivial matter regardless of the quantization method. The other three
phases, however, are more strongly connected. In particular, the method used for
phases 1 and 2 will determine the best method for accomplishing phase 3.
In general, algorithms for color quantization can be broken into two categories:
Uniform and Non-Uniform.
Uniform: Here the color space is broken into equal-sized regions, where the number
of regions N_R is less than or equal to K.
Non-Uniform: Here the manner in which the color space is divided is
dependent on the distribution of colors in the image.
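A minimal sketch of the uniform approach follows (the function name and the choice of 4 levels per channel are illustrative): each channel is split into equal-sized intervals and every pixel value is mapped to the center of its interval, giving at most 4 x 4 x 4 = 64 regions, i.e. N_R <= K for K = 64.

import numpy

def uniform_quantize(img, levels_per_channel=4):
    # Split the 0-255 range of each channel into equal-sized regions
    # and map every value to the center of its region.
    step = 256 // levels_per_channel
    return (img // step) * step + step // 2

img = numpy.random.randint(0, 256, size=(32, 32, 3))
quantized = uniform_quantize(img)
print(len(numpy.unique(quantized.reshape(-1, 3), axis=0)))   # at most 64 distinct colors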