Image Processing
Digital image processing is the processing of digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called picture elements, image elements, pels, or pixels. Pixel is the term used most widely to denote the elements of a digital image.
An image is a two-dimensional function that represents a measure of some characteristic, such as brightness or color, of a viewed scene. An image is a projection of a 3-D scene onto a 2-D projection plane.
An image may be defined as a two-dimensional function f(x,y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point. The term gray level is often used to refer to the intensity of monochrome images. Color images are formed by a combination of individual 2-D images. For example, in the RGB color system, a color image consists of three individual component images (red, green, and blue). For this reason, many of the techniques developed for monochrome images can be extended to color images by processing the three component images individually. An image may be continuous with respect to the x- and y-coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates, as well as the amplitude, be digitized: digitizing the coordinate values is called sampling, and digitizing the amplitude values is called quantization.
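A minimal sketch of sampling and quantization, assuming NumPy; the brightness function f(x, y), the grid size, and the bit depth below are illustrative choices, not values from the text.

```python
import numpy as np

# Toy "continuous" image: f(x, y) is a smooth brightness function on [0,1] x [0,1].
def f(x, y):
    return 0.5 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

# Sampling: evaluate f on a finite grid of (x, y) coordinates.
M, N = 8, 8                                   # rows and columns of the grid
X, Y = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, M))
samples = f(X, Y)                             # still continuous in amplitude

# Quantization: map each amplitude to one of L = 2**k discrete gray levels.
k = 8                                         # bits per pixel
L = 2 ** k
digital = np.clip(np.round(samples * (L - 1)), 0, L - 1).astype(np.uint8)

print(digital.shape, digital.dtype)           # (8, 8) uint8
```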
An image processing system is the combination of the different elements involved in digital image processing. Digital image processing is the processing of an image by means of a digital computer, and it uses various computer algorithms to perform operations on digital images.
Computer:
The computer used in an image processing system is a general-purpose computer of the kind used in everyday life.
Mass Storage:
Mass storage stores the pixels of the images during the processing.
Image Display:
It includes the monitor or display screen that displays the processed images.
Network:
The network connects all of the above elements of the image processing system.
Medical applications:
1. Processing of chest X-rays
2. Cineangiograms
3. Projection images of transaxial tomography, and
4. Medical images that occur in radiology and nuclear magnetic resonance (NMR)
5. Ultrasonic scanning
Vidicon Digital Camera
A Vidicon is a type of camera tube whose working is based on photoconductivity. Basically, it converts optical energy into electrical energy through the variation of the material's resistance with illumination.
The Vidicon camera tube is based on the photoconductive properties of semiconductors.
When light falls on it, the number of free electrons created at any point is directly
proportional to the intensity of light falling on that point. Photoconductivity in semiconductors means a decrease in resistance with increasing incident light: the brighter the light, the greater the number of free electrons. These electrons are removed from the material by a positive voltage, and hence the target becomes positively charged. The value of charge on the target at any point is proportional to the intensity of light at the corresponding point in the original scene. In this way, a charge image of the picture is formed on the surface of the target.
The photoconductive target material is an intrinsic semiconductor that has a very high resistivity in darkness, which decreases with an increase in illumination. The photo layer has a thickness of about 0.0001 cm and behaves like an insulator, with a resistance of approximately 20 MΩ in the dark. When bright light falls on any area of the photoconductive coating, the resistance drops to about 2 MΩ. Very little charge leaks away between successive frames; this charge is restored by the scanning beam, and the resulting small current to the electrode is called the dark current.
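To make the numbers concrete, the toy model below maps illumination onto a target resistance falling from the quoted 20 MΩ (dark) toward 2 MΩ (bright) and computes the resulting signal current; the exponential law and the 30 V target voltage are illustrative assumptions, not values from the text.

```python
import numpy as np

# Toy model of the vidicon target: resistance falls from ~20 MOhm (dark)
# toward ~2 MOhm (bright) as illumination rises. The exponential law and
# the 30 V target voltage are illustrative assumptions.
def target_resistance_mohm(illumination):
    """illumination in [0, 1]: 0 = dark, 1 = brightest."""
    return 2.0 + 18.0 * np.exp(-5.0 * illumination)

V_target = 30.0                                        # assumed target voltage (V)
scene = np.array([0.0, 0.25, 0.5, 1.0])                # sample illumination levels
current_uA = V_target / target_resistance_mohm(scene)  # I = V / R; V / MOhm = uA

for e, i in zip(scene, current_uA):
    print(f"illumination {e:.2f} -> signal current {i:.2f} uA")
```

At zero illumination the model still produces a small current (here 1.5 µA), which plays the role of the dark current described above.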
Advantages
1. No ghost image.
2. There is no halo effect.
3. It is low cost and simple.
4. No gamma corrections needed.
5. Signal response is close to that of the human eye.
6. It has a long life of about 5,000 to 20,000 hours.
7. It is compact, having a length of 12-20 cm and a diameter of 1.5-4 cm.
8. By varying the target voltage according to the illumination of the scene, the sensitivity can
be adjusted.
9. Resolution is better than that of the image orthicon; a resolution on the order of 350 lines can be achieved
under practical conditions.
Elements of Visual Perception
In human visual perception, the eyes act as the sensor or camera, neurons act as the
connecting cable and the brain acts as the processor.
The basic elements of visual perceptions are:
1. Structure of Eye
2. Image Formation in the Eye
3. Brightness Adaptation and Discrimination
Structure of Eye:
The eye is nearly a sphere, with an average diameter of approximately 20 mm. The eye is
enclosed by three membranes:
The cornea and sclera – the cornea is a tough, transparent tissue that covers the anterior
surface of the eye; the rest of the optic globe is covered by the sclera.
The choroid – it contains a network of blood vessels that serve as the major
source of nutrition to the eye, and it helps to reduce extraneous light entering the eye. It
has two parts:
(1) Iris diaphragm – it contracts or expands to control the amount of light that enters
the eye.
(2) Ciliary body
The retina – it is the innermost membrane of the eye. When the eye is properly focused, light
from an object outside the eye is imaged on the retina. There are various light
receptors over the surface of the retina.
The distance between the lens and the retina is about 17 mm, and the focal length ranges from
approximately 14 mm to 17 mm.
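Since image formation in the eye follows simple geometry, the height of the retinal image can be found by similar triangles using the 17 mm lens-to-retina distance; the 15 m object viewed from 100 m in this sketch is an illustrative example, not a figure from the text.

```python
# Retinal image size by similar triangles:
# object_height / object_distance = retinal_height / lens_to_retina_distance.
# The 17 mm lens-to-retina distance comes from the text above.
LENS_TO_RETINA_MM = 17.0

def retinal_image_height_mm(object_height_m, object_distance_m):
    return LENS_TO_RETINA_MM * object_height_m / object_distance_m

# Illustrative example: a 15 m tall object viewed from 100 m away.
print(retinal_image_height_mm(15.0, 100.0))  # 2.55 mm
```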
Brightness Adaptation and Discrimination:
Digital images are displayed as a discrete set of intensities. The eye's ability to
discriminate between black and white at different intensity levels is an important consideration
in presenting image processing results.
The range of light intensity levels to which the human visual system can adapt is of the
order of 10¹⁰, from the scotopic threshold to the glare limit. In photopic vision alone, the
range is about 10⁶.
Contrast: Simultaneous contrast is related to the fact that a region's perceived brightness
does not depend simply on its intensity, as the figure below demonstrates: all the center squares
have exactly the same intensity, yet they appear progressively darker as the background becomes lighter. Contrast is defined as the ratio (max − min)/(max + min),
where max and min are the maximum and minimum of the grating intensity, respectively.
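The contrast ratio defined above is easy to compute for a synthetic grating; the sinusoidal grating parameters in this sketch are illustrative choices.

```python
import numpy as np

# Michelson contrast of a grating: (max - min) / (max + min), as defined above.
x = np.linspace(0, 1, 256, endpoint=False)
grating = 0.5 + 0.3 * np.sin(2 * np.pi * 8 * x)     # intensities in [0.2, 0.8]

def michelson_contrast(intensity):
    i_max, i_min = intensity.max(), intensity.min()
    return (i_max - i_min) / (i_max + i_min)

print(michelson_contrast(grating))   # ~0.6, i.e. (0.8 - 0.2) / (0.8 + 0.2)
```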
Hue
Hue refers to a specific basic tone of color, the root color, and in a rough definition can be
thought of as one of the main colors of the rainbow. It is not simply another name for color,
since a color is more fully specified by also adding brightness and saturation. For example,
blue can be considered a hue, but by adding different levels of brightness and saturation,
many colors can be created from it. Prussian blue, navy blue, and royal blue are some commonly
known colors of blue.
The hue spectrum has three primary colors, three secondary colors, and six tertiary colors.
Saturation
Saturation is a measure of the strength of the hue present in a color. At maximum saturation,
the color is essentially the pure hue and contains no grey. At minimum saturation, the color
contains the maximum amount of grey.
• Hue is the identified root color and can be roughly taken as one of the primary colors of the
rainbow.
• Saturation is the strength of the hue present in the color, ranging from grey up to the pure
root color.
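As an illustration, Python's standard colorsys module (which uses the closely related HSV model rather than HSI) shows that several named blues share roughly the same hue while differing mainly in saturation and brightness; the RGB triples below are common approximations of those color names, not values from the text.

```python
import colorsys

# Different "blues" share roughly the same hue but differ in saturation
# and brightness. The RGB triples are common approximations of the names.
blues = {
    "navy blue":  (0, 0, 128),
    "royal blue": (65, 105, 225),
    "pure blue":  (0, 0, 255),
}

for name, (r, g, b) in blues.items():
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    print(f"{name:11s} hue={h * 360:6.1f} deg  saturation={s:.2f}  value={v:.2f}")
```

Running this prints hues near 225–240° for all three, while saturation and value vary, which is exactly the hue/saturation distinction described above.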
Color Fundamentals
Colors are seen as variable combinations of the primary colors of light:
red (R), green (G), and blue (B). The primary colors can be mixed to
produce the secondary colors: magenta (red+blue), cyan (green+blue),
and yellow (red+green). Mixing the three primaries, or a secondary with
its opposite primary color, produces white light.
RGB colors are used for color TV, monitors, and video cameras.
However, the primary colors of pigments are cyan (C), magenta (M), and
yellow (Y), and the secondary colors are red, green, and blue. A proper
combination of the three pigment primaries, or a secondary with its
opposite primary, produces black.
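As a small sketch of the additive/subtractive distinction, the snippet below mixes light additively and converts a normalized RGB triple to pigment primaries using the standard relation C = 1 − R, M = 1 − G, Y = 1 − B; the sample colors are illustrative.

```python
import numpy as np

# Additive mixing of light (RGB) versus subtractive pigment primaries (CMY).
# With normalized values, the standard conversion is C = 1-R, M = 1-G, Y = 1-B.
def rgb_to_cmy(rgb):
    return 1.0 - np.asarray(rgb, dtype=float)

red   = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)

# Additive: red light + green light = yellow light.
print(np.add(red, green))            # [1. 1. 0.] -> yellow

# Subtractive: yellow pigment has cyan = 0, magenta = 0, yellow = 1.
print(rgb_to_cmy((1.0, 1.0, 0.0)))   # [0. 0. 1.]
```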
Color Models
The purpose of a color model is to facilitate the specification of colors in
some standard way. A color model is a specification of a coordinate
system and a subspace within that system where each color is represented
by a single point. Color models most commonly used in image processing
are the RGB, CMY/CMYK, and HSI models.
In what follows, all color values R, G, and B are assumed normalized to the range [0, 1];
equivalently, each of R, G, and B can be represented with integer values from 0 to 255.
Each RGB color image consists of three component images, one for each
primary color as shown in the figure below. These three images are
combined on the screen to produce a color image.
Figure 15.4 Scheme of RGB color image
The total number of bits used to represent each pixel in RGB image is
called pixel depth. For example, in an RGB image if each of the red,
green, and blue images is an 8-bit image, the pixel depth of the RGB
image is 24-bits. The figure below shows the component images of an
RGB image.
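The pixel-depth arithmetic can be checked directly: stacking three 8-bit component images gives 3 × 8 = 24 bits per pixel. A minimal sketch assuming NumPy, with made-up 2 × 2 component values:

```python
import numpy as np

# Build a small RGB image from three 8-bit component images and confirm
# the 24-bit pixel depth described above. The 2x2 values are illustrative.
red   = np.array([[255,   0], [  0, 255]], dtype=np.uint8)
green = np.array([[  0, 255], [  0, 255]], dtype=np.uint8)
blue  = np.array([[  0,   0], [255, 255]], dtype=np.uint8)

rgb = np.dstack([red, green, blue])          # shape (2, 2, 3)

bits_per_channel = rgb.dtype.itemsize * 8    # 8 bits for uint8
pixel_depth = bits_per_channel * rgb.shape[2]
print(rgb.shape, pixel_depth)                # (2, 2, 3) 24
```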
The figure below shows the CMYK component images of an RGB image.
Converting colors from RGB to HSI is done with:

H = θ           if B ≤ G
H = 360° − θ    if B > G

where

θ = cos⁻¹ { ½[(R − G) + (R − B)] / √[(R − G)² + (R − B)(G − B)] }

S = 1 − [3 / (R + G + B)] · min(R, G, B)

I = (1/3)(R + G + B)

Converting from HSI back to RGB depends on which 120° sector H falls in.

If 0° ≤ H < 120° (RG sector):
B = I(1 − S)
R = I[1 + S cos H / cos(60° − H)]
G = 3I − (R + B)

If 120° ≤ H < 240° (GB sector): set H = H − 120°, then
R = I(1 − S)
G = I[1 + S cos H / cos(60° − H)]
B = 3I − (R + G)

If 240° ≤ H ≤ 360° (BR sector): set H = H − 240°, then
G = I(1 − S)
B = I[1 + S cos H / cos(60° − H)]
R = 3I − (G + B)
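The conversion formulas above translate directly into code. Below is a minimal scalar sketch (assuming normalized RGB inputs in [0, 1] and H in degrees); it follows the sector logic above and is written for clarity rather than speed.

```python
import math

# Scalar RGB <-> HSI conversion following the formulas above.
def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # Clamp the cosine argument to guard against floating-point error.
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den else 0.0
    h = 360.0 - theta if b > g else theta
    return h, s, i

def hsi_to_rgb(h, s, i):
    h = h % 360.0
    if h < 120.0:                        # RG sector
        b = i * (1.0 - s)
        r = i * (1.0 + s * math.cos(math.radians(h))
                 / math.cos(math.radians(60.0 - h)))
        g = 3.0 * i - (r + b)
    elif h < 240.0:                      # GB sector
        h -= 120.0
        r = i * (1.0 - s)
        g = i * (1.0 + s * math.cos(math.radians(h))
                 / math.cos(math.radians(60.0 - h)))
        b = 3.0 * i - (r + g)
    else:                                # BR sector
        h -= 240.0
        g = i * (1.0 - s)
        b = i * (1.0 + s * math.cos(math.radians(h))
                 / math.cos(math.radians(60.0 - h)))
        r = 3.0 * i - (g + b)
    return r, g, b

print(rgb_to_hsi(1.0, 0.0, 0.0))     # pure red -> (0.0, 1.0, 0.333...)
print(hsi_to_rgb(0.0, 1.0, 1 / 3))   # back to ~(1.0, 0.0, 0.0)
```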
The next figure shows the HSI component images of an RGB image.
Figure 15.8 (a) Original image. (b) Result of decreasing its intensity.