MOD4 Half
Introduction to Image processing: overview, Nature of IP, IP and its related fields, Digital
Image representation, types of images.
Text book 2: Chapter 1
Questions:
3. Transmissive imaging
o Transmissive imaging is the third type, where the radiation source illuminates the
object.
o The absorption of radiation by the object depends upon the nature of its material.
o Some of the radiation passes through the object.
o The attenuated radiation that emerges is then sensed and converted into an image.
o This is called transmissive imaging.
The first major challenge in image processing is to acquire the image for further processing.
Figure 1.1 shows three types of processing: optical, analog, and digital image processing.
➢ Optical image processing
Optical image processing is the study of the radiation source, the object, and other optical
processes involved.
o It refers to the processing of images using lenses and coherent light beams instead of
computers. Human beings can see only the optical image.
o An optical image is the 2D projection of a 3D scene. This is a continuous distribution
of light in a 2D surface and contains information about the object that is in focus.
o This is the kind of information that needs to be captured for the target image.
o Optical image processing is an area that deals with the object, the optics, and the processes that are applied to an image available in the form of reflected or transmitted light.
o The optical image is said to be in optical form until it is converted into analog form.
➢ Analog or continuous image
An analog or continuous image is a continuous function f(x, y), where x and y are two
spatial coordinates.
➢ The final form of an image is the display image. The human eye can recognize only the optical form, so a digital image needs to be converted back to optical form through the digital-to-analog conversion process. (A minimal sketch of the opposite step, turning a continuous image into a digital one, is given below.)
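As an illustrative aside, the step from a continuous image f(x, y) to a digital image can be sketched as sampling the function on a finite grid and then quantizing each sample to a fixed number of grey levels. The sketch below is a minimal, hedged example in Python with NumPy; the cosine pattern merely stands in for a real continuous scene and is not taken from the textbook.

import numpy as np

# Stand-in for a continuous image f(x, y): a smooth intensity pattern
# chosen only for illustration.
def f(x, y):
    return 0.5 + 0.5 * np.cos(2 * np.pi * x) * np.sin(2 * np.pi * y)

# Sampling: evaluate f on a finite grid of spatial coordinates.
rows, cols = 64, 64
y, x = np.mgrid[0:1:rows * 1j, 0:1:cols * 1j]
samples = f(x, y)                              # continuous values in [0, 1]

# Quantization: map each sample to one of 256 grey levels (8-bit depth).
digital_image = np.round(samples * 255).astype(np.uint8)
print(digital_image.shape, digital_image.dtype)   # (64, 64) uint8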
Questions:
Figure 1.2 illustrates the relationships between image processing and other related fields.
Computer graphics and image processing are very closely related areas.
• Image processing deals with raster data or bitmaps, whereas computer graphics
primarily deals with vector data.
• Raster data or bitmaps are stored in a 2D matrix form and often used to depict real
images. However, vector images are composed of vectors, which represent the
mathematical relationships between the objects.
• Vectors are lines or primitive curves that are used to describe an image. Vector graphics
are often used to represent abstract, basic line drawings.
• The algorithms in computer graphics often take numerical data as input and produce an
image as output. However, in image processing, the input is often an image.
• The goal of image processing is to enhance the quality of the image to assist in
interpreting it.
• Hence, the result of image processing is often an image or the description of an image.
• Thus, image processing is a logical extension of computer graphics and serves as a
complementary field.
• An image can be defined as a 2D signal that varies over the spatial coordinates x and y, and can be written mathematically as f(x, y).
• Medical images such as magnetic resonance images and computerized tomography (CT)
images are 3D images that can be represented as f(x, y, z), where x, y, and z are spatial
coordinates.
• A sample digital image and its matrix equivalent are shown in Figs 1.3(a) and 1.3(b).
o The width of the line pair is 2W, that is, W for the line and W for the space.
o Thus there are 1/(2W) line pairs per unit distance.
o A useful way to define resolution is the smallest number of line pairs per unit distance.
o The resolution can then be quantified as 200 line pairs per mm.
• The quality of a digital image depends on two parameters: the number of pixels of the image (its spatial resolution) and the number of bits necessary for adequate intensity resolution, referred to as the bit depth.
• The number of pixels determines the quality of the digital image.
• The total number of pixels that are present in the digital image is the number of rows
multiplied by the number of columns.
• The choice of bit depth is very crucial and often depends on the precision of the
measurement system.
• To represent the pixel intensity value, certain bits are required.
• For example, in binary images, the possible pixel values are 0 or 1. To represent two values, one bit is sufficient.
• The number of bits necessary to encode the pixel value is called bit depth.
Total number of bits = Number of rows × Number of columns × Bit depth
(A small worked example of this formula is given after this list.)
• A decrease in the number of pixels while retaining the quantization levels leads to a phenomenon called the checkerboard effect (or pixelization error).
• A 3D image is a function f(x, y, z), where x, y, and z are spatial coordinates. In 3D images,
the term ‘voxel’ is used for pixel. Voxel is an abbreviation of ‘volume element’.
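The storage formula above can be made concrete with a small, hedged sketch in Python with NumPy; the 4 × 4 image and its pixel values are invented purely for illustration, in the spirit of the matrix equivalent shown in Fig. 1.3(b).

import numpy as np

# A tiny 4 x 4 grey scale image stored as a 2D matrix of 8-bit pixels.
image = np.array([[ 12,  50, 200, 255],
                  [ 34,  90, 180, 240],
                  [ 60, 120, 160, 220],
                  [ 80, 140, 150, 210]], dtype=np.uint8)

rows, cols = image.shape
bit_depth = image.dtype.itemsize * 8           # 8 bits per pixel for uint8

# Total number of bits = Number of rows x Number of columns x Bit depth
total_bits = rows * cols * bit_depth
print(rows, cols, bit_depth, total_bits)       # 4 4 8 128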
Questions:
1. Define Pixel.
2. Explain intensity value of image.
3. Define number of rows.
4. Define Bit depth.
5. Define total number of bits.
6. Define Voxel.
➢ Based on Attributes
• Based on attributes, images can be classified as raster images and vector graphics.
• Vector graphics use basic geometric attributes, such as lines and circles, to describe an image. Hence the notion of resolution is practically absent in vector graphics.
• However, raster images are pixel-based.
• The quality of the raster images is dependent on the number of pixels.
• So operations such as enlarging or blowing-up of a raster image often result in quality
reduction.
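A minimal sketch of this point (Python with NumPy; the 2 × 2 image and the pixel-replication enlargement are illustrative assumptions, not the textbook's method) shows why blowing up a raster image reduces quality while a vector description scales freely:

import numpy as np

# A 2 x 2 raster image: just four pixels.
raster = np.array([[0, 255],
                   [255, 0]], dtype=np.uint8)

# Enlarging 4x by replicating pixels adds no new detail; the result is
# blocky, which is the quality reduction described above.
enlarged = np.kron(raster, np.ones((4, 4), dtype=np.uint8))
print(enlarged.shape)                 # (8, 8), still only four flat blocks

# A vector primitive stores geometry, not pixels, so it can be scaled by
# any factor and simply re-rendered without loss.
line = {"type": "line", "start": (0.0, 0.0), "end": (1.0, 1.0)}
scaled_line = {"type": "line",
               "start": tuple(4 * v for v in line["start"]),
               "end": tuple(4 * v for v in line["end"])}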
➢ Based on Dimensions
• Images can also be classified based on dimension. Normally, digital images are a 2D rectangular array of pixels. If another dimension, such as depth or any other characteristic, is considered, it may be necessary to use a higher-order stack of images.
• A good example of a 3D image is a volume image, where pixels are called voxels. By ‘3D
image’, it is meant that the dimension of the target in the imaging system is 3D. The target
of the imaging system may be a scene or an object.
• In medical imaging, some of the frequently encountered 3D images are CT images, MRIs,
and microscopy images. Range images, which are often used in remote sensing
applications, are also 3D images.
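A small sketch (Python with NumPy; the slice count and slice size are arbitrary values chosen for illustration) of a 3D volume image as a stack of 2D slices, where each element f(x, y, z) is a voxel:

import numpy as np

# A volume image built as a stack of 2D slices, e.g. CT or MRI slices.
depth, rows, cols = 16, 64, 64
volume = np.zeros((depth, rows, cols), dtype=np.uint8)   # f(x, y, z)

# One voxel (volume element) is addressed by three spatial coordinates.
volume[4, 10, 20] = 255

# A single slice of the volume is an ordinary 2D image of pixels.
slice_4 = volume[4]
print(volume.shape, slice_4.shape)     # (16, 64, 64) (64, 64)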
➢ Based on Colour
Based on colour, images can be classified as
1. grey scale,
2. binary,
3. true colour,
4. indexed, and
5. pseudocolour images.
Grey scale and binary images are called monochrome images as there is no colour component
in these images.
• True colour (or full colour) images represent the full range of available colours.
• So the images closely resemble the actual object and hence are called true colour images. In addition, true colour images do not use any lookup table but store the pixel information with full precision (contrast this with the indexed image sketch given at the end of this subsection).
• On the other hand, pseudocolour images are false colour images where the colour is added
artificially based on the interpretation of data.
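The lookup-table remark can be sketched as below (Python with NumPy; the 4-entry palette and the 2 × 2 images are invented for illustration): a true colour image stores a full RGB triple per pixel, whereas an indexed image stores small indices into a colour lookup table.

import numpy as np

# True colour: every pixel carries its own full-precision RGB triple.
true_colour = np.zeros((2, 2, 3), dtype=np.uint8)
true_colour[0, 0] = (255, 0, 0)            # a red pixel

# Indexed: pixels hold small indices; the colours live in a lookup table.
palette = np.array([(0, 0, 0),             # index 0 -> black
                    (255, 0, 0),           # index 1 -> red
                    (0, 255, 0),           # index 2 -> green
                    (0, 0, 255)],          # index 3 -> blue
                   dtype=np.uint8)
indexed = np.array([[1, 0],
                    [2, 3]], dtype=np.uint8)

# Expanding the indices through the palette recovers an RGB image.
expanded = palette[indexed]                # shape (2, 2, 3)
print(expanded[0, 0])                      # [255   0   0]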
2. Binary images
• In binary images, the pixels assume a value of 0 or 1, so one bit is sufficient to represent the pixel value. Binary images are also called bi-level images. In image processing, binary images are encountered in many ways.
• The binary image is created from a grey scale image using a threshold process (a minimal sketch of this process is given at the end of this subsection). The pixel value is compared with the threshold value. If the pixel value of the grey scale image is greater than the threshold value, the pixel value in the binary image is set to 1; otherwise, it is set to 0.
• The binary image created by applying the threshold process on the grey scale image in Fig. 1.5(a) is displayed in Fig. 1.5(b). It can be
observed that most of the details are eliminated. However, binary images are often used
in representing basic shapes and line drawings. They are also used as masks.
• In addition, image processing operations produce binary images at intermediate stages.
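The thresholding process referred to above can be written as a minimal sketch (Python with NumPy; the 3 × 3 grey values and the threshold of 128 are arbitrary illustrative choices):

import numpy as np

# A small grey scale image (values are illustrative).
grey = np.array([[ 10, 200, 130],
                 [250,  60, 140],
                 [ 90, 180,  30]], dtype=np.uint8)

threshold = 128

# Pixels greater than the threshold become 1; all others become 0.
binary = (grey > threshold).astype(np.uint8)
print(binary)
# [[0 1 1]
#  [1 0 1]
#  [0 1 0]]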
4. Pseudocolour images
• Like true colour images, pseudocolour images are also used widely in image
processing.
• True colour images are called three-band images. However, in remote sensing
applications, multi-band images or multi-spectral images are generally used. These
images, which are captured by satellites, contain many bands.
• A typical remote sensing image may have 3–11 bands. This information is beyond the human perceptual range and hence is mostly not visible to a human observer.
• So colour is artificially added to these bands, so as to distinguish them and to increase operational convenience. These are called artificial colour or pseudocolour images (a small colouring sketch follows this list). Pseudocolour images are popular in the medical domain also. For example, the Doppler colour image is a pseudocolour image.
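As a hedged sketch of the colouring idea (Python with NumPy; the band values and the band-to-colour rule are invented purely for illustration), a single-band image can be given pseudocolour by mapping intensity ranges to colours through a lookup table:

import numpy as np

# A single-band image, e.g. one spectral band or a Doppler velocity map.
band = np.array([[  0,  64, 128],
                 [192, 255,  32]], dtype=np.uint8)

# Artificial colouring rule: map each intensity range to a fixed colour.
colours = np.array([(0, 0, 255),       # low values    -> blue
                    (0, 255, 0),       # medium values -> green
                    (255, 255, 0),     # higher values -> yellow
                    (255, 0, 0)],      # highest values -> red
                   dtype=np.uint8)

# Bucket the intensities into four ranges and look up their colours.
bucket = np.clip(band // 64, 0, 3)
pseudocolour = colours[bucket]          # an artificial (2, 3, 3) RGB image
print(pseudocolour[1, 1])               # the 255 pixel maps to red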
➢ Based on Data Types
• Images can also be classified by the data type used to store the pixel values. For example, Fourier transforms produce images involving complex numbers. To handle negative numbers, signed integer data types are used, whereas unsigned types cover only non-negative values.
• In these data types, the first bit is used to encode whether the number is positive or negative. For example, an 8-bit signed data type encodes the numbers from −128 to 127, where one bit is used to encode the sign.
• In general, an n-bit signed integer can represent the integers from −2^(n−1) to 2^(n−1) − 1, a total of 2^n values. Unsigned integers represent all integers from 0 to 2^n − 1 with n bits (a short sketch is given at the end of this subsection).
• Floating point involves storing the data in scientific notation.
• For example, 1230 can be represented as 0.123 × 10^4, where 0.123 is called the significand (or mantissa) and the power 4 is called the exponent.
• The quality of such data representation is characterized by parameters such as data accuracy
and precision. Data accuracy is the property of how well the pixel values of an image are
able to represent the physical properties of the object that is being imaged.
• Data accuracy is an important parameter, as the failure to capture the actual physical
properties of the image leads to the loss of vital information that can affect the quality of
the application.
• While accuracy refers to the correctness of a measurement, precision refers to the
repeatability of the measurement. In other words, repeated measurements of the physical
properties of the object should give the same result.
• Most software uses the data type ‘double’ to maintain precision as well as accuracy.
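The integer ranges and the floating point idea above can be checked with a small sketch (Python with NumPy; the value 1230 simply reuses the example in the text):

import numpy as np

# n-bit signed integers cover -2**(n-1) .. 2**(n-1) - 1 (2**n values);
# n-bit unsigned integers cover 0 .. 2**n - 1.
n = 8
print(-2**(n - 1), 2**(n - 1) - 1)                      # -128 127
print(0, 2**n - 1)                                      # 0 255
print(np.iinfo(np.int8).min, np.iinfo(np.int8).max)     # -128 127
print(np.iinfo(np.uint8).min, np.iinfo(np.uint8).max)   # 0 255

# Floating point stores a significand and an exponent: 1230 = 0.123 x 10**4.
significand, exponent = 0.123, 4
print(significand * 10**exponent)        # ~1230.0 (up to float rounding)

# 'double' is a 64-bit floating point type; most image processing software
# computes in it to retain both accuracy and precision.
pixel = np.float64(1230)
print(pixel.dtype)                       # float64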
Questions: