
COMPUTER GRAPHICS AND FUNDAMENTAL IMAGE PROCESSING -21CS63 MODULE-4

Module 4: Introduction to Image processing

Introduction to Image processing: overview, Nature of IP, IP and its related fields, Digital
Image representation, types of images.
Text book 2: Chapter 1

Digital Image Processing Operations: Basic relationships and distance metrics,


Classification of Image processing Operations.
Text book 2: Chapter 3

Questions:

1. Illustrate the digital image representations.


2. Write a note on image processing and related fields.
3. Explain nature of image processing.
4. Illustrate types of images based on colour.
5. Illustrate types of images based on dimensions and data types.
6. Illustrate types of images based on domain-specific images.
7. Explain types of images based on attributes and nature.

ARUNA G K ,DEPT. OF CSE ,CEC 1



SESSION 1: Introduction to Image processing: overview, Nature of IP


Introduction to Image processing
• Sources of images include paintings, photographs in magazines and journals, image
galleries, digital libraries, newspapers, advertisement boards, television, and the Internet.
Images are imitations of real-world objects.
• In image processing, the term ‘image’ is used to denote the image data that is sampled,
quantized, and readily available in a form suitable for further processing by digital
computers.

Nature of Image processing


• There are three ways of acquiring an image:
1. Reflective mode imaging,
2. Emissive type imaging, and
3. Transmissive imaging.
• These are illustrated in Fig. 1.1. The radiation source shown in Fig. 1.1 is the light
source.
• The sun, lamps, and clouds are all examples of radiation or light sources.
• The object is the target for which the image needs to be created.
• The object can be people, industrial components, or the anatomical structure of a
patient.
• The objects can be two-dimensional, three-dimensional, or multidimensional
mathematical functions involving many variables.
• For example, a printed document is a 2D object. Most real-world objects are 3D.

1. Reflective mode imaging


o Reflective mode imaging represents the simplest form of imaging and uses a sensor
to acquire the digital image.
o All video cameras, digital cameras, and scanners use some type of sensor for
capturing the image. Image sensors are important components of imaging systems.
o They convert light energy to electric signals.


2. Emissive type imaging


o It is the second type, where the images are acquired from self-luminous objects without
the help of a radiation source.
o In emissive type imaging, the objects are self-luminous.
o The radiation emitted by the object is directly captured by the sensor to form an image.
o Thermal imaging is an example of emissive type imaging.
o In thermal imaging, a specialized thermal camera is used in low light situations to
produce images of objects based on temperature.
o Other examples of emissive type imaging are magnetic resonance imaging (MRI) and
positron emissive tomography (PET).

3. Transmissive imaging
o Transmissive imaging is the third type, where the radiation source illuminates the
object.
o The absorption of radiation by the objects depends upon the nature of the material.
o Some of the radiation passes through the objects.
o The attenuated radiation is sensed into an image.
o This is called transmissive imaging.


o Examples of this kind of imaging are X-ray imaging,microscopic imaging, and


ultrasound imaging.

The first major challenge in image processing is to acquire the image.
Figure 1.1 shows three types of processing—optical, analog, and digital image Processing.
➢ Optical image processing
Optical image processing is the study of the radiation source, the object, and other optical
processes involved.
o It refers to the processing of images using lenses and coherent light beams instead of
computers. Human beings can see only the optical image.
o An optical image is the 2D projection of a 3D scene. This is a continuous distribution
of light in a 2D surface and contains information about the object that is in focus.
o This is the kind of information that needs to be captured for the target image.
o Optical image processing is an area that deals with the object, optics, and how processes
are applied to an image that is available in the form of reflected or transmitted light.
o The optical image is said to be available in optical form till it is converted into analog
form.
➢ Analog or continuous image
An analog or continuous image is a continuous function f(x, y), where x and y are two
spatial coordinates.

o Analog signals are characterized by continuous signals varying with time.


o They are often referred to as pictures. The processes that are applied to the analog
signal are called analog processes.
o Analog image processing is an area that deals with the processing of analog electrical
signals using analog circuits.
o The imaging systems that use film to record images are also known as analog
imaging systems. In medical imaging, still films are used, as films provide better
quality than digital systems.
o The analog signal is often sampled, quantized, and converted into digital form using
a digitizer.
o Digitization refers to the process of sampling and quantization.


o Sampling is the process of converting a continuous-valued image f(x, y) into a


discrete image, as computers cannot handle continuous data.
o So the main aim is to create a discretized version of the continuous data.
o Sampling is, in principle, a reversible process: if the sampling rate is adequate, it is possible to get the original image back.
o Quantization is the process of converting the sampled analog value of the function
f(x, y) into a discrete-valued integer.
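To make digitization concrete, the following is a minimal sketch (in Python) of sampling followed by quantization along one row of a continuous signal; the stand-in function, sampling rate, and bit depth are assumed values for illustration, not from the text.

```python
# Minimal sketch of digitization: sampling then quantization.
# The continuous image is modelled here by a 1D brightness function
# f(x) = 0.5 * (1 + sin(x)); the function and rates are illustrative.
import math

def f(x):
    """A stand-in for the continuous analog signal f(x, y) along one row."""
    return 0.5 * (1 + math.sin(x))

def sample(func, start, stop, num_samples):
    """Sampling: evaluate the continuous function at discrete positions."""
    step = (stop - start) / (num_samples - 1)
    return [func(start + i * step) for i in range(num_samples)]

def quantize(samples, bit_depth):
    """Quantization: map each sampled value in [0, 1] to an integer level."""
    levels = 2 ** bit_depth          # e.g., 256 levels for an 8-bit image
    return [min(int(s * levels), levels - 1) for s in samples]

samples = sample(f, 0.0, 2 * math.pi, 16)   # discretize the spatial axis
pixels = quantize(samples, 8)               # discretize the intensity axis
print(pixels)                                # integers in the range 0..255
```

Note that `quantize` discards the fractional part of each sample, which is why quantization, unlike sampling, loses information irreversibly.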

➢ Digital image processing


o Digital image processing is an area that uses digital circuits, systems, and software
algorithms to carry out the image processing operations.
o The image processing operations may include quality enhancement of an image,
counting of objects, and image analysis.
o Digital image processing has become very popular now, as digital images have many
advantages over analog images. Some of the advantages are as follows:
1. It is easy to post-process the image. Small corrections can be made in the captured
image using software.
2. It is easy to store the image in the digital memory.
3. It is possible to transmit the image over networks. So sharing an image is quite easy.
4. A digital image does not require any chemical process. So it is very environment
friendly, as harmful film chemicals are not required or used.
5. It is easy to operate a digital camera.
➢ Some of the disadvantages are the initial cost, problems associated with sensors such as
high power consumption and potential equipment failure, and other security issues
associated with the storage and transmission of digital images.

➢ The final form of an image is the display image. The human eye can recognize only the
optical form. So the digital image needs to be converted to optical form through the
digital-to-analog conversion process.

Questions:

1. How is an image acquired?


2. Define Digital Image Processing
3. Define Analog Image Processing
4. Define Optical Image Processing


SESSION 2: IP and its related fields


IMAGE PROCESSING AND RELATED FIELDS
Image processing is an exciting interdisciplinary field that borrows ideas freely from
many fields.

Figure 1.2 illustrates the relationships between image processing and other related fields.

➢ Image Processing and Computer Graphics

Computer graphics and image processing are very closely related areas.
• Image processing deals with raster data or bitmaps, whereas computer graphics
primarily deals with vector data.
• Raster data or bitmaps are stored in a 2D matrix form and often used to depict real
images. However, vector images are composed of vectors, which represent the
mathematical relationships between the objects.
• Vectors are lines or primitive curves that are used to describe an image. Vector graphics
are often used to represent abstract, basic line drawings.
• The algorithms in computer graphics often take numerical data as input and produce an
image as output. However, in image processing, the input is often an image.
• The goal of image processing is to enhance the quality of the image to assist in
interpreting it.


• Hence, the result of image processing is often an image or the description of an image.
• Thus, image processing is a logical extension of computer graphics and serves as a
complementary field.

➢ Image Processing and Signal Processing


• Human beings interact with the environment by means of various signals.
• In digital signal processing, one often deals with the processing of a one-dimensional
signal.
• In the domain of image processing, one deals with visual information that is often in
two or more dimensions.
• Therefore, image processing is a logical extension of signal processing.

➢ Image Processing and Machine Vision


• The main goal of machine vision is to interpret the image and to extract its physical,
geometric, or topological properties.
• Thus, the output of image processing operations can be subjected to more techniques, to
produce additional information for interpretation.
• Artificial vision is a vast field, with two subfields: machine vision and computer vision.
• The domain of machine vision includes many aspects such as lighting and camera, as
part of the implementation of industrial projects, since most of the applications
associated with machine vision are automated visual inspection systems.
• The applications involving machine vision aim to inspect a large number of products
and achieve improved quality controls.
• Computer vision is more ambitious. It tries to mimic the human visual system and is
often associated with scene understanding.
• Most image processing algorithms produce results that can serve as the first input for
machine vision algorithms.

➢ Image Processing and Video Processing


• Image processing is about still images.
• In fact, analog video cameras can be used to capture still images.
• A video can be considered as a collection of images indexed by time.


• Most image processing algorithms work with video readily.


• Thus, video processing is an extension of image processing. In addition, images are
strongly related to multimedia, as the field of multimedia broadly includes the study of
audio, video, images, graphics, and animation.

➢ Image Processing and Optics


• Optical image processing deals with lenses, light, lighting conditions, and associated
optical circuits.
• The study of lenses and lighting conditions has an important role in the study of image
processing.

➢ Image Processing and Statistics


• Image analysis is an area that concerns the extraction and analysis of object information
from the image.
• Imaging applications involve both simple statistics such as counting and mensuration
and complex statistics such as advanced statistical inference.
• So statistics play an important role in imaging applications. Image understanding is an
area that applies statistical inferencing to extract more information from the image.
Questions:
1. Define image processing.
2. List image processing related fields
3. Define Machine vision


SESSION 3: Digital Image representation

DIGITAL IMAGE REPRESENTATION

• An image can be defined as a 2D signal that varies over the spatial coordinates x and y, and
can be written mathematically as f(x, y).
• Medical images such as magnetic resonance images and computerized tomography (CT)
images are 3D images that can be represented as f(x, y, z), where x, y, and z are spatial
coordinates.
• A sample digital image and its matrix equivalent are shown in Figs 1.3(a) and 1.3(b).

• In general, the image f(x, y) is divided into X rows and Y columns.


• Thus, the coordinate ranges are x ∈ {0, 1, …, X−1} and y ∈ {0, 1, 2, …, Y−1}. At the intersection
of rows and columns, pixels are present.

• The word ‘pixel’ is an abbreviation of ‘picture element’.


• The terms pixel, picture element, and pel are synonymous.
• A typical digital image consists of millions of pixels. Pixels are considered the building
blocks of digital images, as they combine together to give a digital image.
• Pixels represent discrete data. Their meaning varies with context.
• A pixel can be considered as a single sensor, photosite (physical element of the sensor array
of a digital camera), element of a matrix, or display element on a monitor.
• The value of the function f(x, y) at every point indexed by a row and a column is called
grey value or intensity of the image.
• Generally, the value of the pixel is the intensity value of the image at that point.
• The intensity value is the sampled, quantized value of the light that is captured by the sensor
at that point. It is a number and has no units.
• However, the value of the pixel is not always the intensity value.
• In an X-ray image, the value of the pixel indicates the attenuation of the X-ray at that point.
• In an MRI, the pixel value denotes the average MR signal intensity.
• The number of rows in a digital image is called vertical resolution.
• The number of columns is called horizontal resolution. The number of rows and columns
describes the dimensions of the image.
• The image size is often expressed in terms of the rectangular pixel dimensions of the array.
• Images can be of various sizes. Some examples of image size are 256 × 256, 512 × 512, etc.
• For a digital camera, the image size is defined as the total number of pixels (megapixels).
• Resolution is an important characteristic of an imaging system.
• It is the ability of the imaging system to produce the smallest discernible details, that is,
to show the smallest-sized object clearly and differentiate it from the neighbouring small
objects that are present in the image.
• Image resolution depends on two factors—optical resolution of the lens and spatial
resolution.
• Spatial resolution of the image is very crucial as the digital image must show the object and
its separation from the other spatial objects that are present in the image clearly and
precisely.
o Consider a chart with vertical lines of width W.
o Let the space between the lines also be W.
o Then the line and the adjacent space constitute a line pair.


o The width of the line pair is 2W, that is, W for the line and W for the space.
o Thus there are 1/(2W) line pairs per unit distance.
o A useful way to define resolution is the largest number of discernible line pairs per
unit distance.
o The resolution can then be quantified as, for example, 200 line pairs per mm.
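The line-pair arithmetic above can be checked in a few lines of Python; the line width used here is an assumed value chosen to reproduce the 200 line pairs per mm figure.

```python
# Resolution in line pairs per unit distance: a line of width W plus an
# adjacent space of width W form one line pair of width 2W, so there are
# 1/(2W) line pairs per unit distance.
W_mm = 0.0025                      # assumed line width: 2.5 micrometres, in mm
line_pair_width = 2 * W_mm         # one line + one space
pairs_per_mm = 1 / line_pair_width
print(pairs_per_mm)                # -> 200.0 line pairs per mm
```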

• Spatial resolution depends on two parameters—the number of pixels of the image and
the number of bits necessary for adequate intensity resolution, referred to as the bit
depth.
• The number of pixels determines the quality of the digital image.
• The total number of pixels that are present in the digital image is the number of rows
multiplied by the number of columns.
• The choice of bit depth is very crucial and often depends on the precision of the
measurement system.
• To represent the pixel intensity value, certain bits are required.
• For example, in binary images, the possible pixel values are 0 or 1. To represent these two
values, one bit is sufficient.
• The number of bits necessary to encode the pixel value is called bit depth.
Total number of bits = Number of rows × Number of columns × Bit depth

• The number of levels a bit depth of m can represent is a power of two; it can be written as 2^m.


• In monochrome grey scale images (e.g., medical images such as X-rays and ultrasound
images), the pixel values can be between 0 and 255. Hence, eight bits are used to represent
the grey shades between 0 and 255 (as 2^8 = 256). So the bit depth of grey scale images is 8.
• In colour images, the pixel value is characterized by both colour value and intensity value.
• So colour resolution refers to the number of bits used to represent the colour of the pixel.
The set of all colours that can be represented by the bit depth is called gamut or palette.
• So the total number of bits necessary to represent the image is
Number of rows × Number of columns × Bit depth
• As discussed earlier, spatial resolution depends on the number of pixels present in the image
and the bit depth.
• Keeping the number of pixels constant but reducing the quantization levels (bit depth) leads
to a phenomenon called false contouring.


• On the other hand, the decrease in the number of pixels while retaining the quantization
levels leads to a phenomenon called checkerboard effect (or pixelization error).
• A 3D image is a function f(x, y, z), where x, y, and z are spatial coordinates. In 3D images,
the term ‘voxel’ is used for pixel. Voxel is an abbreviation of ‘volume element’.
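The storage formula above can be sketched as a short Python helper; the image sizes used below are illustrative examples, not values from the text.

```python
def image_storage_bits(rows, cols, bit_depth):
    """Total number of bits = Number of rows x Number of columns x Bit depth."""
    return rows * cols * bit_depth

# An 8-bit grey scale image of size 512 x 512:
bits = image_storage_bits(512, 512, 8)
print(bits // 8, "bytes")          # -> 262144 bytes, i.e., 256 KB

# A 24-bit true colour image of the same dimensions needs three times as much:
print(image_storage_bits(512, 512, 24) // 8, "bytes")   # -> 786432 bytes
```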
Questions:
1. Define Pixel.
2. Explain intensity value of image.
3. Define number of rows.
4. Define Bit depth.
5. Define total number of bits.
6. Define Voxel.


SESSION 4: Types of images


Classification of Images
TYPES OF IMAGES
➢ Based on Nature
• Images can be broadly classified as natural and synthetic images.
• Natural images are, as the name implies, images of the natural objects obtained using
devices such as cameras or scanners.
• Synthetic images are images that are generated using computer programs.

➢ Based on Attributes
• Based on attributes, images can be classified as raster images and vector graphics.
• Vector graphics use basic geometric attributes such as lines and circles, to describe an
image. Hence the notion of resolution is practically not present in graphics.
• However, raster images are pixel-based.
• The quality of the raster images is dependent on the number of pixels.
• So operations such as enlarging or blowing-up of a raster image often result in quality
reduction.
➢ Based on Dimensions
• Images can be classified based on dimension also. Normally, digital images are a 2D
rectangular array of pixels. If another dimension, of depth or any other characteristic, is
considered, it may be necessary to use a higher-order stack of images.
• A good example of a 3D image is a volume image, where pixels are called voxels. By ‘3D
image’, it is meant that the dimension of the target in the imaging system is 3D. The target
of the imaging system may be a scene or an object.
• In medical imaging, some of the frequently encountered 3D images are CT images, MRIs,
and microscopy images. Range images, which are often used in remote sensing
applications, are also 3D images


➢ Based on Colour
Based on colour, images can be classified as
1. grey scale,
2. binary,
3. true colour,
4. indexed, and
5. pseudocolour images.

Grey scale and binary images are called monochrome images as there is no colour component
in these images.
• True colour (or full colour) images represent the full range of available colours.
• So the images are almost similar to the actual object and hence called true colour images.
In addition, true colour images do not use any lookup table but store the pixel information
with full precision.
• On the other hand, pseudocolour images are false colour images where the colour is added
artificially based on the interpretation of data.


1. Grey scale images


• Grey scale images are different from binary images as they have many shades of grey
between black and white.
• A sample grey scale image is shown in Fig. 1.5(a).
• These images are also called monochromatic as there is no colour component in the
image, like in binary images. Grey scale is the term that refers to the range of shades
between white and black or vice versa.
• Eight bits (2^8 = 256 levels) are enough to represent grey scale as the human visual system can
distinguish only 32 different grey levels.
• The additional bits are necessary to cover noise margins. Most medical images such as
X-rays, CT images, MRIs, and ultrasound images are grey scale images.
• These images may use more than eight bits. For example, CT images may require a
range of 10–12 bits to accurately represent the image contrast.

2. Binary images
• In binary images, the pixels assume a value of 0 or 1. So one bit is sufficient to
represent the pixel value. Binary images are also called bi-level images. In image
processing, binary images are encountered in many ways.
• The binary image is created from a grey scale image using a threshold process. The
pixel value is compared with the threshold value. If the pixel value of the grey scale
image is greater than the threshold value, the pixel value in the binary image is
considered as 1.
• Otherwise, the pixel value is 0. The binary image created by applying the threshold
process on the grey scale image in Fig. 1.5(a) is displayed in Fig. 1.5(b). It can be

observed that most of the details are eliminated. However, binary images are often used
in representing basic shapes and line drawings. They are also used as masks.
• In addition, image processing operations produce binary images at intermediate stages.
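The threshold process described above can be sketched as follows; the sample matrix and the threshold value are made up for illustration.

```python
def threshold(grey_image, t):
    """Create a binary image from a grey scale image: 1 where the pixel
    value exceeds the threshold t, 0 otherwise."""
    return [[1 if pixel > t else 0 for pixel in row] for row in grey_image]

# A hypothetical 3x3 grey scale image with 8-bit values:
grey = [[ 12, 200,  90],
        [130,  45, 255],
        [  0, 128, 127]]

binary = threshold(grey, 127)
print(binary)   # -> [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

As the text notes, most of the detail is eliminated: every shade of grey collapses to one of just two values.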

3. True colour images


• In true colour images, the pixel has a colour that is obtained by mixing the primary
colours red, green, and blue. Each colour component is represented like a grey scale
image using eight bits. Mostly, true colour images use 24 bits to represent all the
colours.
• Hence true colour images can be considered as three-band images.
• The number of colours that is possible is 256³ (i.e., 256 × 256 × 256 = 16,777,216
colours). Figure 1.6(a) shows a colour image and its three primary colour components.
• Figure 1.6(b) illustrates the general storage structure of the colour image.
• A display controller then uses a digital-to-analog converter (DAC) to convert the colour
value to the pixel intensity of the monitor.
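As a sketch of the three-band storage idea, a 24-bit true colour pixel can be packed from its three 8-bit components with bit shifts; the colour values below are illustrative.

```python
def pack_rgb(r, g, b):
    """Combine three 8-bit components into one 24-bit true colour value."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Recover the red, green, and blue components from a 24-bit value."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

pixel = pack_rgb(255, 128, 0)      # an orange pixel (illustrative values)
print(hex(pixel))                  # -> 0xff8000
print(unpack_rgb(pixel))           # -> (255, 128, 0)
```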
✓ Indexed image
• A special category of colour images is the indexed image. In most images, the full range
of colours is not used. So it is better to reduce the number of bits by maintaining a
colour map, gamut, or palette with the image.
• Figure 1.6(c) illustrates the storage structure of an indexed image.
• The pixel value can be considered as a pointer to the index, which contains the address
of the colour map. The colour map has RGB components. Using this indexed approach,
the number of bits required to represent the colours can be drastically reduced. The
display controller uses a DAC to convert the RGB value to the pixel intensity of the
monitor.
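The indexed storage structure can be sketched as follows: each pixel stores only a small index into a colour map (palette) that holds the actual RGB values. The palette entries and the tiny image below are made-up examples.

```python
# A hypothetical palette of four colours; each entry is an (R, G, B) triple.
palette = [
    (0, 0, 0),        # index 0: black
    (255, 0, 0),      # index 1: red
    (0, 255, 0),      # index 2: green
    (255, 255, 255),  # index 3: white
]

# The indexed image stores 2-bit indices instead of full 24-bit colours.
indexed_image = [[0, 1],
                 [2, 3]]

def to_rgb(indexed, colour_map):
    """Resolve each index through the colour map to obtain the RGB image."""
    return [[colour_map[i] for i in row] for row in indexed]

rgb_image = to_rgb(indexed_image, palette)
print(rgb_image[1][1])   # -> (255, 255, 255)
```

With only four palette entries, two bits per pixel suffice instead of 24, which is exactly the saving the indexed approach provides.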



4. Pseudocolour images
• Like true colour images, pseudocolour images are also used widely in image
processing.
• True colour images are called three-band images. However, in remote sensing
applications, multi-band images or multi-spectral images are generally used. These
images, which are captured by satellites, contain many bands.
• A typical remote sensing image may have 3–11 bands in an image. This information is
beyond the human perceptual range. Hence it is mostly not visible to the human
observer.
• So colour is artificially added to these bands, so as to distinguish the bands and to
increase operational convenience. These are called artificial colour or pseudocolour
images. Pseudocolour images are popular in the medical domain also. For example, the
Doppler colour image is a pseudocolour image.


➢ Based on Data Types


• Images may be classified based on their data type. A binary image is a 1-bit image, as one
bit is sufficient to represent black and white pixels.
• Grey scale images are stored as one-byte (8-bit) or two-byte (16-bit) images.
• With one byte, it is possible to represent 2^8 = 256 shades (0–255), and with 16 bits, it is
possible to represent 2^16 = 65,536 shades. Colour images often use 24 or 32 bits to
represent the colour and intensity value.
• Sometimes, image processing operations produce images with negative numbers, decimal
fractions, and complex numbers.


• For example, Fourier transforms produce images involving complex numbers. To handle
negative numbers, signed and unsigned integer types are used.
• In these data types, the first bit is used to encode whether the number is positive or negative.
For example, the 8-bit signed data type encodes the numbers from −128 to 127, where one
bit is used to encode the sign.
• In general, an n-bit signed integer can represent integers from −2^(n−1) to 2^(n−1) − 1, a total of
2^n values. Unsigned integers represent all integers from 0 to 2^n − 1 with n bits.
• Floating point involves storing the data in scientific notation.
• For example, 1230 can be represented as 0.123 × 10^4, where 0.123 is called the significand
and the power 4 is called the exponent.
• The quality of such data representation is characterized by parameters such as data accuracy
and precision. Data accuracy is the property of how well the pixel values of an image are
able to represent the physical properties of the object that is being imaged.
• Data accuracy is an important parameter, as the failure to capture the actual physical
properties of the image leads to the loss of vital information that can affect the quality of
the application.
• While accuracy refers to the correctness of a measurement, precision refers to the
repeatability of the measurement. In other words, repeated measurements of the physical
properties of the object should give the same result.
• Most software uses the data type ‘double’ to maintain precision as well as accuracy.
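The integer ranges above can be verified with a small Python sketch:

```python
def signed_range(n):
    """Range of an n-bit signed (two's-complement) integer:
    -2^(n-1) to 2^(n-1) - 1."""
    return -2 ** (n - 1), 2 ** (n - 1) - 1

def unsigned_range(n):
    """Range of an n-bit unsigned integer: 0 to 2^n - 1."""
    return 0, 2 ** n - 1

print(signed_range(8))     # -> (-128, 127), the 8-bit signed range
print(unsigned_range(8))   # -> (0, 255), the usual grey scale range
print(unsigned_range(16))  # -> (0, 65535), for 16-bit images
```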

➢ Domain Specific Images


• Images can be classified based on the domains and applications where such images are
encountered.


• The following are some of those images that are popular.


❖ Range images
• Range images are often encountered in computer vision.
• In range images, the pixel values denote the distance between the object and the
camera. These images are also referred to as depth images.
• This is in contrast to all other images that have been discussed so far whose pixel values
denote intensity and hence are often known as intensity images.
❖ Multispectral images
• Multispectral images are encountered mostly in remote sensing applications.
• These images are taken at different bands of visible or infrared regions of the
electromagnetic wave.
• Just as a colour image has three bands, multispectral images may have many bands
that may include infrared and ultraviolet regions of the electromagnetic spectrum.

Questions:

1. Define true colour image.


2. Define Pseudocolour image.
3. List datatypes of images.
4. Define grey scale image and binary image.
5. Define indexed image.
