Lecture 1

This document provides an introduction to digital image processing. It discusses why image processing is needed, with major applications including improving images for human perception, machine vision applications, and efficient storage and transmission. Methods of image processing include noise filtering, content enhancement through contrast adjustment and deblurring, machine vision for industrial and medical applications, and video sequence processing for security and medical monitoring. Image sensing, acquisition, formation, sampling, quantization, and resolution are also introduced.

Introduction to Digital Image Processing

(CEng 442)

Prerequisite : CEng 337


Credit Hour : 4

By: Dr. Habib M.

1.1
Cont’d..

Why do we need image processing?

- It is motivated by three major applications:

1. Improvement of pictorial information for human perception
2. Image processing for autonomous machine applications
3. Efficient storage and transmission

1.2
Cont’d..

• Employ methods capable of enhancing pictorial information for human interpretation and analysis.

Typical application:
1. Noise filtering
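As a minimal sketch of noise filtering (assuming NumPy is available; the `mean_filter` helper and the 3×3 kernel size are illustrative choices, not part of the lecture), a box filter replaces each pixel with the average of its neighborhood, suppressing isolated noisy pixels:

```python
import numpy as np

def mean_filter(img, k=3):
    """Smooth a 2-D grayscale image with a k x k box (mean) filter.
    Border pixels are handled by edge replication."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    # Sum the k*k shifted copies of the image, then divide by k*k.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# A flat 5x5 patch corrupted by a single noisy pixel
noisy = np.full((5, 5), 100.0)
noisy[2, 2] = 255.0
smoothed = mean_filter(noisy, k=3)
```

The noisy spike at the center is pulled back toward its neighbors' value, while flat regions are left unchanged.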

1.3
Cont’d..

2. Content enhancement
- Contrast enhancement
- Deblurring a blurred image

Blurring of an image may be due to:
- Motion
- Lens defocus
- Camera settings
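Contrast enhancement can be illustrated with a simple linear contrast stretch, a sketch assuming NumPy (the `stretch_contrast` helper name is illustrative): the image's observed intensity range is mapped linearly onto the full 8-bit scale.

```python
import numpy as np

def stretch_contrast(img):
    """Linearly map the image's intensity range onto the full [0, 255] scale."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    return ((img.astype(float) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

# A low-contrast image occupying only the narrow range [100, 130]
low_contrast = np.array([[100, 110], [120, 130]], dtype=np.uint8)
stretched = stretch_contrast(low_contrast)
```

After stretching, the darkest pixel becomes 0 and the brightest becomes 255, so the same content uses the full dynamic range.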

1.4
Cont’d..
• Machine vision applications
Typical applications:
1. Industrial machine vision for product assembly and inspection
2. Automated target detection and tracking
3. Fingerprint recognition

1.5
Video Sequence Processing
• The major emphasis of image sequence processing is the detection of moving parts. This has various applications:
1. Detection and tracking of moving targets for security surveillance
2. Finding the trajectory of a moving target
3. Monitoring the movement of organ boundaries in medical applications
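Detection of moving parts can be sketched with simple frame differencing between two consecutive frames (assuming NumPy; the `moving_mask` helper and the threshold value are illustrative choices): pixels whose intensity changed more than a threshold are flagged as moving.

```python
import numpy as np

def moving_mask(prev, curr, thresh=25):
    """Flag pixels whose intensity changed by more than `thresh`
    between two consecutive frames (simple frame differencing)."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thresh

frame1 = np.zeros((4, 6), dtype=np.uint8)   # empty scene
frame2 = frame1.copy()
frame2[1:3, 2:4] = 200                      # a bright 2x2 "object" appears
mask = moving_mask(frame1, frame2)
```

Only the four pixels where the object appeared are flagged; the static background is suppressed.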

1.6
Cont’d..

3. Efficient storage and transmission

- Image compression
- An image usually contains a lot of redundancy that can be exploited to achieve compression:
• Pixel redundancy
• Coding redundancy
• Psychovisual redundancy

Applications:
1. Reduced storage
2. Reduced bandwidth
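Pixel redundancy can be illustrated with run-length encoding, one of the simplest compression schemes: runs of identical neighboring pixels are stored as (value, count) pairs instead of pixel by pixel (a plain-Python sketch; the helper names are illustrative, not the lecture's method).

```python
def rle_encode(pixels):
    """Run-length encode a 1-D pixel sequence as (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, c) for v, c in runs]

def rle_decode(runs):
    """Invert rle_encode, recovering the original pixel sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]   # 9 pixels
encoded = rle_encode(row)               # [(0, 4), (255, 2), (0, 3)]
```

Nine pixels shrink to three pairs, and decoding reproduces the row exactly; real codecs combine this idea with coding and psychovisual redundancy reduction.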

1.7
Cont’d..

Light is a particular type of electromagnetic radiation that can be seen and sensed by the human eye.

The visible band of the electromagnetic spectrum spans the range from approximately 0.43 µm (violet) to about 0.79 µm (red).

For convenience, the color spectrum is divided into six broad regions: violet, blue, green, yellow, orange, and red.

1.8
Cont’d..
Light that is void of color is called achromatic or monochromatic light.

The only attribute of such light is its intensity, or amount.

The term gray level generally is used to describe monochromatic intensity because it ranges from black, to grays, and finally to white.

Chromatic light spans the electromagnetic spectrum from approximately 0.43 to 0.79 µm, as noted previously.

1.9
Cont’d..

Radiance is the total amount of energy that flows from the light source, and it is usually measured in watts (W).

Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

Brightness is a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

1.10
Image Sensing and Acquisition

Incoming energy is transformed into a voltage by the combination of input electrical power and sensor material that is responsive to the particular type of energy being detected.

The illumination (incoming energy) may originate from a source of electromagnetic energy such as radar, infrared, or X-ray.

The output voltage waveform is the response of the sensor(s), and a digital quantity is obtained from each sensor by digitizing its response.

1.11
Image Acquisition Using a Single Sensor
The most familiar sensor of this type is the photodiode.

It is constructed of silicon materials, and its output voltage waveform is proportional to light.

The use of a filter in front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor favors light in the green band of the color spectrum.

As a consequence, the sensor output will be stronger for green light than for other components in the visible spectrum.
1.12
Image Acquisition Using a Sensor Array
• Used to convert a continuous image into a digital image

• Contains an array of light sensors

• Converts photons into electric charges accumulated in each sensor unit

• CCD sensors are used widely in digital cameras and other light-sensing instruments.

• The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor.
1.13
A Simple Image Formation Model

• An image is a two-dimensional function f(x,y), where (x,y) are spatial (plane) coordinates and the amplitude of f at any point (x,y) is called the intensity or gray level of the image at that point.

1.14
Cont’d..
• Spatial coordinates: (x,y) for the 2D case, such as a photograph;
(x,y,z) for the 3D case, such as CT scan images;
(x,y,t) for movies

• The function f may represent intensity (for monochrome images), color (for color images), or other associated values.

• Maximum intensity value ⇒ saturation
• Minimum intensity value ⇒ noise
• Dynamic range = max − min values ⇒ contrast

1.15
Cont’d..
• An image is represented mathematically by a function
f : D → I
where D is the image domain and I is the image range (intensity).
• The function f(x,y) may be characterized by two components:
1. The amount of source illumination incident on the scene being viewed, i(x,y).
2. The amount of illumination reflected by the objects in the scene, r(x,y).

f(x,y) = i(x,y) · r(x,y)
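The image formation model above can be sketched numerically (assuming NumPy; the particular illumination gradient and reflectance value are illustrative): the recorded image is the pointwise product of illumination and reflectance.

```python
import numpy as np

# Illumination i(x, y): bright on the left, fading to dim on the right.
i = np.linspace(1.0, 0.2, 4).reshape(1, 4).repeat(3, axis=0)

# Reflectance r(x, y) in [0, 1]: a uniformly mid-gray surface.
r = np.full((3, 4), 0.5)

# Image formation: f(x, y) = i(x, y) * r(x, y)
f = i * r
```

Even though the surface reflectance is constant, the recorded intensity falls off with the illumination, which is exactly what the product model predicts.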

1.16
Basic Concepts in Sampling & Quantization
• A digital sensor can only measure a limited number of samples at a discrete set of energy levels

• Quantization is the process of converting a continuous analogue signal into a digital representation of that signal
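Sampling and quantization can be sketched on a one-dimensional signal (assuming NumPy; the sample count and number of levels are illustrative choices): sampling picks discrete positions, and quantization maps each sampled value to one of a fixed number of levels.

```python
import numpy as np

# Sampling: take 8 discrete samples of one period of a continuous sine.
n_samples = 8
x = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
analog = (np.sin(x) + 1.0) / 2.0          # continuous values in [0, 1]

# Quantization: map each sample to one of 4 discrete levels (2 bits).
levels = 4
digital = np.floor(analog * levels).clip(0, levels - 1).astype(int)
```

The continuous amplitudes are collapsed onto the integers 0–3; more levels (more bits) would approximate the analogue signal more finely.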

1.17
Resolution: How Much Is Enough?

• The big question with resolution is always: how much is enough?
• This all depends on what is in the image and what you would like to do with it
• Key questions include:
• Does the image look aesthetically pleasing?
• Can you see what you need to see within the image?

1.18
Intensity Level Resolution(gray level)

• Intensity level resolution refers to the number of intensity levels used to represent the image
• The more intensity levels used, the finer the level of detail that can be observed in an image
• Intensity level resolution is usually given in terms of the number of bits used to store each intensity level
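The relationship between bits and levels (levels = 2^bits) can be sketched by requantizing an 8-bit grayscale ramp to a coarser bit depth (assuming NumPy; the `requantize` helper name is illustrative):

```python
import numpy as np

def requantize(img, bits):
    """Reduce an 8-bit grayscale image to `bits` bits per pixel,
    keeping the result on the original 0-255 scale."""
    levels = 2 ** bits          # number of representable intensity levels
    step = 256 // levels        # width of each quantization bin
    return (img // step) * step

ramp = np.arange(0, 256, dtype=np.uint8)   # all 256 gray levels
coarse = requantize(ramp, bits=2)          # only 4 distinct levels remain
```

Dropping from 8 bits (256 levels) to 2 bits (4 levels) makes smooth gradients break into visible bands, which is why low intensity resolution produces false contouring.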

1.19
