
Chapter 1

Image processing applications cover a wide range of human activities, such as the following:
• Medical Applications: PET, CAT, MRI, FMRI, etc.
• Industrial Applications.
• Military Applications.
• Law Enforcement and Security.
• Consumer Electronics.
• The Internet, Particularly the World Wide Web.

What Is an Image?
• An image is a visual representation of an object, a person, or a
scene produced by an optical device such as a mirror, a lens, or a
camera.
• This representation is two dimensional (2D), although it
corresponds to one of the infinitely many projections of a real-
world, three-dimensional (3D) object or scene.

What Is a Digital Image?


• A digital image is a representation of a two-dimensional image
using a finite number of points, usually referred to as picture
elements, pels, or pixels.
• Each pixel is represented by one or more numerical values:
▪ for monochrome (grayscale) images, a single value representing the
intensity of the pixel (usually in a [0, 255] range) is enough;
▪ for color images, three values (e.g., representing the amount of red (R),
green (G), and blue (B)) are usually required.
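
As a minimal illustration of these conventions (not taken from the text itself), the short Python sketch below builds a tiny grayscale image and a tiny RGB color image as arrays of 8-bit values in the [0, 255] range; NumPy is assumed to be available purely for illustration.

```python
import numpy as np

# A 2x3 monochrome (grayscale) image: one intensity value per pixel,
# stored as unsigned 8-bit integers in the [0, 255] range.
gray = np.array([[  0, 128, 255],
                 [ 64, 192,  32]], dtype=np.uint8)

# A 2x2 color image: three values (R, G, B) per pixel.
color = np.array([[[255,   0,   0], [  0, 255,   0]],   # red,  green
                  [[  0,   0, 255], [255, 255, 255]]],  # blue, white
                 dtype=np.uint8)

print(gray.shape)   # (2, 3)    -> rows x columns
print(color.shape)  # (2, 2, 3) -> rows x columns x channels
print(gray[0, 1])   # 128       -> intensity of the pixel at row 0, column 1
```
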
What Is Digital Image Processing?
• Digital image processing can be defined as the science of
modifying digital images by means of a digital computer.
• A few remarks:
▪ Since both the images and the computers that process them are digital
in nature, we will focus exclusively on digital image processing in this
book.
▪ The changes that take place in the images are usually performed
automatically and rely on carefully designed algorithms.

Three levels of image processing operations:


• Low Level: Primitive operations (e.g., noise reduction, contrast
enhancement, etc.) where both the input and the output are images.
• Mid Level: Extraction of attributes (e.g., edges, contours, regions, etc.)
from images.
• High Level: Analysis and interpretation of the contents of a scene.
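To make the distinction concrete (this example is not from the text), the hedged Python sketch below pairs each level with a representative operation, assuming OpenCV and NumPy are available: a low-level operation maps an image to another image, a mid-level operation maps an image to attributes such as edges and contours, and a high-level step turns those attributes into a statement about the scene. The file name and the final decision rule are hypothetical.

```python
import cv2

# Hypothetical input file, read as a grayscale image.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "scene.png not found"

# Low level: image in, image out (noise reduction by Gaussian smoothing).
smoothed = cv2.GaussianBlur(img, (5, 5), 0)

# Mid level: image in, attributes out (edge map, then contours; OpenCV 4.x convention).
edges = cv2.Canny(smoothed, 100, 200)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# High level: attributes in, interpretation out (a toy decision rule).
large_objects = [c for c in contours if cv2.contourArea(c) > 500]
print(f"Scene appears to contain {len(large_objects)} sizable object(s).")
```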

EXAMPLES OF TYPICAL IMAGE PROCESSING OPERATIONS


1. Sharpening: A technique by which the edges and fine details of an image
are enhanced for human viewing.
2. Noise Removal: Image processing filters can be used to reduce the
amount of noise in an image before processing it any further.

3. Deblurring: An image may appear blurred for many reasons, ranging from
improper focusing of the lens to an insufficient shutter speed for a fast-
moving object.
4. Edge Extraction: Extracting edges from an image is a fundamental
preprocessing step used to separate objects from one another before
identifying their contents.

5. Binarization: In many image analysis applications, it is often necessary to reduce the number of gray levels in a monochrome image to simplify and speed up its interpretation. Reducing a grayscale image to only two levels of gray (black and white) is usually referred to as binarization.
6. Blurring: It is sometimes necessary to blur an image in order to minimize
the importance of texture and fine detail in a scene, for instance, in cases
where objects can be better recognized by their shape.

7. Contrast Enhancement: In order to improve an image for human viewing as well as make other image processing tasks (e.g., edge extraction) easier, it is often necessary to enhance the contrast of an image.

8. Object Segmentation and Labeling: The task of segmenting and labeling objects within a scene is a prerequisite for most object recognition and classification systems. (A combined code sketch for several of these operations follows this list.)
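
For concreteness, the sketch below strings several of these operations together in Python with OpenCV and NumPy (assumed available; the input file name is hypothetical). Each call is just one common way of realizing the corresponding operation, not the specific method the text has in mind; deblurring is omitted because it normally requires an explicit model of the blur.

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
assert img is not None, "input.png not found"

# 1. Sharpening: emphasize edges and fine detail with a high-boost kernel.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)
sharpened = cv2.filter2D(img, -1, kernel)

# 2. Noise removal: a median filter suppresses impulsive (salt-and-pepper) noise.
denoised = cv2.medianBlur(img, 3)

# 3. Deblurring is omitted here; it typically requires modeling the blur (e.g., deconvolution).

# 4. Edge extraction: Canny produces a binary edge map.
edges = cv2.Canny(denoised, 100, 200)

# 5. Binarization: reduce the grayscale image to two levels (Otsu picks the threshold).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 6. Blurring: Gaussian smoothing de-emphasizes texture and fine detail.
blurred = cv2.GaussianBlur(img, (7, 7), 0)

# 7. Contrast enhancement: histogram equalization spreads out the gray levels.
enhanced = cv2.equalizeHist(img)

# 8. Object segmentation and labeling: label connected foreground components.
num_labels, labels = cv2.connectedComponents(binary)
print(f"Found {num_labels - 1} labeled objects (label 0 is the background).")
```
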
COMPONENTS OF A DIGITAL IMAGE PROCESSING SYSTEM
➢ Hardware:
• Acquisition Devices: Responsible for capturing and digitizing images
or video sequences. Examples: scanners, cameras, and camcorders.
• Processing Equipment: Responsible for running software that allows
the processing and analysis of acquired images. Example: computer.
• Display and Hardcopy Devices: Responsible for showing the image
contents for human viewing. Examples include color monitors and
printers.
• Storage Devices: Magnetic or optical disks responsible for long-term
storage of the images.
➢ Software:
• MATLAB and its toolboxes.
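The text names MATLAB and its toolboxes as the software component; purely as an illustration of how the hardware and software pieces cooperate, the hedged Python/OpenCV sketch below walks through the acquire, process, display, and store chain. The file names are hypothetical, and a real acquisition device would replace the file read.

```python
import cv2

# Acquisition: load a previously captured image (a camera would be another source).
img = cv2.imread("captured_frame.png")       # hypothetical file from an acquisition device
assert img is not None, "captured_frame.png not found"

# Processing: run some enhancement on the acquired image.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
enhanced = cv2.equalizeHist(gray)

# Display: show the result for human viewing.
cv2.imshow("Enhanced image", enhanced)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Storage: write the result to disk for long-term keeping.
cv2.imwrite("enhanced_frame.png", enhanced)
```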

MACHINE VISION SYSTEMS


• The problem domain, in this case, is the automatic recognition of license plates. The goal is to extract the alphanumeric contents of the license plate of a vehicle passing through the toll booth in an automated and unsupervised way, that is, without the need for human intervention.
• The acquisition block: in charge of acquiring one or more images containing a front or rear view of the vehicle that includes its license plate.
• The preprocessing stage: its goal is to improve the quality of the acquired image.
• The segmentation block: responsible for partitioning an image into its main components, relevant foreground objects and background.
• The feature extraction block (also known as representation and description): consists of algorithms responsible for encoding the image contents in a concise and descriptive way.
• Classification (a skeleton of this pipeline is sketched below).
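To show how these blocks fit together, here is a hedged skeleton of such a pipeline in Python with OpenCV. The stage functions and the input file name are hypothetical placeholders for the algorithms each block would contain; in particular, the classification step is left as a stub rather than a real character recognizer.

```python
import cv2

def preprocess(image):
    """Preprocessing block: improve the quality of the acquired image (hypothetical steps)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (3, 3), 0)

def segment(image):
    """Segmentation block: separate relevant foreground (plate/characters) from background."""
    _, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def extract_features(binary):
    """Feature extraction block: encode the contents concisely (here, contour bounding boxes)."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

def classify(features):
    """Classification block: map features to alphanumeric labels (left as a placeholder)."""
    return ["?" for _ in features]   # a real system would use a trained classifier here

# Acquisition block: one image containing a view of the vehicle (hypothetical file name).
frame = cv2.imread("toll_booth_frame.png")
assert frame is not None, "toll_booth_frame.png not found"
plate_text = classify(extract_features(segment(preprocess(frame))))
print("".join(plate_text))
```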

Why is it so hard to emulate the performance of the human visual system (HVS) using cameras and computers?
• The HVS can rely on a very large database of images and
associated concepts that have been captured, processed, and
recorded during a lifetime.
• The very high speed at which the HVS makes decisions based
on visual input.
• The remarkable ability of the HVS to work under a wide range
of conditions, from deficient lighting to less-than-ideal
perspectives for viewing a 3D object.
▪ Most machine vision systems (MVSs) must impose numerous constraints on the operating conditions of the scene to improve their chances of success.

Important level 3 (IT, CS)


Made by Abdulrahman Salah Eldin
