Digital Image Fundamentals
• The choroid lies directly below the sclera. This membrane contains a
network of blood vessels that serve as the major source of nutrition to
the eye.
• Even superficial injury to the choroid, often not deemed serious, can lead
to severe eye damage as a result of inflammation that restricts blood
flow.
• The choroid coat is heavily pigmented and hence helps to reduce the
amount of extraneous light entering the eye and the backscatter within
the optic globe.
• At its anterior extreme, the choroid is divided into the ciliary body and the
iris.
• The iris contracts or expands to control the amount of light that enters
the eye, while the ciliary body controls the shape of the lens.
• The central opening of the iris (the pupil) varies in diameter from
approximately 2 to 8 mm.
• The front of the iris contains the visible pigment of the eye, whereas the
back contains a black pigment.
• The lens is made up of concentric layers of fibrous cells and is suspended
by fibers that attach to the ciliary body.
• It contains 60 to 70% water, about 6% fat, and more protein than any
other tissue in the eye. The lens is colored by a slightly yellow
pigmentation that increases with age.
• The innermost membrane of the eye is the retina, which lines the inside of
the wall’s entire posterior portion. When the eye is properly focused, light
from an object outside the eye is imaged on the retina.
• Pattern vision is afforded by the distribution of discrete light receptors
over the surface of the retina.
• Rods give a general, overall picture of the field of view and are not
involved in color vision.
• Several rods are connected to a single nerve end, which makes them sensitive
to low levels of illumination (scotopic or dim-light vision).
2. Image Formation in the Eye:
• In an ordinary photographic camera, the lens has a fixed focal length, and
focusing at various distances is achieved by varying the distance
between the lens and the imaging plane, where the film is located.
• In the human eye, the converse is true: the distance between the lens and the
imaging region (the retina) is fixed, and the focal length needed to achieve
proper focus is obtained by varying the shape of the lens.
• Unlike an ordinary optical lens, the lens of the eye is flexible.
• The distance between the center of the lens and the retina (the focal length):
– varies from approximately 17 mm down to about 14 mm as the refractive power
of the lens goes from its minimum to its maximum.
• Objects farther away than about 3 m are focused with the lens at its minimum
refractive power; nearer objects require progressively stronger refractive power.
• Example: calculation of the size of the retinal image of an object.
– For an object 15 m high viewed from a distance of 100 m, similar triangles
give 15/100 = x/17, so the height of the retinal image is x ≈ 2.55 mm.
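A minimal Python sketch of the similar-triangles calculation above; the function
name and the use of 17 mm as the default focal length are illustrative choices,
not taken from the text.

def retinal_image_height_mm(object_height_m, object_distance_m, focal_length_mm=17.0):
    # Similar triangles: object_height / object_distance = image_height / focal_length
    return object_height_m / object_distance_m * focal_length_mm

print(retinal_image_height_mm(15, 100))  # prints 2.55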
3. Brightness Adaptation & Discrimination:
• Range of light intensity levels to which the HVS (human visual system) can
adapt: on the order of 10^10.
• For any given set of conditions, the current sensitivity level of HVS is
called the brightness adaptation level.
• The eye also discriminates between changes in brightness at any specific
adaptation level.
• Weber ratio: Ic / I (the Weber equation)
• Where Ic is the increment of illumination discriminable 50% of the time and
I is the background illumination.
• A small value of Ic/I means that a small percentage change in intensity is
discriminable. This represents “good” brightness discrimination.
• Conversely, a large value of Ic/I means that a large percentage change in
intensity is required. This represents “poor” brightness discrimination.
• At low levels of illumination brightness discrimination is poor (rods) and it
improves significantly as background illumination increases (cones).
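A small hedged sketch of the Weber-ratio idea above; the function name and the
example numbers are illustrative placeholders, not measurements from the text.

def weber_ratio(increment_ic, background_i):
    # Ic / I: in the text, Ic is the threshold increment discriminable 50% of
    # the time; smaller ratios indicate better brightness discrimination.
    return increment_ic / background_i

# The same absolute increment is a larger fraction of a dim background than of
# a bright one, so it is easier to notice against the dim background.
print(weber_ratio(2.0, 50.0))    # 0.04
print(weber_ratio(2.0, 1000.0))  # 0.002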
[Figures: perceived brightness and simultaneous contrast]
2.2 LIGHT AND THE ELECTROMAGNETIC SPECTRUM
• In 1666, Sir Isaac Newton discovered that when a beam of sunlight is passed
through a glass prism, the emerging beam of light is not white but consists
instead of a continuous spectrum of colors ranging from violet at one end to
red at the other.
• The range of colors we perceive in visible light represents a very small
portion of the electromagnetic spectrum.
• On one end of the spectrum are radio waves with wavelengths billions of
times longer than those of visible light.
• On the other end of the spectrum are gamma rays with wavelengths millions
of times smaller than those of visible light.
• The energy of a photon is E = hν, where h is Planck’s constant and ν is the
frequency.
• The units of wavelength are meters, with the terms microns and
nanometers being used just as frequently.
• Frequency is measured in Hertz (Hz), with one Hertz being equal to one
cycle of a sinusoidal wave per second.
• The unit of energy is the electron-volt (eV).
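A hedged sketch relating the quantities just listed: photon energy E = hν is
computed from wavelength and expressed in electron-volts. The constants are
standard physical values; the 550 nm wavelength is just an illustrative
visible-light choice.

PLANCK_H = 6.62607015e-34        # Planck's constant, J*s
SPEED_OF_LIGHT = 2.99792458e8    # m/s
JOULES_PER_EV = 1.602176634e-19

def photon_energy_ev(wavelength_m):
    frequency_hz = SPEED_OF_LIGHT / wavelength_m      # nu = c / lambda
    return PLANCK_H * frequency_hz / JOULES_PER_EV    # E = h*nu, converted to eV

print(photon_energy_ev(550e-9))  # green light, roughly 2.25 eV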
The electromagnetic spectrum. The visible spectrum is shown zoomed to
facilitate explanation, but note that the visible spectrum is a rather narrow
portion of the EM spectrum.
• Electromagnetic waves can be visualized as propagating sinusoidal waves
with wavelength, or they can be thought of as a stream of mass-less
particles, each traveling in a wavelike pattern and moving at the speed of
light.
• Each mass-less particle contains a certain amount (or bundle) of energy.
Each bundle of energy is called a photon.
• Three basic quantities are used to describe the quality of a chromatic light
source: radiance, luminance, and brightness.
– Radiance is the total amount of energy that flows from the light source, and
it is usually measured in watts (W).
– Luminance, measured in lumens (lm), gives a measure of the amount of energy
an observer perceives from a light source.
– Brightness is a subjective descriptor of light perception that is practically
impossible to measure.
• It is noted that these expressions are also applicable to images formed via
transmission of the illumination through a medium, such as a chest X-ray.
• In this case, we would deal with a transmissivity instead of a reflectivity
function, but the limits would be the same and the image function formed
would be modeled as the product of the illumination and transmissivity
functions, f(x, y) = i(x, y) t(x, y).
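A minimal numpy sketch of the product model just mentioned; the array values are
illustrative placeholders for an illumination field i(x, y) and a transmissivity
function t(x, y) with values in (0, 1).

import numpy as np

illumination = np.full((4, 4), 100.0)                     # i(x, y): incident illumination
transmissivity = np.linspace(0.1, 0.9, 16).reshape(4, 4)  # t(x, y), 0 < t < 1

image = illumination * transmissivity                     # f(x, y) = i(x, y) * t(x, y)
print(image)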
2.4 Image Sampling and Quantization
• Our objective is to generate digital images from sensed data.
• The basic idea behind sampling and quantization is illustrated in Fig. 2.16.
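A hedged sketch of the idea (not the exact data of Fig. 2.16): a densely
evaluated 1-D intensity profile is sampled at a small number of positions, and
each sample is then quantized to a fixed number of intensity levels. The profile
and parameter values below are illustrative.

import numpy as np

def sample_and_quantize(signal, num_samples, num_levels):
    positions = np.linspace(0, len(signal) - 1, num_samples).astype(int)
    samples = signal[positions]                           # sampling: discretize coordinates
    lo, hi = samples.min(), samples.max()
    quantized = np.round((samples - lo) / (hi - lo) * (num_levels - 1))
    return quantized.astype(np.uint8)                     # quantization: discretize amplitude

dense = 255 * np.sin(np.linspace(0, np.pi, 1000))         # stand-in continuous profile
print(sample_and_quantize(dense, num_samples=16, num_levels=8))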
Linear versus Nonlinear Operations
• A general operator H is said to be linear if, for any two inputs f1 and f2 and
any two scalars a1 and a2, H(a1 f1 + a2 f2) = a1 H(f1) + a2 H(f2).
• This indicates that the output of a linear operation due to the sum of two
inputs is the same as performing the operation on the inputs individually and
then summing the results.
• The output of a linear operation due to a constant times an input is the same
as the output of the operation due to the original input multiplied by that
constant.
• The first property is called the property of additivity and the second is
called the property of homogeneity.
• Suppose that H is the sum operator, Σ; the function of this operator is simply
to sum its inputs. To test for linearity, we start with the left side and
attempt to prove that it is equal to the right side:
Σ(a1 f1 + a2 f2) = Σ a1 f1 + Σ a2 f2 = a1 Σ f1 + a2 Σ f2
• The first step follows from the fact that summation is distributive. So an
expansion of the left side is equal to the right side, and we conclude that the
sum operator is linear (a numerical check is sketched after the max-operator
example below).
• On the other hand, consider the max operation, whose function is to find the
maximum value of the pixels in an image. For our purposes here, the simplest
way to prove that this operator is nonlinear is to find an example that fails
the linearity test.
• Consider two small images f1 and f2, and suppose that we let a1 = 1 and
a2 = −1. To test for linearity, we again start with the left side of the
linearity definition, max(a1 f1 + a2 f2), and compare it with the right side,
a1 max(f1) + a2 max(f2).
• The left and right sides are not equal in this case, so we have proved that,
in general, the max operator is nonlinear. A concrete numerical counterexample
is sketched below.
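A hedged numerical check of the two claims above; the 2×2 arrays and the scalars
are illustrative values chosen here (not quoted from the text) to show that the
sum operator passes the linearity test while the max operator fails it.

import numpy as np

f1 = np.array([[0.0, 2.0], [2.0, 3.0]])
f2 = np.array([[6.0, 5.0], [4.0, 7.0]])
a1, a2 = 1.0, -1.0

# Sum operator: both sides of H(a1*f1 + a2*f2) = a1*H(f1) + a2*H(f2) agree.
print(np.sum(a1 * f1 + a2 * f2), a1 * np.sum(f1) + a2 * np.sum(f2))   # -15.0 -15.0

# Max operator: the two sides differ, so max is nonlinear.
print(np.max(a1 * f1 + a2 * f2), a1 * np.max(f1) + a2 * np.max(f2))   # -2.0 -4.0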
3. Arithmetic operations: