
UNIT-3

IMAGE ENHANCEMENT

Introduction:

Image enhancement approaches fall into two broad categories: spatial domain methods and frequency domain methods. The term spatial domain refers to the image plane itself, and approaches in this category are based on direct manipulation of pixels in an image.

Frequency domain processing techniques are based on modifying the Fourier transform of an image. Enhancing an image provides better contrast and more visible detail compared to the non-enhanced image. Image enhancement has many useful applications: it is used to enhance medical images, images captured in remote sensing, satellite images, etc.
As indicated previously, the term spatial domain refers to the aggregate of pixels composing an image. Spatial domain methods are procedures that operate directly on these pixels. Spatial domain processes will be denoted by the expression
g(x,y)=T[f(x,y)]
where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f,
defined over some neighborhood of (x, y).

The principal approach in defining a neighborhood about a point (x, y) is to use a square or rectangular subimage area centered at (x, y), as Fig. 2.1 shows.
The center of the subimage is moved from pixel to pixel, starting, say, at the top left corner. The operator T is applied at each location (x, y) to yield the output, g, at that location. The process utilizes only the pixels in the area of the image spanned by the neighborhood.

The simplest form of T is when the neighborhood is of size 1*1 (that is, a single pixel). In this case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called an intensity or mapping) transformation function of the form
s=T(r)
where r is the gray level of the input pixel and s is the gray level of the corresponding output pixel. T is a transformation function that maps each value of r to a value of s.
For example, if T(r) has the form shown in Fig. 2.2(a), the effect of this transformation would be to produce an image of higher contrast than the original by darkening the levels below m and brightening the levels above m in the original image. In this technique, known as contrast stretching, the values of r below m are compressed by the transformation function into a narrow range of s, toward black. The opposite effect takes place for values of r above m.

In the limiting case shown in Fig. 2.2(b), T(r) produces a two-level (binary) image. A mapping of this form is called a thresholding function.
One of the principal approaches in this formulation is based on the use of so-called masks (also referred to as filters, kernels, templates, or windows). Basically, a mask is a small (say, 3*3) 2-D array, such as the one shown in Fig. 2.1, in which the values of the mask coefficients determine the nature of the process, such as image sharpening. Enhancement techniques based on this type of approach often are referred to as mask processing or filtering.

Fig. 2.2 Gray level transformation functions for contrast enhancement

Image enhancement can be done through gray level transformations, which are discussed below.

BASIC GRAY LEVEL TRANSFORMATIONS:
1. Image negative
2. Log transformations
3. Power-law transformations
4. Piecewise-linear transformation functions
LINEAR TRANSFORMATION:
First, we will look at the linear transformation. Linear transformation includes the simple identity and negative transformations. The identity transformation has been discussed in our tutorial of image transformation, but a brief description of this transformation is given here.
The identity transition is shown by a straight line. In this transition, each value of the input image is directly mapped to the same value of the output image. That results in identical input and output images, and hence it is called the identity transformation. It has been shown below:

Fig. Linear transformation between input and output


NEGATIVE TRANSFORMATION:
The second linear transformation is the negative transformation, which is the inverse of the identity transformation. In the negative transformation, each value of the input image is subtracted from L-1 and mapped onto the output image.

IMAGE NEGATIVE:
The image negative with gray level values in the range [0, L-1] is obtained by the negative transformation given by S = T(r), or
S = L - 1 - r
where r = gray level value at pixel (x, y)
and L is the number of gray levels in the image.
It results in a photographic negative. It is useful for enhancing white details embedded in dark regions of the image.
The overall graph of these transitions is shown below.

Fig. Some basic gray-level transformation functions used for image enhancement (input gray level r vs. output gray level s)

In this case the following transition has been applied:
S = (L - 1) - r
Since the input image of Einstein is an 8 bpp image, the number of levels in this image is 256. Putting 256 into the equation, we get
S = 255 - r
So, each value is subtracted from 255 and the resulting image is shown above. What happens is that the lighter pixels become dark and the darker pixels become light, and the result is the image negative.
It has been shown in the graph below.

Fig. Negative transformation
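As a concrete illustration of the negative transformation S = (L-1) - r, here is a minimal NumPy sketch, assuming the image is an 8-bit grayscale array (L = 256); the function name and the tiny test image are illustrative, not from the text.

import numpy as np

def image_negative(img, L=256):
    # s = (L - 1) - r, applied to every pixel of an L-level grayscale image
    return (L - 1 - img.astype(np.int32)).astype(np.uint8)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(image_negative(img))   # [[255 191]
                             #  [127   0]]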


LOGARITHMIC TRANSFORMATIONS:
Logarithmic transformation further contains two types of transformation: the log transformation and the inverse log transformation.

LOG TRANSFORMATIONS:
The log transformation can be defined by the formula
S = c log(1 + r)
where S and r are the pixel values of the output and input images and c is a constant. The value 1 is added to each pixel value of the input image because if there is a pixel intensity of 0 in the image, log(0) is undefined. So, 1 is added to make the minimum argument of the logarithm at least 1.
During log transformation, the dark pixels in an image are expanded compared to the higher pixel values. The higher pixel values are compressed by the log transformation. This results in the following image enhancement.
ANOTHER WAY TO REPRESENT LOG TRANSFORMATIONS:
Enhance details in the darker regions of an image at the expense of detail in brighter regions.
T(r) = C * log(1 + r), where C is a constant and r ≥ 0

The shape of the curve shows that this transformation maps the narrow range of low gray-level values in the input image into a wider range of output levels.
The opposite is true for high-level values of the input image.

Fig. Log Transformation Curve input vs output
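The log mapping above can be sketched in NumPy as follows; choosing c = 255 / log(1 + max(r)) so the output fills the 8-bit range is a common convention assumed here, not something specified in the text.

import numpy as np

def log_transform(img):
    # s = c * log(1 + r); c scales the result back to the [0, 255] display range
    r = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + max(r.max(), 1.0))
    return (c * np.log(1.0 + r)).astype(np.uint8)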

POWER-LAW TRANSFORMATIONS:
Power-law transformations include the nth power and nth root transformations. These transformations can be given by the expression:
S = C r^γ
The symbol γ is called gamma, due to which this transformation is also known as the gamma transformation.

Variation in the value of γ varies the enhancement of the image. Different display devices/monitors have their own gamma correction, which is why they display their images at different intensities. Here C and γ are positive constants. Sometimes Eq. (6) is written as
S = C (r + ε)^γ
to account for an offset (that is, a measurable output when the input is zero). Plots of S versus r for various values of γ are shown in Fig. 2.10. As in the case of the log transformation, power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels. Unlike the log function, however, we notice here a family of possible transformation curves obtained simply by varying γ.

Note in the figure that curves generated with values of γ > 1 have exactly the opposite effect as those generated with values of γ < 1. Finally, we note that Eq. (6) reduces to the identity transformation when C = γ = 1.

Fig. 2.13 Plot of the equation S = C r^γ for various values of γ (C = 1 in all cases)

This type of transformation is used for enhancing images for different types of display devices. The gamma of different display devices is different. For example, the gamma of a CRT lies between 1.8 and 2.5, which means the image displayed on a CRT is dark.
Varying gamma (γ) yields a family of possible transformation curves
S = C * r^γ
Here C and γ are positive constants. In the plot of S versus r for various values of γ: γ > 1 compresses dark values and expands bright values, while γ < 1 (similar to the log transformation) expands dark values and compresses bright values. When C = γ = 1, it reduces to the identity transformation.

CORRECTING GAMMA:
S = C r^γ
S = C r^(1/2.5)
The same image with different gamma values is shown here; applying the transformation with γ = 1/2.5 pre-compensates for a display whose gamma is 2.5.
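A minimal sketch of the power-law (gamma) transformation, assuming an 8-bit input normalized to [0, 1] before applying S = C r^γ; applying γ = 1/2.5 corresponds to the gamma-correction example above.

import numpy as np

def gamma_transform(img, gamma, C=1.0):
    # s = C * r**gamma on a [0, 1]-normalized copy of the 8-bit image
    r = img.astype(np.float64) / 255.0
    s = C * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

# corrected = gamma_transform(img, 1.0 / 2.5)   # gamma correction for a 2.5-gamma display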
PIECEWISE-LINEAR TRANSFORMATION FUNCTIONS:
A complementary approach to the methods discussed in the previous three sections is to use
piecewise linear functions. The principal advantage of piecewise linear functions over the types
of functions we have discussed thus far is that the form of piecewise functions can be arbitrarily
complex. The principal disadvantage of piecewise functions is that their specification requires
considerably more user input.

Contrast Stretching: One of the simplest piecewise-linear functions is a contrast-stretching transformation. Low-contrast images can result from poor illumination, lack of dynamic range in the imaging sensor, or even the wrong setting of a lens aperture during image acquisition.
S=T(r)
Figure x(a) shows a typical transformation used for contrast stretching. The locations of points (r1, s1) and (r2, s2) control the shape of the transformation function. If r1 = s1 and r2 = s2, the transformation is a linear function that produces no change in gray levels. If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a thresholding function that creates a binary image, as illustrated in Fig. 2.2(b).

Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray levels of the output image, thus affecting its contrast. In general, r1 ≤ r2 and s1 ≤ s2 is assumed so that the function is single-valued and monotonically increasing.

Fig. x Contrast stretching.
(a) Form of transformation function
(b) A low-contrast image
(c) Result of contrast stretching
(d) Result of thresholding

Figure x(b) shows an 8-bit image with low contrast.
Fig. x(c) shows the result of contrast stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].
Finally, Fig. x(d) shows the result of using the thresholding function defined previously, with r1 = r2 = m, the mean gray level in the image. The original image on which these results are based is a scanning electron image of pollen, magnified approximately 700 times.
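A sketch of the piecewise-linear contrast-stretching function through (r1, s1) and (r2, s2), assuming 0 < r1 < r2 < L-1 so none of the segment slopes divide by zero; full-range stretching corresponds to (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1).

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # three linear segments: [0, r1] -> [0, s1], [r1, r2] -> [s1, s2], [r2, L-1] -> [s2, L-1]
    r = img.astype(np.float64)
    out = np.empty_like(r)
    low, mid, high = r <= r1, (r > r1) & (r <= r2), r > r2
    out[low] = s1 * r[low] / r1
    out[mid] = s1 + (s2 - s1) * (r[mid] - r1) / (r2 - r1)
    out[high] = s2 + (L - 1 - s2) * (r[high] - r2) / (L - 1 - r2)
    return np.clip(out, 0, L - 1).astype(np.uint8)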

Intensity-level Slicing:

Highlighting a specific range of gray levels in an image is often desired. Applications include enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray images. There are several ways of doing level slicing, but most of them are variations of two basic themes. One approach is to display a high value for all gray levels in the range of interest and a low value for all other gray levels.

This transformation, shown in Fig. y(a), produces a binary image. The second approach, based on the transformation shown in Fig. y(b), brightens the desired range of gray levels but preserves the background and gray-level tonalities in the image. Figure y(c) shows a gray-scale image, and Fig. y(d) shows the result of using the transformation in Fig. y(a). Variations of the two transformations shown in the figure are easy to formulate.

Fig. y (a) This transformation highlights range [A, B] of gray levels and reduces all others to a constant level.
(b) This transformation highlights range [A, B] but preserves all other levels.
(c) An image. (d) Result of using the transformation in (a).
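The two intensity-level slicing approaches described above can be sketched as follows (NumPy, 8-bit image assumed; the highlight value 255 and the range [A, B] are illustrative parameters, not values from the text).

import numpy as np

def slice_binary(img, A, B, high=255, low=0):
    # approach 1: high value inside [A, B], low value everywhere else (binary output)
    return np.where((img >= A) & (img <= B), high, low).astype(np.uint8)

def slice_preserve(img, A, B, high=255):
    # approach 2: brighten [A, B] but keep all other gray levels unchanged
    out = img.copy()
    out[(img >= A) & (img <= B)] = high
    return out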
BIT-PLANE SLICING:
Instead of highlighting gray-level ranges, highlighting the contribution made to total image appearance by specific bits might be desired. Suppose that each pixel in an image is represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit-plane 0 for the least significant bit to bit-plane 7 for the most significant bit. In terms of 8-bit bytes, plane 0 contains all the lowest-order bits in the bytes comprising the pixels in the image and plane 7 contains all the high-order bits.

Figure 3.12 illustrates these ideas, and Fig. 3.14 shows the various bit planes for the image shown in Fig. 3.13. Note that the higher-order bits (especially the top four) contain the majority of the visually significant data. The other bit planes contribute to more subtle details in the image. Separating a digital image into its bit planes is useful for analyzing the relative importance played by each bit of the image, a process that aids in determining the adequacy of the number of bits used to quantize each pixel.

In terms of bit-plane extraction for an 8-bit image, it is not difficult to show that the (binary) image for bit-plane 7 can be obtained by processing the input image with a thresholding gray-level transformation function that (1) maps all levels in the image between 0 and 127 to one level (for example, 0); and (2) maps all levels between 128 and 255 to another (for example, 255). The binary image for bit-plane 7 in Fig. 3.14 was obtained in just this manner. It is left as an exercise (Problem 3.3) to obtain the gray-level transformation functions that would yield the other bit planes.
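A small sketch of bit-plane extraction for an 8-bit image; the second function shows the equivalent thresholding view of plane 7 described above.

import numpy as np

def bit_plane(img, k):
    # extract bit-plane k (0 = least significant, 7 = most significant), scaled to 0/255 for display
    return ((img >> k) & 1).astype(np.uint8) * 255

def plane7_by_threshold(img):
    # plane 7 via thresholding: levels 0..127 -> 0, levels 128..255 -> 255
    return np.where(img >= 128, 255, 0).astype(np.uint8)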

HISTOGRAM PROCESSING:
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function of the form
H(rk) = nk

where rk is the kth gray level and nk is the number of pixels in the image having the level rk.
A normalized histogram is given by the equation
P(rk) = nk / n for k = 0, 1, 2, ..., L-1
P(rk) gives an estimate of the probability of occurrence of gray level rk. The sum of all components of a normalized histogram is equal to 1. Histogram plots are simply plots of H(rk) = nk versus rk.
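A minimal sketch of computing H(rk) = nk and the normalized histogram P(rk) = nk/n with NumPy, assuming an 8-bit grayscale image (L = 256).

import numpy as np

def gray_histogram(img, L=256):
    nk = np.bincount(img.ravel(), minlength=L)   # nk: number of pixels with gray level rk
    p = nk / img.size                            # P(rk) = nk / n, sums to 1
    return nk, p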

In a dark image, the components of the histogram are concentrated on the low (dark) side of the gray scale. In the case of a bright image, the histogram components are biased towards the high side of the gray scale. The histogram of a low-contrast image will be narrow and will be centered towards the middle of the gray scale.

The components of the histogram in a high-contrast image cover a broad range of the gray scale. The net effect of this is an image that shows a great deal of gray-level detail and has a high dynamic range.

HISTOGRAM EQUALIZATION:
Histogram equalization is a common technique for enhancing the appearance of images. Suppose we have an image which is predominantly dark. Then its histogram would be skewed towards the lower end of the grey scale and all the image detail is compressed into the dark end of the histogram. If we could stretch out the grey levels at the dark end to produce a more uniformly distributed histogram, then the image would become much clearer.

Let r be a continuous variable representing the gray levels of the image to be enhanced. The range of r is [0, 1], with r = 0 representing black and r = 1 representing white. The transformation function is of the form
S = T(r), where 0 ≤ r ≤ 1
It produces a level s for every pixel value r in the original image.
The transformation function is assumed to fulfill two conditions: (a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1; and (b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1. The transformation function should be single-valued so that the inverse transformation exists. The monotonically increasing condition preserves the increasing order from black to white in the output image. The second condition guarantees that the output gray levels will be in the same range as the input levels. The gray levels of the image may be viewed as random variables in the interval [0, 1]. The most fundamental descriptor of a random variable is its probability density function (PDF). Let Pr(r) and Ps(s) denote the probability density functions of random variables r and s, respectively. A basic result from elementary probability theory states that if Pr(r) and T(r) are known and T⁻¹(s) satisfies condition (a), then the probability density function Ps(s) of the transformed variable is given by the formula

Ps(s) = Pr(r) |dr/ds|

Thus, the PDF of the transformed variable s is determined by the gray-level PDF of the input image and by the chosen transformation function. A transformation function of particular importance in image processing is

s = T(r) = (L-1) ∫[0..r] Pr(w) dw

This is the cumulative distribution function (CDF) of r. L is the total number of possible gray levels in the image.
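The discrete counterpart of this CDF-based transformation, sk = (L-1) * sum over j ≤ k of Pr(rj), can be sketched in NumPy as below; this is a standard formulation assumed here as an illustration rather than quoted from the text.

import numpy as np

def histogram_equalize(img, L=256):
    p = np.bincount(img.ravel(), minlength=L) / img.size   # p_r(r_k)
    cdf = np.cumsum(p)                                      # cumulative distribution function
    mapping = np.round((L - 1) * cdf).astype(np.uint8)      # s_k = T(r_k)
    return mapping[img]                                     # apply s = T(r) to every pixel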
HISTOGRAM MATCHING (SPECIFICATION)
The method used to generate images that have a specified histogram is called histogram matching or histogram specification. Consider for a moment continuous intensities r and z which, as before, we treat as random variables with PDFs pr(r) and pz(z), respectively. Here, r and z denote the intensity levels of the input and output (processed) images, respectively. We can estimate pr(r) from the given input image, and pz(z) is the specified PDF that we wish the output image to have. Let s be a random variable with the property

s = T(r) = (L-1) ∫[0..r] pr(w) dw
LOCAL HISTOGRAM PROCESSING

• Global histogram processing: pixels are modified by a transformation function based on the gray-level content of an entire image.
• Sometimes we want to enhance the details over a small area.
• Solution: the transformation should be based on the gray-level distribution in the neighborhood of every pixel.
• Local histogram processing (see the sketch below): at each location, the histogram of the points in the neighborhood is computed and a histogram equalization or histogram specification transformation function is obtained; the gray level of the pixel centered in the neighborhood is mapped; the center of the neighborhood is then moved to the next pixel and the procedure is repeated.
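The procedure in the last bullet can be written directly as a deliberately slow, reference-style sketch: equalize each pixel using the histogram of its neighborhood. The window size and padding mode are illustrative assumptions.

import numpy as np

def local_histogram_equalize(img, size=3, L=256):
    pad = size // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + size, x:x + size]          # neighborhood of the current pixel
            cdf = np.cumsum(np.bincount(window.ravel(), minlength=L)) / window.size
            out[y, x] = np.round((L - 1) * cdf[img[y, x]])   # map only the center pixel
    return out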

IMAGE RESTORATION:
Restoration improves an image in some predefined sense. It is an objective process. Restoration attempts to reconstruct an image that has been degraded by using a priori knowledge of the degradation phenomenon. These techniques are oriented toward modeling the degradation and then applying the inverse process in order to recover the original image.
Restoration techniques are based on mathematical or probabilistic models of image processing. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result. Image restoration refers to a class of methods that aim to remove or reduce the degradations that have occurred while the digital image was being obtained. All natural images when displayed have gone through some sort of degradation:
1. During display mode
2. Acquisition mode, or
3. Processing mode
4. Sensor noise
5. Blur due to camera misfocus
6. Relative object-camera motion
7. Random atmospheric turbulence
8. Others

DEGRADATION MODEL:
The degradation process is modeled as a degradation function that, together with an additive noise term, operates on an input image. The input image is represented by the notation f(x, y), and the noise term can be represented as η(x, y). These two terms, when combined, give the result g(x, y). If we are given g(x, y), some knowledge about the degradation function H, and some knowledge about the additive noise term η(x, y), the objective of restoration is to obtain an estimate f'(x, y) of the original image. We want the estimate to be as close as possible to the original image: the more we know about H and η, the closer f'(x, y) will be to f(x, y). If the degradation is a linear, position-invariant process, then the degraded image is given in the spatial domain by
g(x, y) = f(x, y) * h(x, y) + η(x, y)
where h(x, y) is the spatial representation of the degradation function and the symbol * represents convolution. In the frequency domain we may write this equation as
G(u, v) = F(u, v) H(u, v) + N(u, v)
The terms in capital letters are the Fourier transforms of the corresponding terms in the spatial domain.
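A sketch of the linear, position-invariant degradation model g = f * h + η evaluated in the frequency domain as G = F H + N; it assumes the point-spread function h has already been zero-padded to the image size with its origin at (0, 0), and uses Gaussian noise purely as an example.

import numpy as np

def degrade(f, h, noise_sigma=10.0, seed=0):
    # G(u, v) = F(u, v) H(u, v), then add the noise term eta(x, y) in the spatial domain
    F = np.fft.fft2(f)
    H = np.fft.fft2(h)                                   # h assumed same shape as f, origin at (0, 0)
    eta = np.random.default_rng(seed).normal(0.0, noise_sigma, f.shape)
    g = np.real(np.fft.ifft2(F * H)) + eta
    return g, H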

Fig: A model of the image Degradation/Restoration Process

NOISE MODELS:

The principal sources of noise in digital images arise during image acquisition and/or transmission. The performance of imaging sensors is affected by a variety of factors, such as environmental conditions during image acquisition and the quality of the sensing elements themselves. Images are corrupted during transmission principally due to interference in the channels used for transmission. Since the main sources of noise in digital images result from atmospheric disturbance and image sensor circuitry, the following assumptions can be made: the noise model is spatially invariant (independent of spatial location), and the noise model is uncorrelated with the object function.

Gaussian Noise:

Gaussian noise models are used frequently in practice because of their tractability in both the spatial and frequency domains. The PDF of a Gaussian random variable z is

p(z) = (1 / (√(2π) σ)) e^( -(z - μ)² / (2σ²) )

where z represents gray level, μ is the mean (average) value of z, and σ is its standard deviation.
Rayleigh Noise:
Unlike the Gaussian distribution, the Rayleigh distribution is not symmetric. It is given by the formula

p(z) = (2/b)(z - a) e^( -(z - a)² / b ) for z ≥ a, and p(z) = 0 for z < a.

The mean and variance of this density are
μ = a + √(πb/4) and σ² = b(4 - π)/4

Gamma Noise:
The PDF of Erlang (gamma) noise is given by

p(z) = (a^b z^(b-1) / (b-1)!) e^(-az) for z ≥ 0, and p(z) = 0 for z < 0, where a > 0 and b is a positive integer.

The mean and variance of this density are given by
μ = b/a and σ² = b/a²

Its shape is similar to the Rayleigh distribution. This equation is often referred to as the gamma density; strictly speaking, that is correct only when the denominator is the gamma function Γ(b).

Exponential Noise:
The exponential distribution has an exponential shape. The PDF of exponential noise is given as

p(z) = a e^(-az) for z ≥ 0, and p(z) = 0 for z < 0, where a > 0.

The mean and variance of this density are given by
μ = 1/a and σ² = 1/a²

Uniform Noise:
The PDF of uniform noise is given by

p(z) = 1/(b - a) for a ≤ z ≤ b, and p(z) = 0 otherwise.

The mean and variance of this noise are
μ = (a + b)/2 and σ² = (b - a)²/12

Impulse (Salt & Pepper) Noise:

Impulse noise appears as isolated bright and dark dots scattered over the image. The PDF of bipolar (impulse) noise is given by

p(z) = Pa for z = a, p(z) = Pb for z = b, and p(z) = 0 otherwise.

If b > a, gray level b will appear as a light dot in the image, and level a will appear as a dark dot.
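Two of the noise models above, sketched as NumPy routines that corrupt an 8-bit image: additive Gaussian noise and bipolar impulse (salt-and-pepper) noise with probabilities Pa and Pb. The parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, mu=0.0, sigma=20.0):
    # additive Gaussian noise with mean mu and standard deviation sigma
    noisy = img.astype(np.float64) + rng.normal(mu, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_impulse_noise(img, Pa=0.05, Pb=0.05, a=0, b=255):
    # pepper (level a) with probability Pa, salt (level b) with probability Pb
    noisy = img.copy()
    u = rng.random(img.shape)
    noisy[u < Pa] = a
    noisy[(u >= Pa) & (u < Pa + Pb)] = b
    return noisy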
Inverse Filtering:

The simplest approach to restoration is direct inverse filtering, where we compute an estimate of the transform of the original image simply by dividing the transform of the degraded image G(u, v) by the degradation function H(u, v):

F'(u, v) = G(u, v) / H(u, v)

We know that

G(u, v) = F(u, v) H(u, v) + N(u, v)

Therefore

F'(u, v) = F(u, v) + N(u, v) / H(u, v)

From the above equation we observe that we cannot recover the undegraded image exactly, because N(u, v) is a random function whose Fourier transform is not known. Moreover, if H(u, v) has zero or very small values, the term N(u, v)/H(u, v) dominates the estimate. One approach to get around the zero or small-value problem is to limit the filter frequencies to values near the origin. We know that H(0, 0) is equal to the average value of h(x, y). By limiting the analysis to frequencies near the origin, we reduce the probability of encountering zero values.
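A sketch of direct inverse filtering, F' = G / H, with the two practical safeguards discussed above: a small epsilon to avoid dividing by (near-)zero values of H, and an optional cutoff that keeps only frequencies near the origin. The cutoff radius is expressed in normalized frequency units (0 to 0.5), an implementation choice rather than something from the text.

import numpy as np

def inverse_filter(g, H, radius=None, eps=1e-3):
    G = np.fft.fft2(g)
    H_safe = np.where(np.abs(H) < eps, eps, H)          # avoid division by zero / tiny values
    F_hat = G / H_safe                                   # F'(u, v) = G(u, v) / H(u, v)
    if radius is not None:
        u = np.fft.fftfreq(g.shape[0])[:, None]
        v = np.fft.fftfreq(g.shape[1])[None, :]
        F_hat[np.sqrt(u**2 + v**2) > radius] = 0.0       # keep only frequencies near the origin
    return np.real(np.fft.ifft2(F_hat))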
Image Reconstruction from Projections

Consider Fig. 5.32(a), which consists of a single object on a uniform background. In order to bring physical meaning to the following explanation, suppose that this image is a cross-section of a 3-D region of a human body, as Fig. 5.32(b) shows, and assume that the energy of the beam is absorbed more by the object than by the background, as typically is the case. Using a strip of X-ray absorption detectors on the other side of the region will yield the signal (absorption profile) shown, whose amplitude (intensity) is proportional to absorption.
The process of backprojecting a 1-D signal across a 2-D area sometimes is referred to as smearing the projection back across the area, as Fig. 5.32(c) shows. In terms of digital images, this means duplicating the same 1-D signal across the image, perpendicularly to the direction of the beam, as in Fig. 5.32(d). Repeating the procedure explained in the previous paragraph yields a backprojection image in the vertical direction, as Fig. 5.32(e) shows. We continue the reconstruction by adding this result to the previous backprojection, resulting in Fig. 5.32(f). Now, we begin to suspect that the object of interest is contained in the square shown, whose amplitude is twice the amplitude of the individual backprojections, because the signals were added.
Note from Figs. 5.32 and 5.33 that backprojections 180° apart are mirror images of each other, so we only have to consider angle increments halfway around a circle in order to generate all the backprojections required for reconstruction.

PRINCIPLES OF X-RAY COMPUTED TOMOGRAPHY (CT)


First-generation (G1) CT scanners employ a "pencil" X-ray beam and a single detector, as Fig. 5.35(a) shows. For a given angle of rotation, the source/detector pair is translated incrementally along the linear direction shown. A projection (like the ones in Fig. 5.32) is generated by measuring the output of the detector at each increment of translation. After a complete linear translation, the source/detector assembly is rotated and the procedure is repeated to generate another projection at a different angle.
Second-generation (G2) CT scanners [Fig. 5.35(b)] operate on the same principle as G1 scanners, but the beam used is in the shape of a fan. This allows the use of multiple detectors, thus requiring fewer translations of the source/detector pair.

Third-generation (G3) scanners are a significant improvement over the earlier two generations of CT geometries. As Fig. 5.35(c) shows, G3 scanners employ a bank of detectors long enough (on the order of 1000 individual detectors) to cover the entire field of view of a wider beam.

Fourth-generation (G4) scanners go a step further. By employing a circular ring of detectors (on the order of 5000 individual detectors), only the source has to rotate. The key advantage of G3 and G4 scanners is speed; the key disadvantages are cost and greater X-ray scatter. The latter implies higher X-ray doses than G1 and G2 scanners to achieve comparable signal-to-noise characteristics.
Fifth-generation (G5) CT scanners, also known as electron beam computed tomography (EBCT) scanners, eliminate all mechanical motion by employing electron beams controlled electromagnetically.

In sixth-generation (G6) CT, a G3 or G4 scanner is configured using so-called slip rings that eliminate the need for electrical and signal cabling between the source/detectors and the processing unit.
Seventh-generation (G7) scanners (also called multislice CT scanners) are emerging, in which "thick" fan beams are used in conjunction with parallel banks of detectors to collect volumetric CT data simultaneously. That is, 3-D cross-sectional "slabs," rather than single cross-sectional images, are generated per X-ray burst.

PROJECTIONS AND THE RADON TRANSFORM

Next, we develop in detail the mathematics needed for image reconstruction in the context of X-ray computed tomography. The same basic principles apply to other CT imaging modalities, such as SPECT (single-photon emission computed tomography), PET (positron emission tomography), MRI (magnetic resonance imaging), and some modalities of ultrasound imaging.
A straight line in Cartesian coordinates can be described either by its slope-intercept form, y = ax + b, or, as in Fig. 5.36, by its normal representation:

x cos θ + y sin θ = ρ

Summing the pixel values along such a line for all values of ρ required to span the M × N area (with θ fixed) yields one projection. Changing θ and repeating this procedure yields another projection, and so forth. This is precisely how the projections in Figs. 5.32-5.34 were generated.

When the Radon transform, g(ρ, θ), is displayed as an image with ρ and θ as rectilinear coordinates, the result is called a sinogram, similar in concept to displaying the Fourier spectrum. Like the Fourier transform, a sinogram contains the data necessary to reconstruct f(x, y). Unlike the Fourier transform, however, g(ρ, θ) is always a real function. As is the case with displays of the Fourier spectrum, sinograms can be readily interpreted for simple regions, but become increasingly difficult to "read" as the region being projected becomes more complex.
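A sinogram can be approximated numerically by rotating the image and summing along one axis, one projection per angle. This sketch uses scipy.ndimage.rotate and assumes a square image with the object roughly centered; it is an illustration, not the formal Radon integral.

import numpy as np
from scipy.ndimage import rotate

def sinogram(image, angles_deg):
    # one row per angle theta: the column sums of the rotated image form g(rho, theta)
    projections = []
    for theta in angles_deg:
        rotated = rotate(image, -theta, reshape=False, order=1)
        projections.append(rotated.sum(axis=0))          # integrate along the beam direction
    return np.array(projections)                         # shape: (num_angles, num_detectors)

# Projections 180 degrees apart are mirror images, so angles need only span [0, 180),
# e.g. sinogram(phantom, np.arange(0.0, 180.0, 1.0)) for a hypothetical image "phantom".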

BACKPROJECTIONS
To obtain a formal expression for a backprojected image from the Radon transform, let us begin with a single point, g(ρj, θk), of the complete projection, g(ρ, θk), for a fixed value of rotation, θk (see Fig. 5.37). Forming part of an image by backprojecting this single point is nothing more than copying the line L(ρj, θk) onto the image. Repeating this for all values of ρj gives the image due to backprojecting the projection obtained at the fixed angle θk, as in Fig. 5.32(b). This holds for an arbitrary value of θk, so we may write in general that the image formed from a single backprojection obtained at an angle θ is given by

f_θ(x, y) = g(x cos θ + y sin θ, θ)

We form the final image by integrating over all the backprojected images:

f(x, y) = ∫[0..π] f_θ(x, y) dθ
RECONSTRUCTION USING PARALLEL-BEAM FILTERED BACKPROJECTIONS
As we saw in Figs. 5.33, 5.34, and 5.40, obtaining backprojections directly yields unacceptably blurred results. Fortunately, there is a straightforward solution to this problem based simply on filtering the projections before computing the backprojections. From Eq. (4-60), the 2-D inverse Fourier transform of F(u, v) is

f(x, y) = ∫∫ F(u, v) e^(j2π(ux + vy)) du dv

Recall from Eq. (5-107) that G(ω, θ) is the 1-D Fourier transform of g(ρ, θ), which is a single projection obtained at a fixed angle, θ. Equation (5-115) states that the complete, backprojected image f(x, y) is obtained as follows:
1. Compute the 1-D Fourier transform of each projection.
2. Multiply each 1-D Fourier transform by the ramp filter transfer function |ω| which, as explained above, has been multiplied by a suitable (e.g., Hamming) window.
3. Obtain the inverse 1-D Fourier transform of each resulting filtered
transform.
4. Integrate (sum) all the 1-D inverse transforms from Step 3.
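The four steps above can be put together in a compact NumPy sketch of parallel-beam filtered backprojection. The Hamming-style windowed ramp filter, the nearest-neighbor backprojection, and the final scaling are simplifying assumptions; production code would use proper interpolation.

import numpy as np

def filtered_backprojection(sino, angles_deg):
    n_angles, n_det = sino.shape
    # Steps 1-3: FFT each projection, multiply by a windowed ramp |omega|, inverse FFT
    freqs = np.fft.fftfreq(n_det)
    window = 0.54 + 0.46 * np.cos(2.0 * np.pi * freqs)      # Hamming-style taper, 1 at omega = 0
    filt = np.abs(freqs) * window
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * filt, axis=1))

    # Step 4: backproject each filtered projection and sum over all angles
    recon = np.zeros((n_det, n_det))
    center = n_det // 2
    y, x = np.mgrid[:n_det, :n_det] - center
    for row, theta in zip(filtered, np.deg2rad(angles_deg)):
        rho = x * np.cos(theta) + y * np.sin(theta) + center  # detector coordinate of each pixel
        idx = np.clip(np.round(rho).astype(int), 0, n_det - 1)
        recon += row[idx]                                     # f_theta(x, y) = g(x cos(theta) + y sin(theta), theta)
    return recon * np.pi / (2.0 * n_angles)                   # approximate the integral over theta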
RECONSTRUCTION USING FAN-BEAM FILTERED BACKPROJECTIONS
The discussion thus far has centered on parallel beams. Because of its simplicity and intuitiveness, this is the imaging geometry used traditionally to introduce computed tomography. However, more modern CT systems use a fan-beam geometry (see Fig. 5.35), which is the topic of the following discussion.
Figure 5.45 shows a basic fan-beam imaging geometry in which the detectors are arranged on a circular arc and the angular increments of the source are assumed to be equal. Let p(α, β) denote a fan-beam projection, where α is the angular position of a particular detector measured with respect to the center ray, and β is the angular displacement of the source, measured with respect to the y-axis, as shown in the figure.
We also note in Fig. 5.45 that a ray in the fan beam can be represented as a line, L(ρ, θ), in normal form, which is the approach we used to represent a ray in the parallel-beam imaging geometry discussed earlier. This allows us to utilize parallel-beam results as the starting point for deriving the corresponding equations for the fan-beam geometry. We proceed to show this by deriving the fan-beam filtered backprojection based on convolution.
We begin by noticing in Fig. 5.45 that the parameters of line L(ρ, θ) are related to the parameters of a fan-beam ray by

θ = β + α

where we used the fact mentioned earlier that projections 180° apart are mirror images of each other. In this way, the limits of the outer integral in Eq. (5-120) are made to span a full circle, as required by a fan-beam arrangement in which the detectors are arranged in a circle.
We are interested in integrating with respect to α and β. To do this, we change to polar coordinates (r, φ). That is, we let x = r cos φ and y = r sin φ, from which it follows that

x cos θ + y sin θ = r cos φ cos θ + r sin φ sin θ = r cos(θ - φ)
