
Rods work at very low levels of light. We use these for night vision because only a few bits of light (photons) can activate a rod. Rods don't help with color vision, which is why at night we see everything in gray scale. The human eye has over 100 million rod cells.

Cones require a lot more light and they are used to see color. We have three types of cones:
blue, green, and red. The human eye only has about 6 million cones. Many of these are
packed into the fovea, a small pit in the back of the eye that helps with the sharpness or detail
of images.

The need for transforms arises because most signals and images are time-domain signals, i.e., they are measured as a function of time (or, for images, of spatial position). This representation is not always the best one for processing.

Hue is what most people perceive as color. It is measured in degrees around the color wheel and runs from red through yellow, lime, aqua, blue, and magenta, back to red again. Saturation, on the other hand, measures how pure the hue is: at 0% saturation the color collapses to a shade of gray, while 100% saturation gives the pure hue.
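As a concrete check on these definitions, the sketch below converts RGB triples to hue (in degrees) and saturation (in percent) using Python's standard colorsys module; the helper name is our own.

```python
import colorsys

def rgb_to_hsv_degrees(r, g, b):
    """Convert 8-bit RGB to (hue in degrees, saturation %, value %)."""
    # colorsys expects RGB in [0, 1] and returns hue as a fraction
    # of the color wheel, so we rescale to degrees and percent.
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0

print(rgb_to_hsv_degrees(255, 0, 0))      # pure red: hue 0, saturation 100%
print(rgb_to_hsv_degrees(128, 128, 128))  # a gray: saturation 0%, not black
```

Note that the fully desaturated color is a gray, not black; black is reached by driving the value (brightness) component to zero instead.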

Zooming a digital image means increasing the number of display pixels per image pixel, changing the image only in appearance. Digital image shrinking is done in the same manner, in reverse. In this paper we use interpolation methods, which have the advantages discussed above. KEYWORDS:
Shrinking, Interpolation, Bilinear, PDA
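To make the interpolation idea concrete, here is a minimal bilinear zoom sketch, assuming NumPy and a 2-D grayscale image; with a factor below 1 the same routine shrinks the image.

```python
import numpy as np

def bilinear_zoom(img, factor):
    """Resize a 2-D grayscale image by `factor` using bilinear interpolation."""
    h, w = img.shape
    new_h, new_w = int(h * factor), int(w * factor)
    out = np.zeros((new_h, new_w), dtype=float)
    for y in range(new_h):
        for x in range(new_w):
            # Map the output pixel back to fractional source coordinates.
            sy = min(y / factor, h - 1)
            sx = min(x / factor, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = sy - y0, sx - x0
            # Weighted average of the four surrounding source pixels.
            out[y, x] = (img[y0, x0] * (1 - dy) * (1 - dx)
                         + img[y0, x1] * (1 - dy) * dx
                         + img[y1, x0] * dy * (1 - dx)
                         + img[y1, x1] * dy * dx)
    return out

tile = np.array([[0.0, 100.0], [100.0, 200.0]])
print(bilinear_zoom(tile, 2))  # 4x4 result with interpolated in-between values
```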

D4(p, q) = |x - s| + |y - t| = |0 - 5| + |0 - 5| = 5 + 5 = 10 units. D8(p, q) = max(|x - s|, |y - t|) = max(|0 - 5|, |0 - 5|) = 5 units. The Dm distance between two pixels depends on the values of the pixels along the path and also on the values of their neighbours.
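The D4 (city-block) and D8 (chessboard) computations above can be written directly:

```python
def d4(p, q):
    # City-block (D4) distance: |x - s| + |y - t|
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):
    # Chessboard (D8) distance: max(|x - s|, |y - t|)
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

print(d4((0, 0), (5, 5)))  # 10 units
print(d8((0, 0), (5, 5)))  # 5 units
```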

Image derivatives can be computed by using small convolution filters of size 2 × 2 or 3 × 3, such as the Laplacian, Sobel, Roberts and Prewitt operators. However, a larger mask will generally give a better approximation of the derivative; examples of such filters are Gaussian derivatives and Gabor filters.
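As an illustration of a small derivative mask, the sketch below applies the 3 × 3 Sobel kernels with a hand-rolled valid-mode convolution (NumPy assumed); on a vertical step edge the horizontal derivative responds and the vertical one does not.

```python
import numpy as np

# 3x3 Sobel kernels approximating the horizontal and vertical derivatives.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Valid-mode 2-D correlation (no padding), enough to show the idea."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical step edge: SOBEL_X fires on it, SOBEL_Y stays at zero.
edge = np.array([[0, 0, 10, 10]] * 4, dtype=float)
print(convolve2d(edge, SOBEL_X))
print(convolve2d(edge, SOBEL_Y))
```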

The aim of image enhancement is to improve the interpretability or perception of information in images for human viewers, or to provide 'better' input for other automated image processing techniques.

Unsharp masking, an old technique known to photographers, is used to change the relative
highpass content in an image by subtracting a blurred (lowpass filtered) version of the image
[5].
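A minimal sketch of unsharp masking, using a 3 × 3 box blur as the lowpass stand-in (an illustrative choice; Gaussian blurs are also common). Subtracting the blur isolates the highpass detail, which is then added back, producing the characteristic overshoot at edges.

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the difference between the image and a blur.

    A 3x3 box blur stands in for the lowpass filter; `amount` scales the
    highpass detail added back (both are illustrative choices).
    """
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    blurred = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            blurred[y, x] = padded[y:y + 3, x:x + 3].mean()
    detail = img - blurred           # the highpass (unsharp mask) component
    return img + amount * detail     # original plus boosted detail

ramp = np.array([[0.0, 0.0, 100.0, 100.0]] * 3)
print(unsharp_mask(ramp))  # values over- and undershoot around the edge
```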

Since the smoothing spatial filter takes the average of the pixels in its neighbourhood, it is also called an averaging filter.

A highpass filter emphasizes the high frequencies in the image. The difference between the Butterworth and Gaussian filters is that the former has a sharper transition than the latter, so the images produced by the BHPF are sharper than those from the GHPF. When analysing the FFT of a CT or MRI image, one sharp spike is concentrated in the middle.
Procedure involved in Histogram Matching

The procedure involved in histogram matching is as follows:

1. Compute the histograms of the two images. The histogram of an image is a plot of the number of pixels at each gray level.

2. Find the cumulative distribution functions (CDFs) of the two histograms. The CDF of a histogram is the cumulative sum of the values in the histogram.

3. Map the CDF of the input image to the CDF of the reference image. This can be done by finding the corresponding gray level in the reference image for each gray level in the input image.

4. Apply the mapping to the input image. This will transform the input image so that its histogram matches the histogram of the reference image.
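The matching procedure above can be sketched in Python, assuming NumPy and 8-bit grayscale images; the function name is our own.

```python
import numpy as np

def match_histogram(source, reference, levels=256):
    """Map source gray levels so its histogram approximates the reference's."""
    src_hist = np.bincount(source.ravel(), minlength=levels)
    ref_hist = np.bincount(reference.ravel(), minlength=levels)
    # Normalized cumulative distribution functions of both histograms.
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source level, find the reference level whose CDF value
    # first reaches the source's CDF value.
    mapping = np.searchsorted(ref_cdf, src_cdf)
    return mapping[source].astype(np.uint8)

dark = np.array([[10, 20], [20, 30]], dtype=np.uint8)      # left-skewed levels
bright = np.array([[100, 150], [200, 250]], dtype=np.uint8)
print(match_histogram(dark, bright))  # dark levels pushed toward bright ones
```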

Here is an example of histogram matching. Let's say we have two images, one of a cat and
one of a dog. The cat image has a histogram that is skewed to the left, meaning that there are
more dark pixels than light pixels. The dog image has a histogram that is more evenly
distributed. We can use histogram matching to make the cat image look more like the dog
image by matching the histograms of the two images.

To do this, we would first compute the histograms of the two images. The histogram of the
cat image would be skewed to the left, while the histogram of the dog image would be more
evenly distributed. We would then find the CDFs of the two histograms. The CDF of the cat
image would be a curve that is lower than the CDF of the dog image.

Next, we would map the CDF of the cat image to the CDF of the dog image. This would be
done by finding the corresponding gray level in the dog image for each gray level in the cat
image. For example, if the gray level 50 in the cat image corresponds to the gray level 100 in
the dog image, then all pixels in the cat image with a gray level of 50 would be mapped to a
gray level of 100 in the output image.

Finally, we would apply the mapping to the cat image. This would transform the cat image so that its histogram matches the histogram of the dog image. The result is a cat image whose brightness distribution resembles that of the dog image; the content of the picture itself is unchanged.

Histogram matching is a powerful technique that can be used to improve the appearance of
images. It can be used to make images look more uniform, to correct for uneven illumination,
and to align images that have been taken under different conditions.
In the field of Image Processing, the Butterworth Lowpass Filter (BLPF) is used for image smoothing in the frequency domain. It removes high-frequency noise from a digital image and preserves low-frequency components. The transfer function of a BLPF of order n is defined as:

H(u, v) = 1 / (1 + [D(u, v) / D0]^(2n))

Where:
D0 is a positive constant. The BLPF passes all frequencies less than the D0 value without attenuation and cuts off all frequencies greater than it. This D0 is the transition point between H(u, v) = 1 and H(u, v) = 0, so it is termed the cutoff frequency. But instead of making a sharp cut-off (like the Ideal Lowpass Filter (ILPF)), it introduces a smooth transition from 1 to 0 to reduce ringing artifacts.
D(u, v) is the Euclidean distance from any point (u, v) to the origin of the frequency plane, i.e., D(u, v) = sqrt(u^2 + v^2).


Approach:
Step 1: Input – Read an image
Step 2: Save the size of the input image in pixels
Step 3: Get the Fourier Transform of the input_image
Step 4: Assign the order n and cut-off frequency D0
Step 5: Design the filter: Butterworth Low Pass Filter
Step 6: Multiply the Fourier-transformed input image element-wise by the filtering mask (equivalent to convolution in the spatial domain)
Step 7: Take the Inverse Fourier Transform of the filtered image
Step 8: Display the resultant image as output
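These steps can be sketched in Python, assuming NumPy and a centered spectrum (fftshift); the function names are our own.

```python
import numpy as np

def butterworth_lowpass(shape, d0, n):
    """Transfer function H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n)),
    with the zero frequency shifted to the center of the spectrum."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    # D(u, v): Euclidean distance of each frequency from the center.
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return 1.0 / (1.0 + (D / d0) ** (2 * n))

def apply_filter(img, H):
    """Steps 3 and 6-7: FFT, multiply by the mask, inverse FFT."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

img = np.random.default_rng(0).random((64, 64))
smooth = apply_filter(img, butterworth_lowpass(img.shape, d0=10, n=2))
print(img.var(), smooth.var())  # lowpass filtering reduces the variance
```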
In the field of Image Processing, the Butterworth Highpass Filter (BHPF) is used for image sharpening in the frequency domain. Image sharpening is a technique to enhance the fine details and highlight the edges in a digital image. It removes low-frequency components from an image and preserves high-frequency components.

The Butterworth highpass filter is the reverse operation of the Butterworth lowpass filter. It can be determined using the relation H_hp(u, v) = 1 - H_lp(u, v), where H_hp(u, v) is the transfer function of the highpass filter and H_lp(u, v) is the transfer function of the corresponding lowpass filter. The transfer function of a BHPF of order n is defined as:

H(u, v) = 1 / (1 + [D0 / D(u, v)]^(2n))

Where:
D0 is a positive constant. The BHPF passes all frequencies greater than the D0 value without attenuation and cuts off all frequencies less than it. This D0 is the transition point between H(u, v) = 0 and H(u, v) = 1, so it is termed the cutoff frequency. But instead of making a sharp cut-off (like the Ideal Highpass Filter (IHPF)), it introduces a smooth transition from 0 to 1 to reduce ringing artifacts.
D(u, v) is the Euclidean distance from any point (u, v) to the origin of the frequency plane, i.e., D(u, v) = sqrt(u^2 + v^2).


Approach:
Step 1: Input – Read an image
Step 2: Save the size of the input image in pixels
Step 3: Get the Fourier Transform of the input_image
Step 4: Assign the order n and cut-off frequency D0
Step 5: Design the filter: Butterworth High Pass Filter
Step 6: Multiply the Fourier-transformed input image element-wise by the filtering mask (equivalent to convolution in the spatial domain)
Step 7: Take the Inverse Fourier Transform of the filtered image
Step 8: Display the resultant image as output
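Given a lowpass transfer function, the highpass mask follows directly from the relation H_hp(u, v) = 1 - H_lp(u, v); a minimal NumPy sketch, written this way to avoid dividing by zero at the center of the spectrum:

```python
import numpy as np

def butterworth_highpass(shape, d0, n):
    """BHPF transfer function via H_hp(u, v) = 1 - H_lp(u, v)."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    # D(u, v): distance of each frequency from the center of the spectrum.
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H_lp = 1.0 / (1.0 + (D / d0) ** (2 * n))
    return 1.0 - H_lp

H = butterworth_highpass((64, 64), d0=10, n=2)
print(H[32, 32])  # center (D = 0) is fully attenuated: 0.0
print(H[0, 0])    # far from the center, H approaches 1
```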

A Gaussian filter is a commonly used image processing technique for smoothing or blurring images. It derives its name from the Gaussian distribution because it applies a Gaussian function to each pixel in the image. This filter is a linear filter and is used to reduce noise and detail in an image, making it useful for various image processing tasks, such as edge detection, image enhancement, and feature extraction.

Here's how a Gaussian filter works:

1. Kernel Generation: The first step in applying a Gaussian filter is to generate a Gaussian kernel, which is a 2D matrix representing a 2D Gaussian function. The kernel size and the standard deviation (σ) of the Gaussian distribution are the two important parameters. The size determines the extent of the smoothing effect, and σ controls the spread of the Gaussian curve. Larger σ values result in more extensive smoothing.
2. Convolution: Once the Gaussian kernel is generated, it is convolved (applied) to the
input image. Convolution involves sliding the kernel over the image and calculating
the weighted sum of pixel values under the kernel at each position. The weights are
determined by the Gaussian function. The central pixel in the kernel contributes the
most to the result, and the weights decrease as you move away from the center.
The convolution operation at each pixel (x, y) can be expressed as:

I_smoothed(x, y) = Σ_{i=-k}^{k} Σ_{j=-k}^{k} G(i, j) · I(x + i, y + j)

Where:
I_smoothed(x, y) is the smoothed pixel value at position (x, y) in the output image.
G(i, j) is the value in the Gaussian kernel at position (i, j).
I(x + i, y + j) is the pixel value at position (x + i, y + j) in the input image.
The summation is performed over the kernel size, which is 2k + 1 in both dimensions.
3. Normalization: After the convolution, it is common to normalize the result by
dividing each pixel value by the sum of the values in the Gaussian kernel. This step
ensures that the brightness of the image is preserved, and the filter does not
introduce any significant brightness changes.

The result of applying a Gaussian filter to an image is a smoothed version of the original image, in which high-frequency noise and small details are reduced or eliminated while larger structures and edges are preserved. The choice of kernel size and σ depends on the specific application and the amount of smoothing desired: larger σ values and larger kernel sizes result in stronger smoothing, while smaller σ values and smaller kernel sizes produce milder smoothing effects.
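The three steps can be sketched as follows (NumPy assumed; here the kernel is normalized up front, which is equivalent to normalizing after the convolution):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Step 1: build a (size x size) Gaussian kernel, normalized to sum to 1."""
    k = size // 2
    ax = np.arange(-k, k + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()   # Step 3: normalization preserves brightness

def gaussian_smooth(img, size=5, sigma=1.0):
    """Step 2: slide the kernel over the image, taking weighted sums."""
    kernel = gaussian_kernel(size, sigma)
    k = size // 2
    padded = np.pad(img, k, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + size, x:x + size] * kernel)
    return out

noisy = np.random.default_rng(1).random((32, 32))
smooth = gaussian_smooth(noisy, size=5, sigma=1.0)
print(noisy.var(), smooth.var())  # smoothing random noise reduces its variance
```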
