Lecture-03 (Unit-02)

Color Image Processing:

Color Image Processing refers to the techniques used to manipulate and
analyze color images. It involves handling the information in an image where
colors are represented by different color models, such as RGB (Red, Green,
Blue) or CMYK (Cyan, Magenta, Yellow, Key/Black). The goal is to enhance,
analyze, or extract useful information from the image based on its color
components.

Different models:
RGB:
• R: Red, G: Green, B: Blue
• Additive primary colors
• Used by screens and digital displays (e.g., TVs, monitors, phones)

CMYK:
• C: Cyan, M: Magenta, Y: Yellow, K: Black (Key)
• Subtractive model built on the secondary colors
• Used in printing (e.g., paper, physical media)

Example: Suppose you have a photograph of a flower. Using color image
processing, you could:

1. Enhancement: Adjust the brightness, contrast, or saturation of specific
color channels (e.g., making the reds more vibrant).
2. Segmentation: Segment the image based on color to identify only the
petals of the flower, differentiating them from the background.

For instance, in an RGB image, you could separate out the red channel to focus
only on the red elements in the photo, isolating the flower's petals from other
colors.
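
A minimal sketch of this idea in Python with OpenCV; the file name
flower.jpg is a placeholder, not from the notes:

    import cv2

    img = cv2.imread("flower.jpg")      # hypothetical file; OpenCV loads images as BGR
    blue, green, red = cv2.split(img)   # separate the three color channels

    # Keep only the red channel: bright pixels here mark the red petals
    cv2.imwrite("red_channel.png", red)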

Image Restoration:
Image Restoration is a process in image processing where a corrupted or noisy
image is improved to recover its original quality. The goal is to reduce noise,
blur, or other distortions introduced during the capture or transmission of the
image.

Example:
Imagine a photograph taken in low light that appears grainy due to sensor
noise. Using image restoration techniques like Wiener filtering or Inverse
filtering, the noise can be reduced, restoring the image to a clearer, more
detailed version.

For example, if an image is blurred due to camera shake, you can apply
deblurring algorithms to recover sharper details.

Steps in Image Restoration:

1. Identify the type of distortion (e.g., blur, noise).
2. Apply restoration filters like Gaussian smoothing or inverse filtering
(a sketch follows this list).
3. Result: A cleaner, clearer version of the original image is obtained.
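
A hedged sketch of the noise-reduction step using SciPy's Wiener filter;
the synthetic gradient image below stands in for a grainy low-light photo:

    import numpy as np
    from scipy.signal import wiener

    # Synthetic stand-in for a grainy photo: smooth gradient + sensor-like noise
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
    noisy = clean + rng.normal(scale=0.1, size=clean.shape)

    # Wiener filtering estimates local mean/variance and suppresses the noise
    restored = wiener(noisy, mysize=5)
    print("noise std before:", float((noisy - clean).std()))
    print("noise std after: ", float((restored - clean).std()))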

Image restoration is commonly used in medical imaging, satellite imaging,
and old photo recovery.

Stereo Vision:
Stereo Vision is a technique in computer vision that allows machines to
perceive depth by using two images captured from slightly different
viewpoints, similar to how human eyes work. The process involves matching
corresponding points in the two images and calculating the depth based on the
difference in their positions (disparity).

Example:
Imagine two cameras positioned side by side capturing an image of a scene.
Each camera sees the scene from a slightly different angle. By comparing
the two images, you can compute the depth of objects in the scene. Closer
objects will have more significant differences (disparity) between the two
images, while distant objects will have smaller differences.
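
A small numeric sketch of the standard disparity-to-depth relation
Z = f · B / d; the focal length, baseline, and disparities below are
assumed values for illustration:

    import numpy as np

    focal_length_px = 700.0   # assumed focal length, in pixels
    baseline_m = 0.12         # assumed distance between the two cameras, in metres

    # Assumed disparities (pixel offsets) of three matched points
    disparities_px = np.array([42.0, 21.0, 7.0])

    # Pinhole stereo relation: depth Z = f * B / d
    depths_m = focal_length_px * baseline_m / disparities_px
    print(depths_m)   # [ 2.  4. 12.] -> larger disparity means a closer object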

Application:
Stereo vision is often used in autonomous vehicles to measure the distance
to obstacles, enabling the vehicle to navigate safely by creating a 3D map
of its surroundings.

Motion Analysis:
Motion Analysis refers to the process of detecting, tracking, and interpreting
the movement of objects in a sequence of images or video frames. It plays a
key role in computer vision applications like video surveillance, robotics,
autonomous driving, and animation. The goal is to understand the dynamics
of motion, such as speed, direction, and the trajectory of moving objects.

Example: Tracking a Moving Car

Imagine a security camera is recording traffic on a street. Motion analysis
techniques can:

1. Detect the car in the video by identifying pixels that change between
frames.
2. Track the car’s movement by following its position across multiple
frames.
3. Analyze the car’s speed and direction based on how fast and in
which direction it moves from one frame to the next.

This information could be used for various purposes, such as detecting
speeding vehicles or understanding traffic patterns.
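
A minimal Python/OpenCV sketch of steps 1 and 2 using frame differencing;
the file name traffic.mp4 and the thresholds are assumptions, not from the
notes:

    import cv2

    cap = cv2.VideoCapture("traffic.mp4")            # hypothetical input video
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Step 1: pixels that changed between consecutive frames indicate motion
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        # Step 2: follow the moving region's position via its bounding box
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:             # ignore small noise blobs
                x, y, w, h = cv2.boundingRect(c)
                print("moving object at", (x, y), "size", (w, h))
        prev_gray = gray
    cap.release()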

Common Techniques in Motion Analysis:

• Optical Flow: Measures the motion of objects based on pixel changes
between consecutive frames.
• Background Subtraction: Separates moving objects from the static
background.
• Kalman Filtering: A mathematical method used for predicting the
future position of a moving object, often used in tracking.
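
As a sketch of the first technique, dense optical flow between two
consecutive frames can be computed with OpenCV's Farneback method; the
frame file names are placeholders:

    import cv2

    prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
    curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    # One (dx, dy) motion vector per pixel; positional args are
    # pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean motion magnitude (pixels):", magnitude.mean())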

Motion analysis is widely applied in fields like video editing, sports
analysis, and autonomous vehicle navigation.

Local and Global Image Features:

Local features refer to distinct patterns or points of interest within a small
region of the image. They capture localized information and are typically
robust to changes in viewpoint, lighting, and partial occlusions.

Global features capture information about the entire image rather than
localized sections. They represent the overall characteristics of the image,
such as color distribution, texture, and shape.
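
One way to see the contrast in code, assuming OpenCV: ORB keypoints as
local features versus a color histogram as a global feature (flower.jpg is
a placeholder):

    import cv2

    img = cv2.imread("flower.jpg")               # hypothetical image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Local: ORB keypoints describe small, distinctive patches of the image
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    print("local keypoints found:", len(keypoints))

    # Global: a histogram summarises the color distribution of the whole image
    hist = cv2.calcHist([img], [0], None, [256], [0, 256])   # blue channel
    print("global histogram bins:", hist.shape[0])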

Wavelet Transformation:

Wavelet transformation is a technique used to analyze signals or data by
breaking them down into smaller pieces, called wavelets. Unlike traditional
methods that look at data as a whole, wavelets allow us to examine both the
details and the overall shape of the data at different scales. Examples
include the Haar wavelet, the Continuous Wavelet Transform (CWT), and the
Discrete Wavelet Transform (DWT).

Let’s go through the wavelet transformation step by step using a simple linear
function as our example. We will use a linear function for clarity and show
how wavelet transformation can be applied to it.

Step 1: Define the Linear Function

Let’s consider a simple linear function:

f(x)=2x+1

This is a straight line with a slope of 2 and a y-intercept of 1. We can
visualize it over a specific range, say x = 0 to x = 5.

Step 2: Generate the Data

First, we will create data points for this function:

• Data Points:
o f(0)=1
o f(1)=3
o f(2)=5
o f(3)=7
o f(4)=9
o f(5)=11

The data points are [1,3,5,7,9,11]

Step 3: Apply Wavelet Transformation

1. Haar Wavelet:
o Let’s use the Haar wavelet. It is the simplest wavelet,
consisting of two parts: one for averaging and one for
detail.
2. Discrete Wavelet Transformation (DWT):
o We will apply the DWT, which breaks down our data into
approximation and detail coefficients.

Example of the DWT Steps

Let’s walk through the steps of DWT on our linear data points:

1. Initial Data:
o Original Signal: [1,3,5,7,9,11]

2. First Level of Transformation:

• Approximation Coefficients (cA): Average pairs of data points.
This acts as a low-pass filter and represents the smooth,
low-frequency information of the signal.

• Detail Coefficients (cD): Take the difference between pairs of
data points. This acts as a high-pass filter and represents the
fine details, or high-frequency information, of the signal.

Calculations:

• Approximation:
o cA[0] = (1 + 3) / 2 = 2
o cA[1] = (5 + 7) / 2 = 6
o cA[2] = (9 + 11) / 2 = 10

• Difference:
o cD[0] = 3 - 1 = 2
o cD[1] = 7 - 5 = 2
o cD[2] = 11 - 9 = 2

• Resulting Coefficients:
o Approximation: cA = [2, 6, 10]
o Details: cD = [2, 2, 2]

3. Second Level of Transformation (on cA):

Now, we apply the same steps to the approximation coefficients
[2, 6, 10]. Since the length is odd, only the first pair is averaged
and the unpaired element 10 is carried over into the approximation.

Calculations:

• Average:
o cA[0] = (2 + 6) / 2 = 4
o cA[1] = 10 (carried over)
• Difference:
o cD[0] = 6 - 2 = 4
• New Coefficients:
o Approximation: cA = [4, 10]
o Details: cD = [4]

4. Summary of Coefficients

After the wavelet transformation, we have:

• Final Approximation Coefficients: [4, 10]
• Detail Coefficients: [2, 2, 2, 4]
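
The two levels above can be reproduced with a short Python sketch. Note
that it follows the notes' average/difference convention; library
implementations such as PyWavelets normalise the Haar coefficients by
sqrt(2), so their numbers would differ:

    def haar_step(signal):
        # One level of the unnormalised Haar DWT used in these notes:
        # pair averages -> approximation (cA), pair differences -> detail (cD).
        pairs = [(signal[i], signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
        cA = [(a + b) / 2 for a, b in pairs]
        cD = [b - a for a, b in pairs]
        if len(signal) % 2:                  # odd length: carry the last sample over
            cA.append(signal[-1])
        return cA, cD

    data = [2 * x + 1 for x in range(6)]     # f(x) = 2x + 1 -> [1, 3, 5, 7, 9, 11]
    cA1, cD1 = haar_step(data)               # level 1: cA = [2, 6, 10], cD = [2, 2, 2]
    cA2, cD2 = haar_step(cA1)                # level 2: cA = [4, 10],    cD = [4]
    print(cA2, cD1 + cD2)                    # approximations [4, 10], details [2, 2, 2, 4]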

In practice, the stopping point for the decomposition process is often
determined by how much resolution you want to capture in the signal. Each
level of decomposition captures finer and finer details of the signal.

Akash Chauhan, Assistant Professor, Graphic Era Hill University, Dehradun