DIP Manual
1. Write a program to read a digital image. Split and display the image into 4 quadrants: up, down, right, and left.
To write a program to read a digital image, split it into four quadrants and display
them, we will need to use the Python Imaging Library (PIL). Here is a sample code
that demonstrates how to accomplish this:
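A minimal sketch of such a program is given below; the image path is a placeholder and the variable names are illustrative.
from PIL import Image
# Load the image (placeholder path - replace with an actual file)
image = Image.open(r'C:\Users\AIML\Documents\input.png')
width, height = image.size
# Crop the four quadrants around the centre point of the image
top_left = image.crop((0, 0, width // 2, height // 2))
top_right = image.crop((width // 2, 0, width, height // 2))
bottom_left = image.crop((0, height // 2, width // 2, height))
bottom_right = image.crop((width // 2, height // 2, width, height))
# Display each quadrant in a separate window
top_left.show()
top_right.show()
bottom_left.show()
bottom_right.show()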
In the above code, we first import the Image module from PIL and then load the image
from a specified path. Next, we get the width and height of the image using the size
attribute.
We then split the image into four quadrants using the crop method of the Image object.
The crop method takes a tuple of four values representing the left, top, right, and bottom
coordinates of the region to be cropped. We use integer division (//) to get the center point
of the image.
Finally, we display the four quadrants using the show method of the Image object. This
will open the four quadrants in separate windows.
Input Image:
Output:
2. Write a program to show rotation, scaling, and translation of an image.
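To write this program we again use PIL. A minimal sketch is given below; the image path is a placeholder, and the rotation angle, scale factor, and translation offsets follow the description that comes after it.
from PIL import Image
# Load the image (placeholder path - replace with an actual file)
image = Image.open(r'C:\Users\AIML\Documents\input.png')
width, height = image.size
# Rotation: 45 degrees counter-clockwise
rotated_image = image.rotate(45)
# Scaling: double both dimensions
scaled_image = image.resize((width * 2, height * 2))
# Translation by (100, 50) pixels using an affine transform.
# The 6-tuple maps each output pixel back to the input image, so the
# offsets are negated to shift the content right and down.
translated_image = image.transform(image.size, Image.AFFINE, (1, 0, -100, 0, 1, -50))
# Display the original and transformed images in separate windows
image.show()
rotated_image.show()
scaled_image.show()
translated_image.show()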
In the above code, we first import the Image module from PIL and then load the image
from a specified path.
Next, we rotate the image by 45 degrees using the rotate method of the Image object. We
store the result in the rotated_image variable.
After that, we scale the image by a factor of 2 using the resize method of the Image object.
We multiply the original dimensions of the image by 2 and pass them as a tuple to the
resize method. We store the result in the scaled_image variable.
Next, we translate the image by (100, 50) pixels using the transform method of the Image object. We pass three arguments to the transform method: the size of the output image, the transformation type (an affine transform), and a 6-tuple describing the affine matrix. The translation offsets are the third and sixth values of this tuple, negated because the matrix maps output pixels back to input pixels. We store the result in the translated_image variable.
Finally, we display the original, rotated, scaled, and translated images using the show
method of the Image object. This will open each image in a separate window.
Input Image:
Output:
3. Read an image, first apply erosion to the image and then subtract the result from the original. Demonstrate the difference in the edge image if you use dilation instead of erosion.
import cv2
import numpy as np
# Load the image in grayscale (placeholder path - replace with an actual file)
img = cv2.imread(r'C:\Users\AIML\Documents\input.png', cv2.IMREAD_GRAYSCALE)
# 5x5 structuring element of ones (size chosen as an example)
kernel = np.ones((5, 5), np.uint8)
img_erosion = cv2.erode(img, kernel, iterations=1)
img_dilation = cv2.dilate(img, kernel, iterations=1)
cv2.imshow('Input', img)
cv2.imshow('Erosion', img_erosion)
cv2.imshow('Dilation', img_dilation)
cv2.waitKey(0)
In the above code, we first import the necessary modules: OpenCV and NumPy. We
then load the image in grayscale using the imread function of OpenCV.
Next, we define the kernel size for erosion and dilation. The kernel size determines the
extent of the erosion or dilation effect on the image.
We then apply erosion and dilation to the image using the erode and dilate functions of OpenCV, respectively. We pass the same square kernel of ones (the structuring element) to both functions so that the two results are directly comparable.
After that, we take the absolute difference between the original and the eroded image using the absdiff function of OpenCV. Because erosion strips away boundary pixels, the resulting edges lie just inside the objects.
We then take the absolute difference between the original and the dilated image in the same way. Because dilation adds pixels around the boundary, the resulting edges lie just outside the objects.
We display the original image and the two edge images using the imshow function of OpenCV. This will open each image in a separate window.
In summary, erosion shrinks bright regions while dilation expands them. Subtracting the eroded image from the original highlights the inner boundary of each object, while taking the difference with the dilated image highlights the outer boundary.
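A short sketch of this subtraction step is given below, reusing img, img_erosion, and img_dilation from the listing above; the variable and window names are illustrative.
# Edge images: absolute difference between the original and each result
edge_from_erosion = cv2.absdiff(img, img_erosion)
edge_from_dilation = cv2.absdiff(img_dilation, img)
cv2.imshow('Edge (erosion)', edge_from_erosion)
cv2.imshow('Edge (dilation)', edge_from_dilation)
cv2.waitKey(0)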
Input Image:
Output:
4. Read an image and extract and display low-level features such as edges and textures using filtering techniques.
To extract low-level features such as edges and textures from an image using
filtering techniques, we can use Python and the OpenCV library. Here's a sample
code that shows how to apply various filters to an image to extract edges and
textures:
import cv2
import numpy as np
image_path = r'C:\Users\AIML\Documents\moon.png'
image = cv2.imread(image_path)
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Gaussian blur smooths the image and suppresses noise (kernel size is an example value)
blurred_image = cv2.GaussianBlur(gray_image, (5, 5), 0)
# Laplacian and Sobel filters respond to intensity changes (edges and texture);
# convertScaleAbs maps the results back to 8-bit for display
laplacian_image = cv2.convertScaleAbs(cv2.Laplacian(blurred_image, cv2.CV_64F))
sobel_x_image = cv2.convertScaleAbs(cv2.Sobel(blurred_image, cv2.CV_64F, 1, 0, ksize=5))
sobel_y_image = cv2.convertScaleAbs(cv2.Sobel(blurred_image, cv2.CV_64F, 0, 1, ksize=5))
# Canny edge detector with example threshold values
canny_image = cv2.Canny(blurred_image, 100, 200)
# Display the original, blurred, Laplacian, Sobel_x, Sobel_y, and Canny images
cv2.imshow('image', image)
cv2.imshow('blurred_image', blurred_image)
cv2.imshow('laplacian_image', laplacian_image)
cv2.imshow('sobel_x_image', sobel_x_image)
cv2.imshow('sobel_y_image', sobel_y_image)
cv2.imshow('canny_image', canny_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Input Image:
Output: