CGV Lab Manual 23-24
Semester - 6th
Computer Science and Engineering Department, Hirasugar Institute of Technology
Overview
Objectives
Description
Prerequisites
C Programming Language
Python
Base Course
Computer Graphics
Fundamentals of Image Processing
Introduction
Computer Graphics:
Definition: Computer graphics is concerned with all aspects of producing, storing, and rendering pictures or images using a computer.
Types of Computer Graphics:
1. Interactive Computer Graphics / Active Computer Graphics
Interactive computer graphics involves two-way communication between the computer and the user. The observer is given some control over the image through an input device, for example the controller of a ping-pong video game, which lets the user signal requests to the computer. On receiving signals from the input device, the computer can modify the displayed picture appropriately. To the user it appears that the picture changes instantaneously in response to each command. The user can give a series of commands, each one generating a graphical response from the computer, and in this way maintains a conversation, or dialogue, with the computer.
Interactive computer graphics affects our lives in a number of indirect ways. For example, it helps to train airline pilots: a flight simulator allows pilots to be trained not in a real aircraft but on the ground, at the controls of the simulator. The flight simulator is a mock-up of an aircraft flight deck, containing all the usual controls and surrounded by screens onto which computer-generated views of the terrain visible on takeoff and landing are projected. Flight simulators have many advantages over real aircraft for training purposes, including fuel savings, safety, and the ability to familiarize the trainee with a large number of the world's airports.
2. Non-interactive Computer Graphics / Passive Computer Graphics
Non-interactive computer graphics, otherwise known as passive computer graphics, is computer graphics in which the user does not have any control over the image. The image is merely the product of a static stored program and is generated according to the instructions given in the program, executed linearly. The image is totally under the control of the program's instructions, not the user. Example: screen savers.
A Graphics System
A high-level computer graphics system consists of five major elements:
1. Input devices
2. Processor
3. Memory
4. Frame buffer
5. Output devices
Resolution: the number of pixels in the frame buffer determines the detail that one can see in the image.
The depth (precision) of the frame buffer, defined as the number of bits used for each pixel, determines properties such as how many colors can be represented on a given system. For example, a 1-bit frame buffer allows only 2 colors (black and white), an 8-bit-deep frame buffer allows 2^8 = 256 colors, and so on.
In a full-color system there are 24 bits per pixel, giving 2^24 = 16,777,216 colors; such a system can display images realistically. These systems are also called true-color or RGB-color systems.
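A quick check of this arithmetic (a minimal Python sketch, purely illustrative):

# Colors representable at a given frame-buffer depth (bits per pixel)
for depth in (1, 8, 24):
    print(depth, "bits per pixel:", 2 ** depth, "colors")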
In a very simple system, the frame buffer holds only the colored pixels that are displayed on the screen. In most systems, the frame buffer holds far more information, such as the depth information needed for creating images from three-dimensional data.
Memory
The frame buffer is usually implemented with special types of memory chips that enable fast redisplay of its contents.
The frame buffer is part of the system memory.
Processor
In simple systems, there may be only one processor, the central processing unit (CPU) of the system, which must do both the normal processing and the graphical processing.
The main graphical function of the processor is to take specifications of graphical primitives, such as lines and polygons, generated by the application program and to assign values to the pixels in the frame buffer that best represent these entities. The conversion of geometric entities to pixel colors in the frame buffer is known as rasterization or scan conversion.
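To make this concrete, here is a toy sketch (Python, not from the manual) that scan-converts a line segment by sampling points and rounding them to pixel coordinates; Experiment 1 implements the more efficient Bresenham approach:

# Naive rasterization: sample along the segment, round to pixel coordinates.
# Assumes the endpoints differ; Bresenham (Experiment 1) avoids the floating
# point arithmetic used here.
def rasterize_line(x0, y0, x1, y1):
    steps = max(abs(x1 - x0), abs(y1 - y0))
    return [(round(x0 + (x1 - x0) * t / steps),
             round(y0 + (y1 - y0) * t / steps))
            for t in range(steps + 1)]

print(rasterize_line(0, 0, 4, 2))  # [(0, 0), (1, 0), (2, 1), (3, 2), (4, 2)]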
Today virtually all graphics systems are characterized by special-purpose graphics processing units (GPUs). The GPU can be either on the motherboard of the system or on a graphics card.
Output devices
The dominant type of display is the cathode-ray tube (CRT).
When electrons strike the phosphor coating on the tube light is emitted.
The direction of the beam is controlled by two pairs of deflection plates.
The output of the computer is converted, by digital-to-analog converters, to voltages across the x and y deflection plates. Light appears on the surface of the CRT when a sufficiently intense beam of electrons is directed at the phosphor.
If the voltages steering the beam change at a constant rate, the beam will trace a straight line,
visible to a viewer. Such a device is known as the random-scan, calligraphic, or vector CRT,
because the beam can be moved directly from any position to any other position.
If the intensity of the beam is turned off, the beam can be moved to a new position without changing the visible display. This configuration was the basis of early graphics systems that predated the present raster technology.
A typical CRT will emit light for only a short time—usually, a few milliseconds after the
phosphor is excited by the electron beam. For a human to see a steady, flicker-free image on
most CRT displays, the same path must be retraced, or refreshed, by the beam at a sufficiently
high rate, the refresh rate.
In older systems, the refresh rate is determined by the frequency of the power system, 60
cycles per second or 60 Hertz (Hz) in the United States and 50 Hz in much of the rest of the
world.
In a raster system, the graphics system takes pixels from the frame buffer and displays them as
points on the surface of the display in one of two fundamental ways.
In a non-interlaced system, the pixels are displayed row by row, or scan line by scan line, at
the refresh rate. In an interlaced display, odd rows and even rows are refreshed alternately.
Interlaced displays are used in commercial television.
In an interlaced display operating at 60 Hz, the screen is redrawn in its entirety only 30 times per second, although the visual system is tricked into thinking the refresh rate is 60 Hz rather than 30 Hz.
Color CRTs have three different colored phosphors (red, green, and blue), arranged in small
groups. One common style arranges the phosphors in triangular groups called triads, each triad
consisting of three phosphors, one of each primary.
In the shadow-mask CRT a metal screen with small holes (the shadow mask) ensures that an
electron beam excites only phosphors of the proper color.
In most implementations, one of the include lines
#include <GL/glut.h>
or
#include <GLUT/glut.h>
is sufficient to read in glut.h and gl.h.
Figure: organization of the libraries for an X Window System environment.
Image Processing:
Definition: Image processing is the process of transforming an image into a digital form and performing certain operations on it to extract useful information. An image processing system usually treats all images as 2D signals when applying certain predetermined signal-processing methods.
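As a minimal illustration (Python with OpenCV; the file name is an assumption), a digital image read into a program is simply an array of pixel values on which such operations act:

import cv2

# Load an image as a NumPy array (file name assumed)
image = cv2.imread("image.png")
if image is not None:
    # A color image is a (height, width, 3) array of 8-bit values
    print(image.shape, image.dtype)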
Advantages of Digital Image Processing:
Improved image quality: Digital image processing algorithms can improve the visual quality of images,
making them clearer, sharper, and more informative.
Automated image-based tasks: Digital image processing can automate many image-based tasks, such as
object recognition, pattern detection, and measurement.
Increased efficiency: Digital image processing algorithms can process images much faster than humans,
making it possible to analyze large amounts of data in a short amount of time.
Increased accuracy: Digital image processing algorithms can provide more accurate results than
humans, especially for tasks that require precise measurements or quantitative analysis.
Disadvantages of Digital Image Processing:
High computational cost: Some digital image processing algorithms are computationally intensive and
require significant computational resources.
Limited interpretability: Some digital image processing algorithms may produce results that are
difficult for humans to interpret, especially for complex or sophisticated algorithms.
Dependence on quality of input: The quality of the output of digital image processing algorithms is
highly dependent on the quality of the input images. Poor quality input images can result in poor quality
output.
Limitations of algorithms: Digital image processing algorithms have limitations, such as the difficulty
of recognizing objects in cluttered or poorly lit scenes, or the inability to recognize objects with
significant deformations or occlusions.
Dependence on good training data: The performance of many digital image processing algorithms is
dependent on the quality of the training data used to develop the algorithms. Poor quality training data
can result in poor performance of the algorithm.
Resources Required
Anaconda
OpenCV
Code::Blocks
FreeGLUT
General Instructions
1. Student should be punctual to the Lab
2. Required to prepare the Lab report every week
3. Required to maintain the Lab record properly
4. Should use the resources properly
CONTENTS

Expt. No. and Experiment (Date Planned and Date Conducted columns to be filled in):

1. Develop a program to draw a line using Bresenham's line drawing technique
2. Develop a program to demonstrate basic geometric operations on the 2D object
3. Develop a program to demonstrate basic geometric operations on the 3D object
4. Develop a program to demonstrate 2D transformation on basic objects
5. Develop a program to demonstrate 3D transformation on 3D objects
6. Develop a program to demonstrate Animation effects on simple objects
7. Write a program to read a digital image. Split and display the image into 4 quadrants: up, down, right, and left
8. Write a program to show rotation, scaling, and translation on an image
9. Read an image and extract and display low-level features such as edges and textures using filtering techniques
10. Write a program to blur and smooth an image
11. Write a program to contour an image
12. Write a program to detect face(s) in an image
PART B Practical Based Learning
The student should develop a mini project and demonstrate it in the laboratory examination. Some suggested projects are listed below, though the choice is not limited to these:
Recognition of License Plate through Image Processing
Recognition of Face Emotion in Real-Time
Detection of Drowsy Driver in Real-Time
Recognition of Handwriting by Image Processing
Detection of Kidney Stone
Verification of Signature
Compression of Color Image
Classification of Image Category
Detection of Skin Cancer
Marking System of Attendance using Image Processing
Detection of Liver Tumor
IRIS Segmentation
Detection of Skin Disease and / or Plant Disease
Biometric Sensing System.
Projects that help farmers understand present developments in agriculture.
Projects that help high school/college students understand scientific problems.
Simulation projects that help in understanding innovations in science and technology.
Evaluation Scheme
CIE marks for the practical course are 50.
The split-up of CIE marks for record/journal and test is in the ratio 60:40.
Each experiment is to be evaluated for conduction with an observation sheet and record write-up. Rubrics for the evaluation of the journal/write-up for hardware/software experiments are designed by the faculty member handling the laboratory session and made known to students at the beginning of the practical session.
The record should contain all the experiments specified in the syllabus, and each experiment write-up will be evaluated for 10 marks.
Total marks scored by the student are scaled down to 30 marks (60% of the maximum marks).
Weightage is to be given for neatness and on-time submission of the record/write-up.
The department shall conduct two tests for 100 marks each; the first test shall be conducted after the 8th week of the semester and the second test after the 14th week.
In each test, the test write-up, conduction of the experiment, acceptable result, and procedural knowledge will carry a weightage of 60%, and the remaining 40% is for viva-voce.
Suitable rubrics can be designed to evaluate each student's performance and learning ability; rubrics are suggested in Annexure-II of the Regulation book.
The average of the two tests is scaled down to 20 marks (40% of the maximum marks).
The sum of the scaled-down record/journal marks (30) and the scaled-down test average (20) gives the student's total CIE marks (50).
References
https://nptel.ac.in/courses/106/106/106106090/
https://nptel.ac.in/courses/106/102/106102063/
https://nptel.ac.in/courses/106/103/106103224/
https://nptel.ac.in/courses/106/102/106102065/
https://www.tutorialspoint.com/opencv/
https://medium.com/analytics-vidhya/introduction-to-computer-vision-opencv-in-python-fb722e805e8b
Experiments
1.0 Experiment
TITLE: Develop a program to draw a line using Bresenham’s line drawing technique
#include <GL/glut.h>
#include <stdio.h>
int x1,x2,y1,y2;
void myInit()
{
glClearColor(1.0,1.0,1.0,1.0);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
gluOrtho2D(0,500,0,500);
}
void draw_pixel(int x,int y)
{
glColor3f(1.0,0.0,0.0);
glBegin(GL_POINTS);
glVertex2i(x,y);
glEnd();
}
void draw_line(int x1,int x2,int y1,int y2)
{
int dx,dy,i,e;
int incx,incy,inc1,inc2;
int x,y;
dx=x2-x1;
dy=y2-y1;
if(dx<0)
dx=-dx;
if(dy<0)
dy=-dy;
incx=1;
if(x2<x1)
incx=-1;
incy=1;
if(y2<y1)
incy=-1;
x=x1;
y=y1;
if(dx>dy)
{
draw_pixel(x,y);
e=2*dy-dx;
inc1=2*(dy-dx);
inc2=2*dy;
for(i=0;i<dx;i++)
{
if(e>=0)
{
y+=incy;
e+=inc1;
}
else
e+=inc2;
x+=incx;
draw_pixel(x,y);
}
}
else
{
draw_pixel(x,y);
e=2*dx-dy;
inc1=2*(dx-dy);
inc2=2*dx;
for(i=0;i<dy;i++)
{
if(e>0)
{
x+=incx;
e+=inc1;
}
else
e+=inc2;
y+=incy;
draw_pixel(x,y);
}
}
}
void myDisplay()
{
    draw_line(x1, x2, y1, y2);
    glFlush();
}

/* main() is not shown in the printed listing; reconstructed here along the
   lines of the later experiments: read the endpoints, set up GLUT, draw. */
int main(int argc, char **argv)
{
    printf("Enter (x1, y1, x2, y2): ");
    scanf("%d %d %d %d", &x1, &y1, &x2, &y2);
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutCreateWindow("Bresenham's Line Drawing");
    myInit();
    glutDisplayFunc(myDisplay);
    glutMainLoop();
    return 0;
}

Sample run:
Enter (x1, y1, x2, y2): 100 100 400 400
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Develop a program to demonstrate basic geometric operations on the 2D object
#include <GL/glut.h>
#include <stdlib.h> /* for exit() in the keyboard handler below */
float rectWidth = 100;
float rectHeight = 50;
float translationX = 0;
float translationY = 0;
float rotationAngle = 0;
float scaleFactor = 1;
float x=0, y=0;
void init()
{
glClearColor(1, 1, 1, 1); // White background
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, 500, 0, 500);
}
void drawRectangle()
{
glBegin(GL_POLYGON);
glVertex2f(x, y);
glVertex2f(x + rectWidth, y);
glVertex2f(x + rectWidth, y + rectHeight);
glVertex2f(x, y + rectHeight);
glEnd();
}
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Apply transformations
glTranslatef(translationX, translationY, 0);
glRotatef(rotationAngle, 0, 0, 1);
glScalef(scaleFactor, scaleFactor, 1);
// Draw rectangle
glColor3f(1, 0, 0); // Red color
drawRectangle();
glFlush();
}
/* Keyboard handler: main() registers glutKeyboardFunc(keyboard), but the
   handler is missing from the printed listing. A minimal sketch; the key
   bindings are assumptions: 't' translates, 'r' rotates, 'S'/'s' scale
   up/down, 'q' quits. */
void keyboard(unsigned char key, int kx, int ky)
{
    switch (key)
    {
    case 't': translationX += 10; translationY += 10; break;
    case 'r': rotationAngle += 10; break;
    case 'S': scaleFactor *= 1.1; break;
    case 's': scaleFactor *= 0.9; break;
    case 'q': exit(0);
    }
    glutPostRedisplay();
}

// Main function
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(500, 500);
glutCreateWindow("Geometric Operations in 2D");
init();
glutDisplayFunc(display);
glutKeyboardFunc(keyboard);
glutMainLoop();
}
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Develop a program to demonstrate basic geometric operations on the 3D object
We define a drawCube function that draws a simple cube (here using glutSolidCube). The display function sets up the model-view matrix and applies translation (glTranslatef), rotation (glRotatef), and scaling (glScalef) transformations to the cube before drawing it.
Keyboard input is handled by the keyboard function. Pressing 'x'/'X', 'y'/'Y', and 'z'/'Z' rotates the cube about the X, Y, and Z axes; 'S' and 's' scale the cube up and down; 'd' and 'w' translate it; Esc resets all transformations; and 'q' exits.
The main function initializes GLUT, creates the window, registers the display and keyboard callback functions, and starts the GLUT main loop.
#include <GL/glut.h>
#include <stdlib.h> /* for exit() in the keyboard handler */
float translationX=0;
float translationY=0;
float rotationX=0,rotationY=0,rotationZ=0;
float scaleFactor=1;
void init()
{
glClearColor(1,1,1,1);
glClear(GL_COLOR_BUFFER_BIT);
}
void drawCube()
{
glutSolidCube(0.2);
}
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(translationX,translationY,0);
glRotatef(rotationX,1,0,0);
glRotatef(rotationY,0,1,0);
glRotatef(rotationZ,0,0,1);
glScalef(scaleFactor,scaleFactor,scaleFactor);
glColor3f(1,0,0);
drawCube();
glFlush();
}
void keyboard(unsigned char key,int keyX,int keyY)
{
switch(key)
{
case 'd': /* move right ('d'/'w' mapped to X/Y; variable names corrected */
translationX +=0.2;
break;
case 'w': /* move up */
translationY +=0.2;
break;
case 'X':
rotationX +=10;
break;
case 'x':
rotationX -=10;
break;
case 'Y':
rotationY +=10;
break;
case 'y':
rotationY -=10;
break;
case 'Z':
rotationZ +=10;
break;
case 'z':
rotationZ -=10;
break;
case 'S':
scaleFactor *=1.1;
break;
case 's':
scaleFactor *=0.9;
break;
case 27:
translationX=0;
translationY=0;
rotationX=0;
rotationY=0;
rotationZ=0;
scaleFactor=1;
break;
case 'q':
exit(0);
break;
}
glutPostRedisplay();
}
int main(int argc,char** argv)
{
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutInitWindowSize(500,500);
glutCreateWindow("Geometric Operations in 3D");
init();
glutDisplayFunc(display);
glutKeyboardFunc(keyboard);
glutMainLoop();
}
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Develop a program to demonstrate 2D transformation on basic objects
#include <GL/glut.h>
#include <stdlib.h> /* for exit() in the keyboard handler below */
float rectWidth = 100;
float rectHeight = 50;
float translationX = 0;
float translationY = 0;
float rotationAngle = 0;
float scaleFactor = 1;
float x=0, y=0;
void init()
{
glClearColor(1, 1, 1, 1); // White background
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, 500, 0, 500);
}
void drawRectangle()
{
glBegin(GL_POLYGON);
glVertex2f(x, y);
glVertex2f(x + rectWidth, y);
glVertex2f(x + rectWidth, y + rectHeight);
glVertex2f(x, y + rectHeight);
glEnd();
}
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Apply transformations
glTranslatef(translationX, translationY, 0);
glRotatef(rotationAngle, 0, 0, 1);
glScalef(scaleFactor, scaleFactor, 1);
// Draw rectangle
glColor3f(1, 0, 0); // Red color
drawRectangle();
glFlush();
}
/* Keyboard handler: main() registers glutKeyboardFunc(keyboard), but the
   handler is missing from the printed listing. A minimal sketch; the key
   bindings are assumptions: 't' translates, 'r' rotates, 'S'/'s' scale
   up/down, 'q' quits. */
void keyboard(unsigned char key, int kx, int ky)
{
    switch (key)
    {
    case 't': translationX += 10; translationY += 10; break;
    case 'r': rotationAngle += 10; break;
    case 'S': scaleFactor *= 1.1; break;
    case 's': scaleFactor *= 0.9; break;
    case 'q': exit(0);
    }
    glutPostRedisplay();
}

// Main function
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(500, 500);
glutCreateWindow("Geometric Operations in 2D");
init();
glutDisplayFunc(display);
glutKeyboardFunc(keyboard);
glutMainLoop();
}
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Develop a program to demonstrate 3D transformation on 3D objects
We define a drawCube function that draws a simple cube (here using glutSolidCube). The display function sets up the model-view matrix and applies translation (glTranslatef), rotation (glRotatef), and scaling (glScalef) transformations to the cube before drawing it.
Keyboard input is handled by the keyboard function. Pressing 'x'/'X', 'y'/'Y', and 'z'/'Z' rotates the cube about the X, Y, and Z axes; 'S' and 's' scale the cube up and down; 'd' and 'w' translate it; Esc resets all transformations; and 'q' exits.
The main function initializes GLUT, creates the window, registers the display and keyboard callback functions, and starts the GLUT main loop.
#include <GL/glut.h>
#include <stdlib.h> /* for exit() in the keyboard handler */
float translationX=0;
float translationY=0;
float rotationX=0,rotationY=0,rotationZ=0;
float scaleFactor=1;
void init()
{
glClearColor(1,1,1,1);
glClear(GL_COLOR_BUFFER_BIT);
}
void drawCube()
{
glutSolidCube(0.2);
}
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(translationX,translationY,0);
glRotatef(rotationX,1,0,0);
glRotatef(rotationY,0,1,0);
glRotatef(rotationZ,0,0,1);
glScalef(scaleFactor,scaleFactor,scaleFactor);
glColor3f(1,0,0);
drawCube();
glFlush();
}
void keyboard(unsigned char key,int keyX,int keyY)
{
switch(key)
{
case 'd': /* move right ('d'/'w' mapped to X/Y; variable names corrected */
translationX +=0.2;
break;
case 'w': /* move up */
translationY +=0.2;
break;
case 'X':
rotationX +=10;
break;
case 'x':
rotationX -=10;
break;
case 'Y':
rotationY +=10;
break;
case 'y':
rotationY -=10;
break;
case 'Z':
rotationZ +=10;
break;
case 'z':
rotationZ -=10;
break;
case 'S':
scaleFactor *=1.1;
break;
case 's':
scaleFactor *=0.9;
break;
case 27:
translationX=0;
translationY=0;
rotationX=0;
rotationY=0;
rotationZ=0;
scaleFactor=1;
break;
case 'q':
exit(0);
break;
}
glutPostRedisplay();
}
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Develop a program to demonstrate Animation effects on simple objects
glutInitDisplayMode (GLUT_DOUBLE);
This provides two buffers, called the front buffer and the back buffer, that we can use alternately to refresh
the screen display. While one buffer is acting as the refresh buffer for the current display window, the next
frame of an animation can be constructed in the other buffer. We specify when the roles of the two buffers are to
be interchanged using
glutSwapBuffers ( );
#include <GL/glut.h>
#include <stdlib.h>
#include <math.h>

/* Globals omitted from the printed listing; initial values are assumptions.
   t is the current rotation angle in degrees, speed the per-frame increment. */
float t = 0, speed = 0.05;
float x, y, c;
void init()
{
glClearColor(1, 1, 1, 1); // White background
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-2, 2, -2, 2);
}
void display(void)
{
c=3.142/180;
x = cos(t*c);
y = sin(t*c);
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1.0, 0.0, 0.0);
glBegin(GL_POLYGON);
glVertex2f(x, y);
glVertex2f(-y, x);
glVertex2f(-x, -y);
glVertex2f(y, -x);
glEnd();
glutSwapBuffers( );
//glFlush();
}
void rotate()
{
    if (t <= 360)
        t = t + speed;
    else
        t = 0;
    glutPostRedisplay();
}

/* main() is not shown in the printed listing; reconstructed to use the
   double-buffered mode discussed above. */
int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutCreateWindow("Animation Effects");
    init();
    glutDisplayFunc(display);
    glutIdleFunc(rotate); /* advance the rotation angle between frames */
    glutMainLoop();
    return 0;
}
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Write a program to read a digital image. Split and display the image into 4 quadrants: up, down, right, and left.
Defining Functions:
split_image: This function takes an image as input and splits it into four quadrants: top left, top right,
bottom left, and bottom right. It calculates the dimensions of the image and then slices the image array
accordingly to extract each quadrant.
display_images: This function takes a list of images and a list of window names as input. It displays
each image in a separate window with the corresponding window name.
We specify the path to the image file that we want to read. If the image is loaded successfully, it is stored in the
variable image. If the image loading fails (e.g., due to an incorrect file path), an error message is printed.
The program then calls the display_images function to display each quadrant in a separate window. It passes a
list containing the four quadrants as the first argument and a list of window names as the second argument.
Each window will display one quadrant of the original image.
1. cv2.waitKey(0):
This function waits for a keyboard event indefinitely (0 milliseconds).
It allows the program to wait until a key is pressed by the user.
In this program, it’s used to keep the windows open until a key is pressed, preventing them from
closing immediately after being displayed.
2. cv2.destroyAllWindows():
This function closes all the OpenCV windows.
It’s used to clean up and close all the windows opened by OpenCV at the end of the program.
In this program, it’s called after cv2.waitKey(0) to ensure that all windows are closed when the user
presses a key to exit the program.
Together, cv2.waitKey(0) and cv2.destroyAllWindows() ensure that the program waits for user input to exit
and then closes all windows properly when the program terminates.
import cv2
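# The split_image and display_images helpers described above are missing from
# the printed listing; the following is a minimal sketch reconstructed from
# that description.
def split_image(image):
    # Slice the image array into four quadrants at its midpoints.
    height, width = image.shape[:2]
    half_h, half_w = height // 2, width // 2
    top_left = image[:half_h, :half_w]
    top_right = image[:half_h, half_w:]
    bottom_left = image[half_h:, :half_w]
    bottom_right = image[half_h:, half_w:]
    return top_left, top_right, bottom_left, bottom_right

def display_images(images, window_names):
    # Show each image in its own named window.
    for img, name in zip(images, window_names):
        cv2.imshow(name, img)

# Read the input image (file name assumed; replace with your own path).
image = cv2.imread("image.png")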
if image is None:
    print("Failed to load the image.")
else:
    # Split the image into quadrants
    top_left, top_right, bottom_left, bottom_right = split_image(image)
    # Display each quadrant in a separate window
    display_images([top_left, top_right, bottom_left, bottom_right],
                   ["Top Left", "Top Right", "Bottom Left", "Bottom Right"])
    cv2.waitKey(0)
    cv2.destroyAllWindows()
ORIGINAL IMAGE
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Write a program to show rotation, scaling, and translation on an image.
In this program:
We read an image from a file specified by image_path.
If the image is loaded successfully, we display the original image using cv2.imshow.
We then perform three transformations on the image:
1. Rotation: We rotate the image by an angle of 45 degrees around its center
using cv2.getRotationMatrix2D and cv2.warpAffine.
2. Scaling: We scale down the image by a factor of 0.5 using cv2.resize.
3. Translation: We translate the image by 100 pixels to the right and 50 pixels up using a translation
matrix and cv2.warpAffine.
Finally, we display the transformed images (rotated_image, scaled_image, and translated_image)
using cv2.imshow.
We wait for any key press to close the windows and then use cv2.destroyAllWindows() to clean up
and close all OpenCV windows.
You can replace “image.png” with the path to your own image file. Run this script, and you’ll see the original
image and the transformed images (rotated, scaled, and translated) displayed in separate windows.
import cv2
import numpy as np
# Read the input image ("image.png" as suggested in the description above)
image_path = "image.png"
image = cv2.imread(image_path)

if image is None:
    print("Failed to load the image.")
else:
    # Display the original image
    cv2.imshow("Original Image", image)

    # Rotation
    angle = 45  # Rotation angle in degrees
    center = (image.shape[1] // 2, image.shape[0] // 2)  # Center of rotation
    rotation_matrix = cv2.getRotationMatrix2D(center, angle, 1.0)  # Rotation matrix
    rotated_image = cv2.warpAffine(image, rotation_matrix, (image.shape[1], image.shape[0]))

    # Scaling
    scale_factor = 0.5  # Scaling factor (0.5 means half the size)
    scaled_image = cv2.resize(image, None, fx=scale_factor, fy=scale_factor)

    # Translation matrix (100 pixels right, 50 pixels up)
    translation_matrix = np.float32([[1, 0, 100], [0, 1, -50]])
    translated_image = cv2.warpAffine(image, translation_matrix, (image.shape[1], image.shape[0]))

    # Display the transformed images
    cv2.imshow("Rotated Image", rotated_image)
    cv2.imshow("Scaled Image", scaled_image)
    cv2.imshow("Translated Image", translated_image)

    cv2.waitKey(0)
    cv2.destroyAllWindows()
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Read an image and extract and display low-level features such as edges and textures using filtering techniques.
In this program:
You can replace "image.png" with the path to your own image file. Run this script, and you’ll see the original
grayscale image along with the edges extracted using Sobel and Laplacian filters, as well as the image with
Gaussian blur applied to it.
import cv2
import numpy as np

# Read the input image in grayscale ("image.png" as the description suggests)
image = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

if image is None:
    print("Failed to load the image.")
else:
    # Display the original grayscale image
    cv2.imshow("Original Image", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Write a program to blur and smooth an image.
In this program:
We read an image from a file specified by image_path using cv2.imread.
We display the original image using cv2.imshow.
We apply three different blurring techniques:
1. Blur: We apply a simple averaging filter to the image using cv2.blur.
2. Gaussian Blur: We apply Gaussian blur to the image using cv2.GaussianBlur. This is more
effective in reducing noise while preserving edges compared to simple blur.
3. Median Blur: We apply median blur to the image using cv2.medianBlur. This is effective in
removing salt-and-pepper noise.
We display the blurred images using cv2.imshow.
We use cv2.waitKey(0) to wait for any key press to close the windows,
and cv2.destroyAllWindows() to clean up and close all OpenCV windows.
You can replace "art.png" with the path to your own image file. Run this script, and you’ll see the original
image along with the images after applying different blur and smoothing filters.
import cv2

# Read the input image ("art.png" as the description suggests)
image = cv2.imread("art.png")

if image is None:
    print("Failed to load the image.")
else:
    # Display the original image
    cv2.imshow("Original Image", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
ORIGINAL IMAGE
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Write a program to contour an image.
import cv2

# Read the input image (file name assumed; replace with your own path)
image = cv2.imread("image.png")

if image is None:
    print("Failed to load the image.")
else:
    # Convert the image to grayscale
    gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
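    # The printed listing stops after the grayscale conversion; a minimal
    # sketch of the usual contouring steps (threshold value is an assumption).
    _, thresh = cv2.threshold(gray_image, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    result = image.copy()
    cv2.drawContours(result, contours, -1, (0, 255, 0), 2)  # green contour lines
    cv2.imshow("Original Image", image)
    cv2.imshow("Contours", result)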
    cv2.waitKey(0)
    cv2.destroyAllWindows()
ORIGINAL IMAGE
1.5 Remarks
FACULTY SIGNATURE
Experiments
1.0 Experiment
TITLE: Write a program to detect face(s) in an image.
import cv2

# Read the input image (file name assumed; replace with your own path)
image = cv2.imread("image.png")

if image is None:
    print("Failed to load the image.")
else:
    # Convert the image to grayscale
    gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
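    # The printed listing stops after the grayscale conversion; a minimal
    # sketch of face detection using OpenCV's bundled Haar cascade
    # (detection parameters are assumptions).
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    for (fx, fy, fw, fh) in faces:
        cv2.rectangle(image, (fx, fy), (fx + fw, fy + fh), (0, 255, 0), 2)
    cv2.imshow("Detected Faces", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()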
ORIGINAL IMAGE
1.5 Remarks
FACULTY SIGNATURE