
Lab Manual / Semester - 6th

Computer Science and Engineering Department, Hirasugar Institute of Technology

Overview

Year / Semester III Year Academic Year 2023-24


Laboratory Title Computer Graphics and Image Processing Laboratory
Laboratory Code 21CSL66
Total Contact Hours 24 Hours Duration of SEE 3 Hours
IA Marks 50 Marks SEE Marks 50 Marks
Lab Manual Author Dr. M. G. Huddar Sign - Date
Checked By Dr. M. G. Huddar Sign - Date

Objectives

1. Demonstrate the use of OpenGL.


2. Demonstrate drawing of different geometric objects using OpenGL.
3. Demonstrate 2D/3D transformations on simple objects.
4. Demonstrate lighting effects on the created objects.
5. Demonstrate image processing operations on images.

Description

1.0 Learning Objectives

This course will enable students to


1. Use OpenGL/OpenCV for the development of mini projects.
2. Analyze the mathematics and design required to demonstrate basic geometric transformation
techniques.
3. Demonstrate the ability to design and develop interactive input techniques.
4. Apply the concepts to develop user-friendly applications using graphics and image processing concepts.

2.0 Learning Outcomes

 Use OpenGL/OpenCV for the development of mini projects.
 Analyze the mathematics and design required to demonstrate basic geometric transformation
techniques.
 Demonstrate the ability to design and develop interactive input techniques.
 Apply the concepts to develop user-friendly applications using graphics and image processing concepts.

Prerequisites

 C Programming Language
 Python

Base Course

 Computer Graphics
 Fundamentals of Image Processing

Introduction

Computer Graphics:
Definition: Computer graphics is concerned with all aspects of producing, storing, and rendering
pictures or images using a computer.
Types of Computer Graphics:
1. Interactive Computer Graphics / Active Computer Graphics
Interactive computer graphics involves two-way communication between the computer and the user.
The observer is given some control over the image through an input device, for example the video
game controller of a ping-pong game, which lets him signal his requests to the computer. The
computer, on receiving signals from the input device, modifies the displayed picture
appropriately. To the user it appears that the picture changes instantaneously in response to
his commands. He can give a series of commands, each one generating a graphical response from
the computer. In this way he maintains a conversation, or dialogue, with the computer.
Interactive computer graphics affects our lives in a number of indirect ways. For example, it
helps to train airline pilots. A flight simulator allows pilots to be trained not in a real
aircraft but on the ground at the controls of the simulator. The flight simulator is a mock-up
of an aircraft flight deck, containing all the usual controls and surrounded by screens onto
which computer generated views of the terrain visible on takeoff and landing are projected.
Flight simulators have many advantages over real aircraft for training purposes, including fuel
savings, safety, and the ability to familiarize the trainee with a large number of the world’s
airports.
2. Non-interactive Computer Graphics / Passive Computer Graphics
Non-interactive computer graphics, otherwise known as passive computer graphics, is computer
graphics in which the user does not have any control over the image. The image is merely the
product of a static stored program and is produced according to the instructions in the program,
executed linearly. The image is totally under the control of the program instructions, not the
user. Example: screen savers.

A Graphics System
A high level computer graphics system consists of five major elements,
1. Input devices
2. Processor
3. Memory
4. Frame buffer
5. Output devices

Pixels and Frame buffer


 Almost all graphics systems are raster based. A picture is produced as an array of picture
elements or pixels within the graphics system.
 Pixels are stored in a part of memory called the frame buffer. The frame buffer can be viewed as
the core element of a graphics system.

 Resolution – the number of pixels in the frame buffer determines the detail that one can see in the
image.
 The depth / precision of the frame buffer defined as the number of bits that are used for each
pixel, determines properties such as how many colors can be represented on a given system.
Example: a 1-bit frame buffer allows only 2 colors (black and white), an 8-bit-deep frame buffer
allows 2^8 = 256 colors, and so on.
 In a full-color system there are 24 bits/pixel. Such a system can display images realistically.
These are also called true-color systems or RGB-color systems.
 In a very simple system, the frame buffer holds only the colored pixels that are displayed on
the screen. In most systems, frame buffer holds far more information, such as depth
information needed for creating images from three dimensional data.
Memory
 Frame buffer is usually implemented with the special type of memory chips that enable the fast
redisplay of the content of frame buffer.
 The frame buffer is part of the system memory.
Processor
 In simple systems, there may be only one processor, the central processing unit (CPU) of the
system, which must do both the normal processing and the graphical processing.
 The main graphical function of the processor is to take specification of the graphical primitives
such as lines, polygons generated by the applications program and to assign values to the
pixels in the frame buffer that best represent these entities.
 The conversion of geometric entities to pixel colors in frame buffer is known as rasterization
or scan conversion.

 Today virtually all graphics systems are characterized by special-purpose graphics processing
units (GPUs). The GPU can be either on the motherboard of the system or on a graphics card.
Output devices
 The dominant type of display is cathode ray tube (CRT).
 When electrons strike the phosphor coating on the tube light is emitted.
 The direction of the beam is controlled by two pairs of deflection plates.
 The output of the computer is converted, by digital- to-analog converters, to voltages across
the x and y deflection plates. Light appears on the surface of the CRT when a sufficiently
intense beam of electrons is directed at the phosphor.
 If the voltages steering the beam change at a constant rate, the beam will trace a straight line,
visible to a viewer. Such a device is known as the random-scan, calligraphic, or vector CRT,
because the beam can be moved directly from any position to any other position.
 If intensity of the beam is turned off, the beam can be moved to a new position without
changing any visible display. This configuration was the basis of early graphics systems that
predated the present raster technology.
 A typical CRT will emit light for only a short time—usually, a few milliseconds after the
phosphor is excited by the electron beam. For a human to see a steady, flicker-free image on
most CRT displays, the same path must be retraced, or refreshed, by the beam at a sufficiently
high rate, the refresh rate.
 In older systems, the refresh rate is determined by the frequency of the power system, 60
cycles per second or 60 Hertz (Hz) in the United States and 50 Hz in much of the rest of the
world.
 In a raster system, the graphics system takes pixels from the frame buffer and displays them as
points on the surface of the display in one of two fundamental ways.
 In a non-interlaced system, the pixels are displayed row by row, or scan line by scan line, at
the refresh rate. In an interlaced display, odd rows and even rows are refreshed alternately.
Interlaced displays are used in commercial television.
 In an interlaced display operating at 60 Hz, the screen is redrawn in its entirety only 30 times
per second, although the visual system is tricked into thinking the refresh rate is 60 Hz rather
than 30Hz.
 Color CRTs have three different colored phosphors (red, green, and blue), arranged in small
groups. One common style arranges the phosphors in triangular groups called triads, each triad
consisting of three phosphors, one of each primary.
 In the shadow-mask CRT a metal screen with small holes (the shadow mask) ensures that an
electron beam excites only phosphors of the proper color.

The OpenGL Interface


 OpenGL functions are in a single library named GL (or OpenGL in Windows). Function
names begin with the letters gl.
 To interface with the window system and to get input from external devices into our programs,
we need at least one more library. For each major window system there is a system-specific
library that provides the “glue” between the window system and OpenGL.
 For the X Window System, this library is called GLX; for Windows, it is wgl; and for the
Macintosh, it is agl. Rather than using a different library for each system, we use a readily
available library called the OpenGL Utility Toolkit (GLUT).
 OpenGL makes heavy use of defined constants to increase code readability and avoid the use
of magic numbers. Thus, strings such as GL_FILL and GL_POINTS are defined in header (.h)

files.
 In most implementations, one of the include lines
#include <GL/glut.h>
or
#include <GLUT/glut.h>
is sufficient to read in glut.h and gl.h.
 Figure shows the organization of the libraries for an X Window System environment.

Image Processing:

Definition: Image processing is the process of transforming an image into a digital form and performing certain
operations to get some useful information from it. The image processing system usually treats all images as 2D
signals when applying certain predetermined signal processing methods.

The basic steps involved in digital image processing are:


 Image acquisition: This involves capturing an image using a digital camera or scanner, or importing an
existing image into a computer.
 Image enhancement: This involves improving the visual quality of an image, such as increasing
contrast, reducing noise, and removing artifacts.
 Image restoration: This involves removing degradation from an image, such as blurring, noise, and
distortion.
 Image segmentation: This involves dividing an image into regions or segments, each of which
corresponds to a specific object or feature in the image.
 Image representation and description: This involves representing an image in a way that can be
analyzed and manipulated by a computer, and describing the features of an image in a compact and
meaningful way.
 Image analysis: This involves using algorithms and mathematical models to extract information from
an image, such as recognizing objects, detecting patterns, and quantifying features.
 Image synthesis and compression: This involves generating new images or compressing existing
images to reduce storage and transmission requirements.
 Digital image processing is widely used in a variety of applications, including medical imaging, remote
sensing, computer vision, and multimedia.

Advantages of Digital Image Processing:

 Improved image quality: Digital image processing algorithms can improve the visual quality of images,
making them clearer, sharper, and more informative.
 Automated image-based tasks: Digital image processing can automate many image-based tasks, such as
object recognition, pattern detection, and measurement.
 Increased efficiency: Digital image processing algorithms can process images much faster than humans,
making it possible to analyze large amounts of data in a short amount of time.
 Increased accuracy: Digital image processing algorithms can provide more accurate results than
humans, especially for tasks that require precise measurements or quantitative analysis.

Disadvantages of Digital Image Processing:
 High computational cost: Some digital image processing algorithms are computationally intensive and
require significant computational resources.
 Limited interpretability: Some digital image processing algorithms may produce results that are
difficult for humans to interpret, especially for complex or sophisticated algorithms.
 Dependence on quality of input: The quality of the output of digital image processing algorithms is
highly dependent on the quality of the input images. Poor quality input images can result in poor quality
output.
 Limitations of algorithms: Digital image processing algorithms have limitations, such as the difficulty
of recognizing objects in cluttered or poorly lit scenes, or the inability to recognize objects with
significant deformations or occlusions.
 Dependence on good training data: The performance of many digital image processing algorithms is
dependent on the quality of the training data used to develop the algorithms. Poor quality training data
can result in poor performance of the algorithm.

Resources Required
 Anaconda
 Open CV
 Code Blocks
 Free Glut

General Instructions
1. Student should be punctual to the Lab
2. Required to prepare the Lab report every week
3. Required to maintain the Lab record properly

4. Should use the resources properly

CONTENTS
Expt. No. Experiments Date Planned Date Conducted
1 Develop a program to draw a line using Bresenham’s line
drawing technique
2 Develop a program to demonstrate basic geometric operations
on the 2D object
3 Develop a program to demonstrate basic geometric operations
on the 3D object
4 Develop a program to demonstrate 2D transformation on basic
objects
5 Develop a program to demonstrate 3D transformation on 3D
objects
6 Develop a program to demonstrate Animation effects on
simple objects.
7 Write a Program to read a digital image. Split and display
image into 4 quadrants, up, down, right and left.
8 Write a program to show rotation, scaling, and translation on
an image.
9 Read an image and extract and display low-level features
such as edges, textures using filtering techniques.
10 Write a program to blur and smoothing an image.
11 Write a program to contour an image.
12 Write a program to detect a face/s in an image.
PART B Practical Based Learning
Student should develop a mini project and demonstrate it in the laboratory
examination. Some possible projects are listed below; the list is not limited to these:
 Recognition of License Plate through Image Processing
 Recognition of Face Emotion in Real-Time
 Detection of Drowsy Driver in Real-Time
 Recognition of Handwriting by Image Processing
 Detection of Kidney Stone
 Verification of Signature
 Compression of Color Image
 Classification of Image Category
 Detection of Skin Cancer
 Marking System of Attendance using Image Processing
 Detection of Liver Tumor
 IRIS Segmentation
 Detection of Skin Disease and / or Plant Disease
 Biometric Sensing System.
 Projects which help farmers to understand present developments in agriculture.
 Projects which help high school/college students to understand scientific problems.
 Simulation projects which help to understand innovations in science and technology.

Evaluation Scheme
The CIE marks for the practical course are 50.

The split-up of CIE marks for record/ journal and test are in the ratio 60:40.
 Each experiment to be evaluated for conduction with observation sheet and record write-up. Rubrics
for the evaluation of the journal/write-up for hardware/software experiments designed by the faculty
who is handling the laboratory session and is made known to students at the beginning of the practical
session.
 Record should contain all the specified experiments in the syllabus and each experiment write- up will
be evaluated for 10 marks.
 Total marks scored by the students are scaled down to 30 marks (60% of maximum marks).
 Weightage to be given for neatness and submission of record/write-up on time.
 Department shall conduct 02 tests for 100 marks, the first test shall be conducted after the 8th
week of the semester and the second test shall be conducted after the 14th week of the semester.
 In each test, test write-up, conduction of experiment, acceptable result, and procedural knowledge
will carry a weightage of 60% and the rest 40% for viva-voce.
 The suitable rubrics can be designed to evaluate each student’s performance and learning ability.
Rubrics suggested in Annexure-II of Regulation book
 The average of 02 tests is scaled down to 20 marks (40% of the maximum marks).

The Sum of scaled-down marks scored in the report write-up/journal and average marks of two tests is the total
CIE marks scored by the student.

Reference

 https://nptel.ac.in/courses/106/106/106106090/
 https://nptel.ac.in/courses/106/102/106102063/
 https://nptel.ac.in/courses/106/103/106103224/
 https://nptel.ac.in/courses/106/102/106102065/
 https://www.tutorialspoint.com/opencv/
 https://medium.com/analytics-vidhya/introduction-to-computer-vision-opencv-in-python- fb722e805e8b

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


01.

1.1 Learning Objectives

TITLE: Develop a program to draw a line using Bresenham’s line drawing technique

1.2 Theory / Hypothesis

 The draw_line function implements Bresenham’s line drawing algorithm.
 The myDisplay function is the GLUT display callback, where the line is drawn using
draw_line.
 The myInit function sets up the OpenGL state: it specifies the clear color and the 2D
orthographic projection.
 In main, we read the line endpoints from the user, initialize the GLUT window, register the
display callback, and start the GLUT main loop with glutMainLoop().
 Compile this program with a C compiler that links against the GLUT and OpenGL libraries. When
you run the compiled program, it should open a 500 x 500 OpenGL window displaying a line drawn
using Bresenham’s algorithm between the two endpoints entered by the user. Adjust the window
size and line coordinates as needed.

1.3 Procedure / Program / Activity

#include <GL/glut.h>
#include <stdio.h>
int x1,x2,y1,y2;
void myInit()
{
glClearColor(1.0,1.0,1.0,1.0);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
gluOrtho2D(0,500,0,500);

}
void draw_pixel(int x,int y)
{
glColor3f(1.0,0.0,0.0);
glBegin(GL_POINTS);
glVertex2i(x,y);
glEnd();
}
void draw_line(int x1,int x2,int y1,int y2)
{

int dx,dy,i,e;
int incx,incy,inc1,inc2;
int x,y;
dx=x2-x1;
dy=y2-y1;
if(dx<0)
dx=-dx;
if(dy<0)
dy=-dy;
incx=1;
if(x2<x1)
incx=-1;
incy=1;
if(y2<y1)
incy=-1;
x=x1;
y=y1;
if(dx>dy)
{
draw_pixel(x,y);
e=2*dy-dx;
inc1=2*(dy-dx);
inc2=2*dy;
for(i=0;i<dx;i++)
{

if(e>=0)
{
y+=incy;
e+=inc1;
}
else
e+=inc2;
x+=incx;
draw_pixel(x,y);

}
}
else
{
draw_pixel(x,y);
e=2*dx-dy;
inc1=2*(dx-dy);
inc2=2*dx;
for(i=0;i<dy;i++)
{
if(e>0)
{
x+=incx;
e+=inc1;

}
else
e+=inc2;
y+=incy;
draw_pixel(x,y);
}
}
}
void myDisplay()
{
draw_line(x1,x2,y1,y2);
glFlush();
}

int main(int argc,char**argv)


{
printf("enter (x1,y1,x2,y2)\n");
scanf("%d%d%d%d",&x1,&y1,&x2,&y2);
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutInitWindowSize(500,500);
glutInitWindowPosition(0,0);
glutCreateWindow("Bresenham's Line Drawing");
myInit();
glutDisplayFunc(myDisplay);
glutMainLoop();
}

1.4 Results & Analysis

Enter (x1,y1,x2,y2)
100 100 400 400

1.5 Remarks

FACULTY SIGNATURE

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


02

TITLE: Develop a program to demonstrate basic geometric operations on the 2D object

1.2 Theory / Hypothesis

 The glMatrixMode(GL_PROJECTION) and glLoadIdentity() calls in the init function
ensure that the projection matrix is properly set up.
 We define a simple drawRectangle function that draws a rectangle using OpenGL primitives.
 The display function is responsible for rendering the scene. It applies translation (glTranslatef),
rotation (glRotatef), and scaling (glScalef) transformations to the rectangle before drawing it.
 The glTranslatef, glRotatef, and glScalef calls in the display function are applied within
the GL_MODELVIEW matrix mode.
 The glColor3f(1.0f, 0.0f, 0.0f) call sets the drawing color to red (RGB values: 1.0, 0.0, 0.0).
 Keyboard input is handled by the keyboard function. Keys ‘d’, ‘a’, ‘w’ and ‘z’ translate the
rectangle, ‘r’ and ‘R’ rotate it, ‘s’ and ‘S’ scale it, Esc resets all transformations, and ‘q’ exits.
 The init function initializes the OpenGL settings.
 In main, we create the GLUT window, register the display and keyboard callbacks, and start the
GLUT main loop.

1.3 Procedure / Program / Activity

#include <GL/glut.h>
#include <stdlib.h> // for exit()
float rectWidth = 100;
float rectHeight = 50;
float translationX = 0;
float translationY = 0;
float rotationAngle = 0;
float scaleFactor = 1;
float x=0, y=0;

void init()
{
glClearColor(1, 1, 1, 1); // White background
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, 500, 0, 500);
}

void drawRectangle()
{
glBegin(GL_POLYGON);
glVertex2f(x, y);
glVertex2f(x + rectWidth, y);
glVertex2f(x + rectWidth, y + rectHeight);
glVertex2f(x, y + rectHeight);
glEnd();
}

void display()
{
glClear(GL_COLOR_BUFFER_BIT);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Apply transformations
glTranslatef(translationX, translationY, 0);
glRotatef(rotationAngle, 0, 0, 1);
glScalef(scaleFactor, scaleFactor, 1);

// Draw rectangle
glColor3f(1, 0, 0); // Red color
drawRectangle();

glFlush();
}

void keyboard(unsigned char key, int keyx, int keyy)
{
switch (key)
{
case 'd':
// Translate the rectangle by 10 units in the x-direction
translationX += 10;
break;
case 'a':
// Translate the rectangle by 10 units in the Negative x-direction
translationX -= 10;
break;
case 'w':
// Translate the rectangle by 10 units in the y-direction
translationY += 10;
break;
case 'z':
// Translate the rectangle by 10 units in the Negative y-direction
translationY -= 10;
break;
case 'R':
// Rotate the rectangle by 10 degrees anticlockwise
rotationAngle += 10;
break;
case 'r':

// Rotate the rectangle by 10 degrees clockwise


rotationAngle -= 10;
break;
case 'S':
// Enlarge the rectangle by 10% (scaleFactor *= 1.1)
scaleFactor *= 1.1;
break;
case 's':
// Shrink the rectangle by 10% (scaleFactor *= 0.9)
scaleFactor *= 0.9;
break;
case 27:
// Reset transformations
translationX = 0;
translationY = 0;
rotationAngle = 0;
scaleFactor = 1;
break;
case 'q': // Exit
exit(0);
break;
}
glutPostRedisplay(); // Trigger a redraw
}

// Main function
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(500, 500);
glutCreateWindow("Geometric Operations in 2D");
init();
glutDisplayFunc(display);
glutKeyboardFunc(keyboard);
glutMainLoop();
}

1.4 Results & Analysis

1.5 Remarks

FACULTY SIGNATURE

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


03

TITLE: Develop a program to demonstrate basic geometric operations on the 3D object

1.2 Theory / Hypothesis

 We define a drawCube function that draws a simple solid cube using glutSolidCube.
 The display function sets up the model-view matrix and applies translation (glTranslatef), rotation
(glRotatef), and scaling (glScalef) transformations to the cube before drawing it.
 Keyboard input is handled by the keyboard function. Keys ‘x’/‘X’, ‘y’/‘Y’ and ‘z’/‘Z’ rotate the
cube about the X, Y and Z axes, while ‘S’ and ‘s’ scale the cube up and down, respectively; Esc
resets the transformations and ‘q’ exits.
 The init function initializes the OpenGL settings and clears the screen.
 The main function creates the GLUT window, registers the display and keyboard callbacks, and
starts the GLUT main loop.

1.3 Procedure / Program / Activity

#include<GL/glut.h>
#include<stdlib.h> // for exit()
float translationX=0;
float translationY=0;
float rotationX=0,rotationY=0,rotationZ=0;
float scaleFactor=1;
void init()
{
glClearColor(1,1,1,1);
glClear(GL_COLOR_BUFFER_BIT);
}
void drawCube()
{
glutSolidCube(0.2);
}
void display()
{

glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(translationX,translationY,0);
glRotatef(rotationX,1,0,0);
glRotatef(rotationY,0,1,0);
glRotatef(rotationZ,0,0,1);
glScalef(scaleFactor,scaleFactor,scaleFactor);
glColor3f(1,0,0);
drawCube();
glFlush();
}
void keyboard(unsigned char key,int keyX,int keyY)
{
switch(key)
{
case 'd':
translationX +=0.2;
break;
case 'w':
translationY +=0.2;
break;
case 'X':
rotationX +=10;
break;
case 'x':
rotationX -=10;
break;
case 'Y':
rotationY +=10;
break;
case 'y':
rotationY -=10;
break;
case 'Z':
rotationZ +=10;
break;
case 'z':
rotationZ -=10;
break;
case 'S':
scaleFactor *=1.1;
break;
case 's':
scaleFactor *=0.9;
break;
case 27:
translationX=0;
translationY=0;
rotationX=0;
rotationY=0;
rotationZ=0;
scaleFactor=1;
break;
case 'q':
exit(0);
break;
}
glutPostRedisplay();

}
int main(int argc,char** argv)
{
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutInitWindowSize(500,500);
glutCreateWindow("Geometric Operations in 3D");
init();
glutDisplayFunc(display);
glutKeyboardFunc(keyboard);
glutMainLoop();
}

1.4 Results & Analysis

1.5 Remarks

FACULTY SIGNATURE

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


04

TITLE: Develop a program to demonstrate 2D transformation on basic objects

1.2 Theory / Hypothesis

 The glMatrixMode(GL_PROJECTION) and glLoadIdentity() calls in the init function
ensure that the projection matrix is properly set up.
 We define a simple drawRectangle function that draws a rectangle using OpenGL primitives.
 The display function is responsible for rendering the scene. It applies translation (glTranslatef),
rotation (glRotatef), and scaling (glScalef) transformations to the rectangle before drawing it.
 The glTranslatef, glRotatef, and glScalef calls in the display function are applied within
the GL_MODELVIEW matrix mode.
 The glColor3f(1.0f, 0.0f, 0.0f) call sets the drawing color to red (RGB values: 1.0, 0.0, 0.0).
 Keyboard input is handled by the keyboard function. Keys ‘d’, ‘a’, ‘w’ and ‘z’ translate the
rectangle, ‘r’ and ‘R’ rotate it, ‘s’ and ‘S’ scale it, Esc resets all transformations, and ‘q’ exits.
 The init function initializes the OpenGL settings.
 In main, we create the GLUT window, register the display and keyboard callbacks, and start the
GLUT main loop.

1.3 Procedure / Program / Activity

#include <GL/glut.h>
#include <stdlib.h> // for exit()
float rectWidth = 100;
float rectHeight = 50;
float translationX = 0;
float translationY = 0;
float rotationAngle = 0;
float scaleFactor = 1;
float x=0, y=0;

void init()
{
glClearColor(1, 1, 1, 1); // White background
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, 500, 0, 500);
}

void drawRectangle()
{
glBegin(GL_POLYGON);
glVertex2f(x, y);
glVertex2f(x + rectWidth, y);
glVertex2f(x + rectWidth, y + rectHeight);
glVertex2f(x, y + rectHeight);
glEnd();
}

void display()
{
glClear(GL_COLOR_BUFFER_BIT);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Apply transformations
glTranslatef(translationX, translationY, 0);
glRotatef(rotationAngle, 0, 0, 1);
glScalef(scaleFactor, scaleFactor, 1);

// Draw rectangle
glColor3f(1, 0, 0); // Red color
drawRectangle();

glFlush();
}

void keyboard(unsigned char key, int keyx, int keyy)
{
switch (key)
{
case 'd':
// Translate the rectangle by 10 units in the x-direction
translationX += 10;
break;
case 'a':
// Translate the rectangle by 10 units in the Negative x-direction
translationX -= 10;
break;
case 'w':
// Translate the rectangle by 10 units in the y-direction
translationY += 10;
break;
case 'z':
// Translate the rectangle by 10 units in the Negative y-direction
translationY -= 10;
break;
case 'R':
// Rotate the rectangle by 10 degrees anticlockwise
rotationAngle += 10;
break;
case 'r':

// Rotate the rectangle by 10 degrees clockwise


rotationAngle -= 10;
break;
case 'S':
// Enlarge the rectangle by 10% (scaleFactor *= 1.1)
scaleFactor *= 1.1;
break;
case 's':
// Shrink the rectangle by 10% (scaleFactor *= 0.9)
scaleFactor *= 0.9;
break;
case 27:
// Reset transformations
translationX = 0;
translationY = 0;
rotationAngle = 0;
scaleFactor = 1;
break;
case 'q': // Exit
exit(0);
break;
}
glutPostRedisplay(); // Trigger a redraw
}

// Main function
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(500, 500);
glutCreateWindow("Geometric Operations in 2D");
init();
glutDisplayFunc(display);
glutKeyboardFunc(keyboard);
glutMainLoop();
}

1.4 Results & Analysis

1.5 Remarks

FACULTY SIGNATURE

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


05

TITLE: Develop a program to demonstrate 3D transformation on 3D objects

1.2 Theory / Hypothesis

 The drawCube function draws a simple cube using the GLUT utility glutSolidCube.
 The display function sets up the model-view matrix and applies translation (glTranslatef), rotation
(glRotatef), and scaling (glScalef) transformations to the cube before drawing it.
 Keyboard input is handled by the keyboard function. Pressing 'x'/'X', 'y'/'Y', and 'z'/'Z' rotates the
cube about the X, Y, and Z axes, while 'S' and 's' scale the cube up and down, respectively. The Esc key
resets all transformations and 'q' exits the program.
 The init function sets a white background colour and clears the colour buffer.
 The main function creates the GLUT window, registers the display and keyboard callbacks, and starts
the GLUT main loop.
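Each of the three transformation calls post-multiplies the current model-view matrix by a 4x4 homogeneous matrix, so the cube is effectively transformed by T·R·S. A small NumPy sketch of that composition (illustrative only; the point, angle, and factors are made up for the example):

```python
import numpy as np

def translate(tx, ty, tz):
    # 4x4 homogeneous translation matrix (column-vector convention, as in OpenGL)
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotate_z(deg):
    # Rotation about the z axis by `deg` degrees
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[0, 0], m[0, 1] = c, -s
    m[1, 0], m[1, 1] = s, c
    return m

def scale(k):
    # Uniform scale about the origin
    return np.diag([k, k, k, 1.0])

# glTranslatef, glRotatef, glScalef applied in that order compose as T @ R @ S
mv = translate(0.5, 0.0, 0.0) @ rotate_z(90) @ scale(2.0)
p = mv @ np.array([1.0, 0.0, 0.0, 1.0])  # transform the point (1, 0, 0)
# scale: (2,0,0) -> rotate 90 deg about z: (0,2,0) -> translate: (0.5,2,0)
print(np.round(p, 6))
```

The point is first scaled, then rotated, then translated, exactly as the calls appear in display() in reverse order of application.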

1.3 Procedure / Program / Activity

#include<GL/glut.h>
float translationX=0;
float translationY=0;
float rotationX=0,rotationY=0,rotationZ=0;
float scaleFactor=1;
void init()
{
glClearColor(1,1,1,1);
glClear(GL_COLOR_BUFFER_BIT);
}
void drawCube()
{
glutSolidCube(0.2);
}
void display()
{

glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(translationX,translationY,0);
glRotatef(rotationX,1,0,0);
glRotatef(rotationY,0,1,0);
glRotatef(rotationZ,0,0,1);
glScalef(scaleFactor,scaleFactor,scaleFactor);

glColor3f(1,0,0);
drawCube();
glFlush();
}
void keyboard(unsigned char key,int keyX,int keyY)
{
switch(key)
{
case 'd':
translationX +=0.2;
break;
case 'w':
translationY +=0.2;
break;
case 'X':
rotationX +=10;
break;
case 'x':
rotationX -=10;
break;
case 'Y':
rotationY +=10;
break;
case 'y':
rotationY -=10;
break;
case 'Z':
rotationZ +=10;
break;
case 'z':
rotationZ -=10;
break;
case 'S':
scaleFactor *=1.1;
break;
case 's':
scaleFactor *=0.9;
break;
case 27:
translationX=0;
translationY=0;
rotationX=0;
rotationY=0;
rotationZ=0;
scaleFactor=1;
break;
case 'q':
exit(0);
break;
}
glutPostRedisplay();
}

int main(int argc,char** argv)
{
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutInitWindowSize(500,500);
glutCreateWindow("Geometric Operations in 3D");
init();
glutDisplayFunc(display);
glutKeyboardFunc(keyboard);
glutMainLoop();
}

1.4 Results & Analysis

1.5 Remarks

FACULTY SIGNATURE

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


06

TITLE: Develop a program to demonstrate Animation effects on simple objects.

1.2 Theory / Hypothesis

glutInitDisplayMode (GLUT_DOUBLE);
This provides two buffers, called the front buffer and the back buffer, that we can use alternately to refresh
the screen display. While one buffer is acting as the refresh buffer for the current display window, the next
frame of an animation can be constructed in the other buffer. We specify when the roles of the two buffers are to
be interchanged using
glutSwapBuffers ( );

For a continuous animation, we can use

glutIdleFunc (animationFcn);
where parameter animationFcn can be assigned the name of a procedure that is to perform the operations for
incrementing the animation parameters. This procedure is continuously executed whenever there are no display-
window events that must be processed. The following program illustrates animation, which continuously rotates
a regular square in the xy plane about the z axis.
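The program below generates the square's corners from a single angle t: starting from (x, y) = (cos t, sin t), each successive 90° rotation yields (-y, x), (-x, -y), and (y, -x). A quick NumPy check of that identity (a standalone sketch, not part of the GLUT program):

```python
import numpy as np

t = np.radians(30)            # any rotation angle
x, y = np.cos(t), np.sin(t)
# The four corners used by the display routine
verts = [(x, y), (-y, x), (-x, -y), (y, -x)]

rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation matrix
for (ax, ay), (bx, by) in zip(verts, verts[1:] + verts[:1]):
    # rotating each vertex by 90 degrees gives the next one,
    # so the four points always form a square centred at the origin
    assert np.allclose(rot90 @ np.array([ax, ay]), [bx, by])
print("square vertices:", np.round(verts, 3))
```

Incrementing t in the idle callback therefore spins the whole square about the z axis.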

1.3 Procedure / Program / Activity

#include <GL/glut.h>

#include <stdlib.h>
#include <math.h>

GLint ww=500, hh=500;

GLfloat c, x, y, t=0, speed=1;

void init()
{
glClearColor(1, 1, 1, 1); // White background
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-2, 2, -2, 2);
}

void display(void)
{
c = 3.1415926/180; // degrees to radians
x = cos(t*c);
y = sin(t*c);
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1.0, 0.0, 0.0);
glBegin(GL_POLYGON);
glVertex2f(x, y);
glVertex2f(-y, x);
glVertex2f(-x, -y);
glVertex2f(y, -x);
glEnd();
glutSwapBuffers( );
//glFlush();
}

void rotate()
{
if(t<=360)
t=t+speed;
else
t=0;
glutPostRedisplay();
}

void myMouse(GLint button, GLint state, GLint x, GLint y)
{
if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
{
glutIdleFunc(rotate); // left click starts the rotation
}
if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
{
glutIdleFunc(NULL); // right click stops it
}
}

void myKeyboard(unsigned char key, int x, int y)
{
if(key=='s')
speed=speed-0.1;
if(key=='S')
speed=speed+0.1;
}

int main(int argc, char** argv)
{
glutInit(&argc,argv);
glutInitWindowSize(ww,hh);
glutInitDisplayMode (GLUT_DOUBLE| GLUT_RGB);
glutInitWindowPosition(100,100);
glutCreateWindow("Animation Effects");
glutDisplayFunc(display);
glutIdleFunc(rotate);
init();
glutKeyboardFunc(myKeyboard);
glutMouseFunc(myMouse);
glutMainLoop();
}

1.4 Results & Analysis

1.5 Remarks

FACULTY SIGNATURE

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


07

TITLE: Write a Program to read a digital image. Split and display image into 4 quadrants, up, down, right and
left.

1.2 Theory / Hypothesis

Defining Functions:

 split_image: This function takes an image as input and splits it into four quadrants: top left, top right,
bottom left, and bottom right. It calculates the dimensions of the image and then slices the image array
accordingly to extract each quadrant.
 display_images: This function takes a list of window names and a list of images as input. It displays
each image in a separate window with the corresponding window name.

Reading the Image:

We specify the path to the image file that we want to read. If the image is loaded successfully, it is stored in the
variable image. If the image loading fails (e.g., due to an incorrect file path), an error message is printed.

Displaying the Quadrants:

The program then calls the display_images function to display each quadrant in a separate window. It passes a
list containing the four quadrants as the first argument and a list of window names as the second argument.
Each window will display one quadrant of the original image.

1. cv2.waitKey(0):
 This function waits for a keyboard event indefinitely (0 milliseconds).
 It allows the program to wait until a key is pressed by the user.
 In this program, it’s used to keep the windows open until a key is pressed, preventing them from
closing immediately after being displayed.

2. cv2.destroyAllWindows():
 This function closes all the OpenCV windows.
 It’s used to clean up and close all the windows opened by OpenCV at the end of the program.
 In this program, it’s called after cv2.waitKey(0) to ensure that all windows are closed when the user
presses a key to exit the program.

Together, cv2.waitKey(0) and cv2.destroyAllWindows() ensure that the program waits for user input to exit
and then closes all windows properly when the program terminates.
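The quadrant slicing performed by split_image can be verified on a tiny synthetic array (a hypothetical 4x4 example; no image file is needed):

```python
import numpy as np

# A 4x4 single-channel "image" with a distinct value per quadrant
img = np.array([[1, 1, 2, 2],
                [1, 1, 2, 2],
                [3, 3, 4, 4],
                [3, 3, 4, 4]])

h, w = img.shape
hh, hw = h // 2, w // 2

# The same slices split_image uses on the colour image
top_left = img[:hh, :hw]
top_right = img[:hh, hw:]
bottom_left = img[hh:, :hw]
bottom_right = img[hh:, hw:]

# Each quadrant is 2x2 and holds exactly one value
assert top_left.shape == (2, 2) and (top_left == 1).all()
assert (top_right == 2).all() and (bottom_left == 3).all() and (bottom_right == 4).all()
```

Because OpenCV images are NumPy arrays, the identical slicing works unchanged on a loaded BGR image (with an extra channel axis).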

1.3 Procedure / Program / Activity

import cv2

# Function to split the image into four quadrants
def split_image(image):
    height, width, _ = image.shape
    half_height = height // 2
    half_width = width // 2

    # Split the image into four quadrants
    top_left = image[:half_height, :half_width]
    top_right = image[:half_height, half_width:]
    bottom_left = image[half_height:, :half_width]
    bottom_right = image[half_height:, half_width:]

    return top_left, top_right, bottom_left, bottom_right

# Function to display images
def display_images(window_names, images):
    for name, img in zip(window_names, images):
        cv2.imshow(name, img)

    print("Press any key to terminate.")
    cv2.waitKey(0)
    cv2.destroyAllWindows()

# Read the image
image_path = "Image.png"
image = cv2.imread(image_path)

if image is None:
    print("Failed to load the image.")
else:
    # Split the image into quadrants
    top_left, top_right, bottom_left, bottom_right = split_image(image)

    # Display the quadrants
    display_images(["Top Left", "Top Right", "Bottom Left", "Bottom Right"],
                   [top_left, top_right, bottom_left, bottom_right])

1.4 Results & Analysis

ORIGINAL IMAGE

TOP LEFT TOP RIGHT

BOTTOM LEFT BOTTOM RIGHT

1.5 Remarks

FACULTY SIGNATURE
Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


08

TITLE: Write a program to show rotation, scaling, and translation on an image.

1.2 Theory / Hypothesis

In this program:
 We read an image from a file specified by image_path.
 If the image is loaded successfully, we display the original image using cv2.imshow.
 We then perform three transformations on the image:
1. Rotation: We rotate the image by an angle of 45 degrees around its center
using cv2.getRotationMatrix2D and cv2.warpAffine.
2. Scaling: We scale down the image by a factor of 0.5 using cv2.resize.
3. Translation: We translate the image by 100 pixels to the right and 50 pixels up using a translation
matrix and cv2.warpAffine.
 Finally, we display the transformed images (rotated_image, scaled_image, and translated_image)
using cv2.imshow.
 We wait for any key press to close the windows and then use cv2.destroyAllWindows() to clean up
and close all OpenCV windows.

You can replace "Image.png" with the path to your own image file. Run this script, and you'll see the original
image and the transformed images (rotated, scaled, and translated) displayed in separate windows.
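cv2.getRotationMatrix2D returns a 2x3 affine matrix for rotation about an arbitrary center. The same matrix can be built by hand, which makes the convention explicit (a NumPy sketch; the center and angle here are arbitrary examples):

```python
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    # Same layout OpenCV documents for getRotationMatrix2D: rotate about
    # `center`, counter-clockwise for positive angles, then uniform scale.
    cx, cy = center
    a = np.radians(angle_deg)
    alpha = scale * np.cos(a)
    beta = scale * np.sin(a)
    return np.array([[alpha, beta, (1 - alpha) * cx - beta * cy],
                     [-beta, alpha, beta * cx + (1 - alpha) * cy]])

M = rotation_matrix_2d((100, 100), 90)
# The center of rotation must map to itself
assert np.allclose(M @ np.array([100, 100, 1]), [100, 100])
# A point directly right of the center rotates to directly above it
# (y decreases upward in image coordinates)
assert np.allclose(M @ np.array([110, 100, 1]), [100, 90])
```

cv2.warpAffine then applies this 2x3 matrix to every pixel coordinate; the translation example in the program is the special case where the 2x2 part is the identity.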

1.3 Procedure / Program / Activity

import cv2
import numpy as np

# Read the image
image_path = "Image.png"
image = cv2.imread(image_path)

if image is None:
    print("Failed to load the image.")
else:
    # Display the original image
    cv2.imshow("Original Image", image)

    # Rotation
    angle = 45  # Rotation angle in degrees
    center = (image.shape[1] // 2, image.shape[0] // 2)  # Center of rotation
    rotation_matrix = cv2.getRotationMatrix2D(center, angle, 1.0)  # Rotation matrix
    rotated_image = cv2.warpAffine(image, rotation_matrix, (image.shape[1], image.shape[0]))

    # Scaling
    scale_factor = 0.5  # Scaling factor (0.5 means half the size)
    scaled_image = cv2.resize(image, None, fx=scale_factor, fy=scale_factor)

    # Translation
    translation_matrix = np.float32([[1, 0, 100], [0, 1, -50]])  # 100 pixels right, 50 pixels up
    translated_image = cv2.warpAffine(image, translation_matrix, (image.shape[1], image.shape[0]))

    # Display the transformed images
    cv2.imshow("Rotated Image", rotated_image)
    cv2.imshow("Scaled Image", scaled_image)
    cv2.imshow("Translated Image", translated_image)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

1.4 Results & Analysis

1.5 Remarks

FACULTY SIGNATURE
Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks

09

TITLE: Read an image and extract and display low-level features such as edges, textures using
filtering techniques.

1.2 Theory / Hypothesis

In this program:

 We read an image from a file specified by image_path using cv2.imread.


 We convert the image to grayscale since most filtering techniques operate on single-channel images.
 We display the original grayscale image using cv2.imshow.
 We apply the Sobel filter in both horizontal and vertical directions to extract edges using cv2.Sobel.
 We compute the magnitude of gradients obtained from the Sobel filter using cv2.magnitude.
 We normalize the result to the range [0, 255] using cv2.normalize.
 We display the edges extracted using the Sobel filter.
 We apply the Laplacian filter to extract edges using cv2.Laplacian.
 We normalize the result to the range [0, 255].
 We display the edges extracted using the Laplacian filter.
 We apply Gaussian blur to the image to extract textures using cv2.GaussianBlur.
 We display the image with Gaussian blur.
 We use cv2.waitKey(0) to wait for any key press to close the windows,
and cv2.destroyAllWindows() to clean up and close all OpenCV windows.

You can replace "Image.png" with the path to your own image file. Run this script, and you'll see the original
grayscale image along with the edges extracted using Sobel and Laplacian filters, as well as the image with
Gaussian blur applied to it.
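The Sobel operator itself is just a 3x3 convolution kernel. A hand-rolled NumPy sketch on a toy vertical-edge image shows the kind of response cv2.Sobel produces (correlation form, no border handling; the 5x5 image is made up for illustration):

```python
import numpy as np

# 3x3 Sobel kernel for horizontal gradients (responds to vertical edges)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark left half, bright right half -> one vertical edge
img = np.zeros((5, 5))
img[:, 3:] = 255

def correlate3x3(img, k):
    # Valid-mode 3x3 correlation (what the filter computes, minus borders)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i+3, j:j+3] * k).sum()
    return out

gx = correlate3x3(img, sobel_x)
# The response is zero in the flat region and large at windows touching the edge
print(gx)
```

cv2.Sobel adds border handling and optional output depths, and the program combines the x and y responses into a gradient magnitude with cv2.magnitude.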

1.3 Procedure / Program / Activity

import cv2
import numpy as np

# Read the image in grayscale
image_path = "Image.png"
image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

if image is None:
    print("Failed to load the image.")
else:
    # Display the original grayscale image
    cv2.imshow("Original Image", image)

    # Apply Sobel filter to extract edges
    sobel_x = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)
    sobel_y = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3)
    sobel_edges = cv2.magnitude(sobel_x, sobel_y)
    sobel_edges = cv2.normalize(sobel_edges, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)

    # Display edges extracted using Sobel filter
    cv2.imshow("Edges (Sobel Filter)", sobel_edges)

    # Apply Laplacian filter to extract edges
    laplacian_edges = cv2.Laplacian(image, cv2.CV_64F)
    laplacian_edges = cv2.normalize(laplacian_edges, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)

    # Display edges extracted using Laplacian filter
    cv2.imshow("Edges (Laplacian Filter)", laplacian_edges)

    # Apply Gaussian blur to extract textures
    gaussian_blur = cv2.GaussianBlur(image, (5, 5), 0)

    # Display image with Gaussian blur
    cv2.imshow("Gaussian Blur", gaussian_blur)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

1.4 Results & Analysis

ORIGINAL IMAGE EDGES (SOBEL FILTER)

EDGES (LAPLACIAN FILTER) GAUSSIAN BLUR

1.5 Remarks

FACULTY SIGNATURE

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


10

TITLE: Write a program to blur and smoothing an image.

1.2 Theory / Hypothesis

In this program:
 We read an image from a file specified by image_path using cv2.imread.
 We display the original image using cv2.imshow.
 We apply three different blurring techniques:
1. Blur: We apply a simple averaging filter to the image using cv2.blur.
2. Gaussian Blur: We apply Gaussian blur to the image using cv2.GaussianBlur. This is more
effective in reducing noise while preserving edges compared to simple blur.
3. Median Blur: We apply median blur to the image using cv2.medianBlur. This is effective in
removing salt-and-pepper noise.
 We display the blurred images using cv2.imshow.
 We use cv2.waitKey(0) to wait for any key press to close the windows,
and cv2.destroyAllWindows() to clean up and close all OpenCV windows.
You can replace "Image.png" with the path to your own image file. Run this script, and you'll see the original
image along with the images after applying different blur and smoothing filters.
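The practical difference between the averaging and median filters is easiest to see on impulse ("salt-and-pepper") noise. A one-neighborhood NumPy sketch (no cv2 required; the patch values are invented for the example):

```python
import numpy as np

# A flat gray 3x3 patch corrupted by one "salt" pixel in the middle
patch = np.full((3, 3), 100.0)
patch[1, 1] = 255.0  # impulse noise

mean_value = patch.mean()        # cv2.blur would replace the center with this
median_value = np.median(patch)  # cv2.medianBlur would replace it with this

# The mean is dragged upward by the outlier; the median ignores it entirely
assert mean_value > 100 and np.isclose(median_value, 100.0)
print(mean_value, median_value)
```

This is why the program's median blur removes salt-and-pepper noise cleanly while the averaging blur only smears it.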

1.3 Procedure / Program / Activity

import cv2

# Read the image
image_path = "Image.png"
image = cv2.imread(image_path)

if image is None:
    print("Failed to load the image.")
else:
    # Display the original image
    cv2.imshow("Original Image", image)

    # Apply blur to the image
    blur_kernel_size = (5, 5)  # Kernel size for blur filter
    blurred_image = cv2.blur(image, blur_kernel_size)

    # Display the blurred image
    cv2.imshow("Blurred Image", blurred_image)

    # Apply Gaussian blur to the image
    gaussian_blur_kernel_size = (5, 5)  # Kernel size for Gaussian blur filter
    gaussian_blurred_image = cv2.GaussianBlur(image, gaussian_blur_kernel_size, 0)

    # Display the Gaussian blurred image
    cv2.imshow("Gaussian Blurred Image", gaussian_blurred_image)

    # Apply median blur to the image
    median_blur_kernel_size = 5  # Kernel size for median blur filter (must be odd)
    median_blurred_image = cv2.medianBlur(image, median_blur_kernel_size)

    # Display the median blurred image
    cv2.imshow("Median Blurred Image", median_blurred_image)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

1.4 Results & Analysis

ORIGINAL IMAGE

1.5 Remarks

FACULTY SIGNATURE

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


11

TITLE: Write a program to contour an image.

1.2 Theory / Hypothesis

 We read an image from a file specified by image_path using cv2.imread.


 We convert the image to grayscale using cv2.cvtColor.
 We apply Otsu's thresholding (cv2.threshold with the cv2.THRESH_OTSU flag) to create a binary image
where the regions of interest are highlighted.
 We find contours in the thresholded image using cv2.findContours.
The cv2.RETR_EXTERNAL flag retrieves only the external contours,
and cv2.CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal segments and
leaves only their end points.
 We draw the detected contours on a copy of the original image using cv2.drawContours. The contours
are drawn with green color and thickness 2.
 We display the original image with contours using cv2.imshow.
 We use cv2.waitKey(0) to wait for any key press to close the window,
and cv2.destroyAllWindows() to clean up and close all OpenCV windows.
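Otsu's method, which the cv2.THRESH_OTSU flag applies, selects the threshold that maximizes the between-class variance of the grayscale histogram. A compact NumPy sketch of that search on a toy bimodal image (illustrative; this is not OpenCV's optimized implementation):

```python
import numpy as np

def otsu_threshold(img):
    # Score every candidate threshold by between-class variance and keep the best
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2 / total ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Toy bimodal image: dark background (~50) with a bright object (~200)
img = np.full((10, 10), 50, dtype=np.uint8)
img[3:7, 3:7] = 200
t = otsu_threshold(img)
assert 50 < t <= 200  # the chosen threshold separates the two modes
print("Otsu threshold:", t)
```

The resulting binary mask is what cv2.findContours traces to produce the contour list.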

1.3 Procedure / Program / Activity

import cv2

# Read the image
image_path = "Image.png"
image = cv2.imread(image_path)

if image is None:
    print("Failed to load the image.")
else:
    # Convert the image to grayscale
    gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Apply Otsu's thresholding
    _, thresh = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Find contours in the thresholded image
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Draw all contours on a copy of the original image with green color and thickness 2
    contour_image = image.copy()
    cv2.drawContours(contour_image, contours, -1, (0, 255, 0), 2)

    # Display the original image with contours
    cv2.imshow("Image with Contours", contour_image)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

1.4 Results & Analysis

ORIGINAL IMAGE

IMAGE WITH CONTOURS

1.5 Remarks

FACULTY SIGNATURE

Experiments

1.0 Experiment

Experiment No Date Planned Date Conducted Marks


12

TITLE: Write a program to detect a face/s in an image.

1.2 Theory / Hypothesis

 Import Libraries: Import the necessary libraries, mainly cv2 (OpenCV).


 Load the Haar Cascade Classifier: Use cv2.CascadeClassifier to load the Haar Cascade XML file for
face detection. This XML file (haarcascade_frontalface_default.xml) is provided by OpenCV and is
trained to detect frontal faces.
 Load and Preprocess Image: Load the image using cv2.imread() and convert it to grayscale using
cv2.cvtColor() since the face detection works on grayscale images.
 Detect Faces: Use face_cascade.detectMultiScale() to detect faces in the grayscale image. This
function returns a list of rectangles where faces are detected.
 Draw Rectangles: Iterate over the detected faces and draw rectangles around them using
cv2.rectangle().
 Display the Image: Display the image with rectangles drawn around the detected faces using
cv2.imshow(). Wait for the user to press any key (cv2.waitKey(0)) and then close all windows
(cv2.destroyAllWindows()).
 Example Usage: Replace "Image.png" with the path to your own image file.
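Internally, Haar cascades evaluate rectangle features in constant time using an integral image, where any rectangle sum reduces to four array lookups. A NumPy sketch of that underlying trick (a standalone illustration, not part of the detection program):

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)

# Integral image: ii[i, j] = sum of all pixels above and to the left of (i, j),
# padded with a zero row/column so lookups need no bounds checks
ii = np.zeros((5, 5))
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] from just four integral-image lookups
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

# Any rectangle sum matches direct summation over the original image
assert rect_sum(ii, 1, 1, 3, 4) == img[1:3, 1:4].sum()
print(rect_sum(ii, 1, 1, 3, 4))
```

This constant-time rectangle sum is what lets detectMultiScale test thousands of Haar features per window at interactive speed.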

1.3 Procedure / Program / Activity

import cv2

# Load the pre-trained Haar Cascade classifier for face detection
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Read the image
image_path = "Image.png"  # Replace "Image.png" with the path to your image
image = cv2.imread(image_path)

if image is None:
    print("Failed to load the image.")
else:
    # Convert the image to grayscale
    gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Detect faces in the image
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

    # Draw rectangles around the detected faces
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # Display the image with detected faces
    cv2.imshow("Image with Detected Faces", image)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

1.4 Results & Analysis

ORIGINAL IMAGE

IMAGE WITH DETECTED FACES

1.5 Remarks

FACULTY SIGNATURE
