
Iris Segmentation

CHAPTER 1
INTRODUCTION
Computer graphics are graphics created using computers and, more generally, the representation and
manipulation of image data by a computer. The term computer graphics has been used in a broad sense to
describe "almost everything on computers that is not text or sound".

The development of computer graphics has made computers easier to interact with, better for understanding
and interpreting many types of data. Developments in computer graphics have had a profound impact on
many types of media and have revolutionized animation, movies and the video game industry.

The various applications of computer graphics are


 Graphs and charts
 Computer-Aided design
 Virtual-Reality environment
 Data Visualization
 Education and Training
 Computer Art
 Entertainment
 Image Processing
 Graphical User interfaces

Graphs and Charts:


An early application for computer graphics is the display of simple data graphs, usually plotted on a
character printer. Data plotting is still one of the most common graphics applications, but today one can
easily generate graphs showing highly complex data relationships for printed reports or for presentations
using 35 mm slides, transparencies, or animated videos. Graphs and charts are commonly used to
summarize financial, statistical, mathematical, scientific, engineering, and economic data for research
reports, managerial summaries, consumer information bulletins, and other types of publications.

Computer Aided Design:

A major use of computer graphics is in design processes—particularly for engineering and architectural
systems, although most products are now computer designed. Generally referred to as CAD, computer-
aided design, or CADD, computer-aided drafting and design, these methods are now routinely used in
the design of buildings, automobiles, aircraft, watercraft, spacecraft, computers, textiles, home
appliances, and a multitude of other products. The manufacturing process is also tied in to the computer
description of designed objects so that the fabrication of a product can be automated, using methods that
are referred to as CAM, computer-aided manufacturing.

Dept of CSE, RRIT 2023 - 2024 1|Page


Virtual Reality Environment:
It is a recent application of computer graphics used to create virtual-reality environments in
which a user can interact with the objects in a three-dimensional scene. Specialized hardware devices
provide three-dimensional viewing effects and allow the user to "pick up" objects in a scene.
Animations in virtual-reality environments are often used to train heavy-equipment operators or to
analyze the effectiveness of various cabin configurations and control placements. In a tractor simulation,
for example, this allows the designer to explore positions of the bucket or backhoe that might obstruct
the operator's view, which can then be taken into account in the overall tractor design.

Data Visualization:
Producing graphical representations for scientific, engineering, and medical data sets and processes is
another fairly new application of computer graphics, which is generally referred to as scientific
visualization. The term business visualization is used in connection with data sets related to commerce,
industry, and other nonscientific areas. Numerical computer simulations, for example, frequently
produce data files containing thousands and even millions of values. Similarly, satellite cameras and
other recording sources are amassing large data files faster than they can be interpreted. Other
visualization techniques include contour plots, renderings for constant-value surfaces or other spatial
regions, and specially designed shapes that are used to represent different data types.
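As an illustration of one such technique, the sketch below draws a contour plot of a made-up 2D scalar field using NumPy and Matplotlib (both of which also appear later in this report); the field and all parameter values are illustrative assumptions, not project data.

```python
# A minimal contour-plot sketch of a made-up 2D scalar field.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen so no display window is required
import matplotlib.pyplot as plt

# Sample f(x, y) = sin(x) * cos(y) on a regular grid
x = np.linspace(-3, 3, 200)
y = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, y)
Z = np.sin(X) * np.cos(Y)

fig, ax = plt.subplots()
cs = ax.contour(X, Y, Z, levels=8)      # lines of constant field value
ax.clabel(cs, inline=True, fontsize=8)  # label each contour with its level
ax.set_title("Contour plot of a sample scalar field")
fig.savefig("contours.png")
```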

Education and Training:


Computer-generated models of physical, financial, political, social, economic, and other systems are
often used as educational aids, including models of physical processes, physiological functions,
population trends, and equipment. For some training applications, special hardware systems are designed.
Examples of such specialized systems are the simulators for practice sessions or training of ship
captains, aircraft pilots, heavy-equipment operators, and air-traffic-control personnel. Some simulators
have no video screens; a flight simulator for instrument flying, for example, may have only a control
panel. Most simulators, however, provide screens for visual display of the external environment, with
multiple panels mounted in front of the simulator.

Entertainment:
Television productions, motion pictures, and music videos routinely use computer-graphics methods.
Sometimes graphics images are combined with live actors and scenes, and sometimes the films are
completely generated using computer-rendering and animation techniques. Many TV series regularly
employ computer-graphics methods to produce special effects, as in scenes from the television series
Deep Space Nine. Some television programs also use animation techniques to combine computer-generated
figures of people, animals, or cartoon characters with the live actors in a scene, or to transform an
actor's face into another shape. And many programs employ computer graphics to generate buildings,
terrain features, or other backgrounds for a scene.


Computer Art:
Both fine art and commercial art make use of computer-graphics methods. Artists now have available a
variety of computer methods and tools, including specialized hardware, commercial software packages
(such as Lumena), symbolic mathematics programs (such as Mathematica), CAD packages, desktop
publishing software, and animation systems that provide facilities for designing object shapes and
specifying object motions. An example is the use of a paintbrush program that allows an artist to "paint"
pictures on the screen of a video monitor. A paintbrush system with a Wacom cordless, pressure-
sensitive stylus was used to produce the electronic painting. The stylus translates changing hand
pressure into variable line widths, brush sizes, and color gradations.

Image Processing:
The modification or interpretation of existing pictures, such as photographs and TV scans, is called
image processing. In computer graphics, a computer is used to create a picture. Image- processing
techniques, on the other hand, are used to improve picture quality, analyze images, or recognize visual
patterns for robotics applications. However, image-processing methods are often used in computer
graphics, and computer-graphics methods are frequently applied in image processing. Typically, a
photograph or other picture is digitized into an image file before image- processing methods are
employed. Then digital methods can be used to rearrange picture parts, to enhance color separations, or
to improve the quality of shading OpenGL (Open Graphics Library) is a standard specification defining
a cross-language,cross-platform API for writing applications that produce 2D and 3D computer
graphics. The interface consists of over 250 different function calls which can be used to draw complex
three dimensional scenes from simple primitives. OpenGL was developed by Silicon Graphics Inc.
(SGI) in 1992 and is widely used in CAD, virtual reality, scientific visualization, information
visualization, and flight simulation.

OpenGL
OpenGL has become a widely accepted standard for developing graphics applications. Most of our
applications will be designed to access OpenGL directly through functions in three libraries. Functions
in the main GL library have names that begin with the letters gl and are stored in a library usually
referred to as GL.

The second is the OpenGL Utility Library (GLU). This library uses only GL functions but contains code
for creating common objects and simplifying viewing. All functions in GLU can be created from the
core GL library. The GLU library is available in all OpenGL implementations; functions in the GLU
library begin with the letters glu.

The third is called the OpenGL Utility Toolkit (GLUT), which provides the minimum functionality that
should be expected in any modern windowing system.


Fig 1: Library organization of OpenGL

1.1 Overview of the project


Iris segmentation is a crucial step in the field of biometric recognition, particularly in iris recognition
systems, which are used for security and identification purposes. This project focuses on developing an
efficient and accurate method for segmenting the iris region from an eye image, which involves isolating
the iris from other parts of the eye, such as the sclera, pupil, and eyelashes.

1.2 Objective
The primary objective of this project is to create an automated iris segmentation system that can
accurately detect and segment the iris region from a given eye image. This involves dealing with various
challenges, such as variations in eye size, occlusions from eyelids and eyelashes, reflections, and
different lighting conditions.

1.3 Aim of the Project


The primary aim of the Iris Segmentation project is to develop an automated, efficient, and highly
accurate method for isolating and extracting the iris region from eye images. This involves creating a
robust system that can handle various challenges such as occlusions, reflections, and varying lighting
conditions, thereby enhancing the reliability and performance of biometric identification and recognition
systems.

 Accuracy: To achieve high precision in detecting and segmenting the iris region, minimizing errors
and false detections.
 Robustness: To develop an algorithm that can reliably segment the iris under diverse conditions,
including different eye sizes, shapes, and image qualities.
 Efficiency: To create a system that performs segmentation quickly and can be integrated into real-
time applications.
 Scalability: To ensure the method can be applied to large datasets and different iris recognition
systems.
 Practical Application: To enhance the overall effectiveness of iris recognition technology in
various fields such as security, access control, and identity verification.

CHAPTER 2

REQUIREMENT SPECIFICATION
The requirement specification for the iris segmentation project outlines the functional and non-
functional requirements necessary to achieve the project's aim. It serves as a comprehensive guide for
the development and implementation of the iris segmentation system.

2.1 Functional Requirements

In software engineering, a functional requirement defines a function of a software system or its
component. A function is described as a set of inputs, the behavior, and outputs.
Functional requirements may be calculations, technical details, data manipulation and processing, and
other specific functionality that defines what a system is supposed to accomplish. Behavioral
requirements describing all the cases where the system uses the functional requirements are captured in
use cases.

 Image Acquisition:

The system should support the input of high-resolution eye images in standard formats (e.g.,
JPEG, PNG, BMP).
The system should be capable of handling images taken under varying lighting conditions.
 Preprocessing:

Implement noise reduction techniques to improve image quality.


Apply contrast enhancement methods to ensure clear visibility of the iris region.
Normalize images to a standard size and format.
 Segmentation Algorithm:

The system should accurately detect the boundaries of the iris.


Implement edge detection methods (e.g., Canny edge detector) to identify iris contours.
Use circular Hough transform to detect circular boundaries of the iris and pupil.
Optionally, integrate deep learning models (e.g., CNNs) for improved segmentation accuracy.
 Post-Processing:

Apply morphological operations to refine the segmented iris boundary.


Remove small artifacts and smooth the boundary for precise segmentation.
 User Interface:

Develop a user-friendly interface for uploading images and viewing segmentation results.
Provide visualization tools for displaying the segmented iris region.


2.2 Non-Functional Requirements

 Performance:
The system should perform segmentation within a reasonable time frame, suitable for real-time
applications.
Ensure that the system can handle batch processing of multiple images efficiently.
 Scalability:
The system should be scalable to handle large datasets and high volumes of image inputs.
Ensure the algorithm can be adapted for various iris recognition systems.
 Reliability:
The system should provide consistent and accurate segmentation results under diverse conditions.
Implement error handling to manage invalid or corrupted image inputs.
 Security:
Ensure the system is secure and can protect sensitive biometric data.
Implement data encryption and secure storage for image inputs and segmentation results.
 Maintainability:
The codebase should be modular and well-documented to facilitate maintenance and future
enhancements.
Implement version control to manage updates and track changes to the system.
 Compatibility:
Ensure the system is compatible with various operating systems (Windows, macOS, Linux).
Support integration with existing biometric recognition systems and databases.

2.3 Details of the software


Here, the coding of our project is done in Jupyter Notebook, an open-source interactive development
environment, together with OpenGL (Open Graphics Library), a standard specification for producing 2D
and 3D computer graphics, and the OpenGL Utility Toolkit (GLUT), a library of utilities for OpenGL
programs.

2.3.1 Jupyter Notebook

Jupyter Notebook is an open-source web application that allows users to create and share documents
containing live code, equations, visualizations, and narrative text. It supports over 40 programming
languages, including Python, and is widely used in data science, machine learning, and scientific
computing for its interactive and collaborative features. Jupyter Notebooks facilitate reproducible
research and provide an excellent platform for teaching and learning programming concepts.


2.3.2 OpenGL and GLUT


OpenGL (Open Graphics Library) is a standard specification defining a cross-language, cross-platform
API for writing applications that produce 2D and 3D computer graphics, describing a set of functions
and the precise behaviors that they must perform. From this specification, hardware vendors create
implementations: libraries of functions written to match the functions stated in the OpenGL
specification, making use of hardware acceleration where possible. Hardware vendors have to pass
specific conformance tests to qualify their implementation as an OpenGL implementation.

GLUT is the OpenGL Utility Toolkit, a window-system-independent toolkit for writing OpenGL
programs. It implements a simple windowing application programming interface (API) for OpenGL.
GLUT makes it considerably easier to learn about and explore OpenGL programming, and it provides a
portable API so that a single OpenGL program can work across all PC and workstation OS platforms.

2.4 Software requirements

 OPERATING SYSTEM : Windows 98, Windows XP, Windows Vista, Windows 7
 BACK END : Jupyter Notebook
 CODING LANGUAGE : Python

2.5 Hardware requirements


 SYSTEM : Pentium IV 2.4 GHz or above

 HARD DISK : 40 GB, 80 GB, 160 GB or above

 MONITOR : 15 VGA colour

 RAM : 256 MB, 512 MB, 1 GB or above

2.6 Libraries

 NumPy: A library for efficient numerical computation, providing support for large, multi-dimensional
arrays and matrices.

 Jupyter Notebook: A web-based interactive computing environment for working with Python code,
visualizations, and more.

 Tqdm: A library for displaying progress bars in Python, helping track the progress of loops and
iterations.
 OpenCV-contrib-python: An extension of OpenCV, providing additional features and
functionalities for computer vision tasks.


CHAPTER 3

DESIGN

Fig 2: Flow diagram of the iris segmentation.


The flowchart outlines an iris segmentation process starting with eye detection and localization of the
limbic boundary using Algorithm A. If the result isn't satisfactory or the eye is closed, Algorithm B is
used. The better result between the two algorithms is accepted as the final limbic boundary.
Subsequently, the pupillary boundary and eyelids are localized, followed by specular highlight removal,
producing the final binary image.

Iris segmentation is an active topic of research within the biometrics community. Roughly speaking, the
proposed segmentation method consists of eye detection, limbic and then pupillary boundary localization,
followed by upper and lower eyelid detection. The interest in iris segmentation is fueled by iris
recognition technology, where the detection of the region of interest (ROI) is the first (and one of the
most important) steps in the overall processing pipeline. By segmenting the iris from the input image,
irrelevant data that would otherwise interfere with the success of the recognition process is removed.



Fig 3: Flow diagram of Algorithm A that uses K-Means clustering

The flowchart describes a process for iris segmentation starting with eye detection using the AdaBoost algorithm.
If an eye is detected, the original image is clipped to obtain a coarse eye region; otherwise, the original image is
used directly. Image segmentation is then performed using K-Means clustering and edge detection to generate an
edge map. An elliptical Hough transform refines the eye region, and an improved circular Hough transform is
applied to locate the limbic boundary. Finally, the output is a computed value, CID_K-Means, which indicates the
confidence of the segmentation result.


CHAPTER 4
IMPLEMENTATION
4.1 USER DEFINED FUNCTIONS

 bitmap_output(int x, int y, char *string, void *font)

Specifies the raster position for pixel operations and renders the bitmap characters at the required
position.
 void drawstring(int x, int y, char *string, void *font)

Used to display text.

 void display(void)

It is the user-defined display function.

 void myReshape(GLsizei w, GLsizei h)

It defines what to do when the window is resized.

 void keys(unsigned char key, int x, int y)


It is used to get the filename from the keyboard.

4.2 BUILT IN FUNCTIONS

 glClear(GLbitfield mask);

o mask – Bitwise OR of masks that indicate the buffers to be cleared. The four masks are
GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_ACCUM_BUFFER_BIT
and GL_STENCIL_BUFFER_BIT.
o It clears buffers to preset values.

 glClearColor(GLclampf red, GLclampf green, GLclampf blue, GLclampf alpha);

o red, green, blue, alpha – specify the red, green, blue and alpha values used when the color
buffers are cleared. The initial values are all 0.
o It specifies clear values for the color buffers

 glutCreateWindow (char *name);

o name – ASCII character string for use as window name.

o It creates a top-level window.

 glutInitWindowSize(int width, int height);

o width – width in pixels.

o height – height in pixels.

o It is used to set the initial window size.

 glutMainLoop ( );

o It enters the GLUT event processing group. This routine should be called at most once in a
GLUT program. Once called, this routine will never return. It will call as necessary any
callbacks that have been registered.

 glutPostRedisplay ( );

o It marks the current window as needing to be redisplayed.

 glutKeyboardFunc(void (*func)(unsigned char key, int x, int y));

o glutKeyboardFunc sets the keyboard callback for the current window.

 glutIdleFunc (void (*func) (void));

o It sets the global idle callback.

 glColor3f(GLfloat red, GLfloat green, GLfloat blue);

o red, green, blue – specify new red, green, and blue values for the current color.

o It sets the current color


CHAPTER 5

ANALYSIS
FUNCTIONS
A function is a named block of code that is reusable: it can be executed from as many different
points in a program as required.

The partial code of the various functions used in the program is:

5.1 Code
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image
img = cv2.imread("C:\\Users\\Suhas\\Downloads\\iris_segmentation_DIP-main\\aeval1.jpg",
                 cv2.IMREAD_COLOR)

# Check the dimensions of the image
print(f"Image dimensions: {img.shape}")

# Convert the image to grayscale
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Calculate the mean and standard deviation of the grayscale image
mean, std_dev = cv2.meanStdDev(img_gray)

# Check if the image has noise and apply appropriate filtering
if std_dev[0][0] > 10:
    img_filtered = cv2.medianBlur(img_gray, 5)
else:
    img_filtered = cv2.GaussianBlur(img_gray, (5, 5), 0)

# Apply adaptive thresholding to the filtered image
img_thresh = cv2.adaptiveThreshold(img_filtered, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)

# Apply Canny edge detection
edges = cv2.Canny(img_filtered, threshold1=30, threshold2=100)

# Display the filtered and processed images
cv2.imshow('Original Image', img)
cv2.imshow('Filtered Image', img_filtered)
cv2.imshow('Thresholding', img_thresh)
cv2.imshow('Canny Edge Detection', edges)

# Erosion operation
kernel = np.ones((3, 3), np.uint8)
erosion = cv2.erode(img_thresh, kernel, iterations=1)

# Find circles in the eroded image
circles = cv2.HoughCircles(erosion, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=30,
                           minRadius=1, maxRadius=40)

# Draw circles on the original image
if circles is not None:
    circles = np.round(circles[0, :]).astype("int")
    for (x, y, r) in circles:
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)

# Display the image with detected circles
cv2.imshow('Detected Circles', img)

# Find and draw contours
ret, im = cv2.threshold(erosion, 100, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(im, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
con = cv2.drawContours(img.copy(), contours, -1, (0, 255, 0), 2)

# Convert the image to grayscale
gray = cv2.cvtColor(con, cv2.COLOR_BGR2GRAY)

# Apply a threshold to convert the grayscale image to binary

ret, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)

# Find contours in the binary image
contours, hierarchy = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Process contours to extract the iris
if len(contours) >= 2:
    # Sort contours by area
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    outer_contour = contours[0]
    inner_contour = contours[1]

    # Create a mask and apply bitwise operations
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [outer_contour], 0, 255, thickness=cv2.FILLED)
    cv2.drawContours(mask, [inner_contour], 0, 0, thickness=cv2.FILLED)
    result = cv2.bitwise_and(gray, gray, mask=mask)

    # Display the result
    cv2.imshow('Segmented Iris', result)
else:
    print("Iris not found")

# Wait for user input and close windows
cv2.waitKey(0)
cv2.destroyAllWindows()


CHAPTER 6

SNAPSHOTS

Fig 4: Original Image

Fig 5: Filtered Image


Fig 6: Thresholding Image

Fig 7: Canny Edge Detection Image


Fig 8: Detected Circle Image

Fig 9: Segmented Iris Image


CHAPTER 7

CONCLUSION
Iris segmentation is a pivotal process in the field of biometric recognition, significantly impacting the
accuracy and reliability of iris recognition systems. By accurately isolating the iris from other eye
structures, such as the sclera, pupil, and eyelids, segmentation enhances the precision of biometric
matching algorithms. The development of robust segmentation techniques, whether through traditional
image processing methods or advanced machine learning models, addresses various challenges such as
occlusions, reflections, and varying lighting conditions. As technology advances, the integration of
more sophisticated algorithms and deep learning approaches holds promise for even more accurate and
efficient iris segmentation. Ultimately, successful iris segmentation contributes to the effectiveness and
security of biometric systems, facilitating their application in diverse areas including security, access
control, and identity verification.



