Cgip Rep
CHAPTER 1
INTRODUCTION
Computer graphics is the creation of images using computers and, more generally, the representation and
manipulation of image data by a computer. The term computer graphics has been used in a broad sense to
describe "almost everything on computers that is not text or sound".
The development of computer graphics has made computers easier to interact with, better for understanding
and interpreting many types of data. Developments in computer graphics have had a profound impact on
many types of media and have revolutionized animation, movies and the video game industry.
A major use of computer graphics is in design processes, particularly for engineering and architectural
systems, although most products are now designed with computers. Generally referred to as CAD
(computer-aided design) or CADD (computer-aided drafting and design), these methods are now routinely
used in the design of buildings, automobiles, aircraft, watercraft, spacecraft, computers, textiles, home
appliances, and a multitude of other products. The manufacturing process is also tied to the computer
description of designed objects, so that the fabrication of a product can be automated using methods that
are referred to as CAM, computer-aided manufacturing.
Data Visualization:
Producing graphical representations for scientific, engineering, and medical data sets and processes is
another fairly new application of computer graphics, which is generally referred to as scientific
visualization. The term business visualization is used in connection with data sets related to commerce,
industry, and other nonscientific areas. Numerical computer simulations, for example, frequently
produce data files containing thousands or even millions of values. Similarly, satellite cameras and
other recording sources amass large data files faster than they can be interpreted. Other
visualization techniques include contour plots, renderings for constant-value surfaces or other spatial
regions, and specially designed shapes that are used to represent different data types.
Entertainment:
Television productions, motion pictures, and music videos routinely use computer-graphics methods.
Sometimes graphics images are combined with live actors and scenes, and sometimes the films are
completely generated using computer-rendering and animation techniques. Many TV series regularly
employ computer-graphics methods to produce special effects, such as the scene in Figure from the
television series Deep Space Nine. Some television programs also use animation techniques to combine
computer-generated figures of people, animals, or cartoon characters with the live actors in a scene or to
transform an actor's face into another shape. Many programs also employ computer graphics to generate
buildings, terrain features, or other backgrounds for a scene.
Computer Art:
Both fine art and commercial art make use of computer-graphics methods. Artists now have available a
variety of computer methods and tools, including specialized hardware, commercial software packages
(such as Lumena), symbolic mathematics programs (such as Mathematica), CAD packages, desktop
publishing software, and animation systems that provide facilities for designing object shapes and
specifying object motions. Example: the use of a paintbrush program that allows an artist to "paint"
pictures on the screen of a video monitor. A paintbrush system, with a Wacom cordless, pressure-
sensitive stylus, was used to produce the electronic painting. The stylus translates changing hand
pressure into variable line widths, brush sizes, and color gradations.
Image Processing:
The modification or interpretation of existing pictures, such as photographs and TV scans, is called
image processing. In computer graphics, a computer is used to create a picture. Image-processing
techniques, on the other hand, are used to improve picture quality, analyze images, or recognize visual
patterns for robotics applications. However, image-processing methods are often used in computer
graphics, and computer-graphics methods are frequently applied in image processing. Typically, a
photograph or other picture is digitized into an image file before image-processing methods are
employed. Then digital methods can be used to rearrange picture parts, to enhance color separations, or
to improve the quality of shading.

OpenGL (Open Graphics Library) is a standard specification defining a cross-language, cross-platform
API for writing applications that produce 2D and 3D computer graphics. The interface consists of over
250 different function calls which can be used to draw complex three-dimensional scenes from simple
primitives. OpenGL was developed by Silicon Graphics Inc. (SGI) in 1992 and is widely used in CAD,
virtual reality, scientific visualization, information visualization, and flight simulation.
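As a concrete illustration of the enhancement step described above, the following sketch stretches the contrast of a digitized picture using plain NumPy. The array here is a small synthetic stand-in for a real photograph, and its value range is an illustrative assumption, not data from this project:

```python
import numpy as np

# Synthetic 8-bit grayscale "photograph" with poor contrast:
# its values span only 100..150 out of the full 0..255 range.
img = np.linspace(100, 150, 25, dtype=np.uint8).reshape(5, 5)

# Contrast stretching: linearly remap the observed min..max to 0..255.
lo, hi = int(img.min()), int(img.max())
stretched = ((img.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

print(stretched.min(), stretched.max())  # 0 255
```

The same linear remapping is what many image editors call "levels" adjustment; it changes only the intensity scale, not the picture content.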
OpenGL
OpenGL has become a widely accepted standard for developing graphics applications. Most of our
applications will be designed to access OpenGL directly through functions in three libraries. Functions
in the main GL library have names that begin with the letters gl and are stored in a library usually
referred to as GL.
The second is the OpenGL Utility Library (GLU). This library uses only GL functions but contains code
for creating common objects and simplifying viewing. All functions in GLU can be implemented using the
core GL library. The GLU library is available in all OpenGL implementations; functions in the GLU
library begin with the letters glu.
The third is called the OpenGL Utility Toolkit (GLUT), which provides the minimum functionality that
should be expected in any modern windowing system.
1.2 Objective
The primary objective of this project is to create an automated iris segmentation system that can
accurately detect and segment the iris region from a given eye image. This involves dealing with various
challenges, such as variations in eye size, occlusions from eyelids and eyelashes, reflections, and
different lighting conditions.
Accuracy: To achieve high precision in detecting and segmenting the iris region, minimizing errors
and false detections.
Robustness: To develop an algorithm that can reliably segment the iris under diverse conditions,
including different eye sizes, shapes, and image qualities.
Efficiency: To create a system that performs segmentation quickly and can be integrated into real-
time applications.
Scalability: To ensure the method can be applied to large datasets and different iris recognition
systems.
Practical Application: To enhance the overall effectiveness of iris recognition technology in
various fields such as security, access control, and identity verification.
Dept of CSE, RRIT 2023 - 2024 4|Page
Iris Segmentation
CHAPTER 2
REQUIREMENT SPECIFICATION
The requirement specification for the iris segmentation project outlines the functional and non-
functional requirements necessary to achieve the project's aim. It serves as a comprehensive guide for
the development and implementation of the iris segmentation system.
Image Acquisition:
The system should support the input of high-resolution eye images in standard formats (e.g.,
JPEG, PNG, BMP).
The system should be capable of handling images taken under varying lighting conditions.
Preprocessing:
Develop a user-friendly interface for uploading images and viewing segmentation results.
Provide visualization tools for displaying the segmented iris region.
Performance:
The system should perform segmentation within a reasonable time frame, suitable for real-time
applications.
Ensure that the system can handle batch processing of multiple images efficiently.
Scalability:
The system should be scalable to handle large datasets and high volumes of image inputs.
Ensure the algorithm can be adapted for various iris recognition systems.
Reliability:
The system should provide consistent and accurate segmentation results under diverse conditions.
Implement error handling to manage invalid or corrupted image inputs.
Security:
Ensure the system is secure and can protect sensitive biometric data.
Implement data encryption and secure storage for image inputs and segmentation results.
Maintainability:
The codebase should be modular and well-documented to facilitate maintenance and future
enhancements.
Implement version control to manage updates and track changes to the system.
Compatibility:
Ensure the system is compatible with various operating systems (Windows, macOS, Linux).
Support integration with existing biometric recognition systems and databases.
Jupyter Notebook is an open-source web application that allows users to create and share documents
containing live code, equations, visualizations, and narrative text. It supports over 40 programming
languages, including Python, and is widely used in data science, machine learning, and scientific
computing for its interactive and collaborative features. Jupyter Notebooks facilitate reproducible
research and provide an excellent platform for teaching and learning programming concepts.
GLUT is the OpenGL Utility Toolkit, a window system independent toolkit for writing OpenGL
programs. It implements a simple windowing application programming interface (API) for OpenGL.
GLUT makes it considerably easier to learn about and explore OpenGL programming. GLUT provides a
portable API so you can write a single OpenGL program that works across all PC and workstation OS
platforms.
2.6 Libraries
NumPy: A library for efficient numerical computation, providing support for large, multi-
dimensional arrays and matrices.
Jupyter Notebook: A web-based interactive computing environment for working with Python
code, visualizations, and narrative text.
Tqdm: A library for displaying progress bars in Python, helping track the progress of loops and
iterations.
OpenCV-contrib-python: An extension of OpenCV, providing additional features and
functionalities for computer vision tasks.
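The role NumPy plays in this project, holding images as large multi-dimensional arrays, can be sketched as follows. The image size and threshold value are illustrative assumptions, not values from the project:

```python
import numpy as np

# A grayscale eye image is just a 2-D array of 8-bit intensities.
h, w = 4, 6
img = np.zeros((h, w), dtype=np.uint8)
img[1:3, 2:5] = 200          # a bright rectangular region (2 rows x 3 cols)

# Vectorized thresholding: no explicit Python loops needed.
mask = img > 128             # boolean array, True inside the bright region

print(mask.sum())            # number of pixels above the threshold: 6
```

OpenCV functions such as cv2.threshold operate on exactly this kind of array, which is why the two libraries combine so naturally.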
CHAPTER 3
DESIGN
Iris segmentation techniques represent an active topic of research within the research community. This
chapter gives an overview of the proposed segmentation method, which consists of, roughly speaking, eye
detection, limbic and then pupillary boundary localization, followed by upper and lower eyelid
detection. The interest in iris segmentation is fueled by iris recognition technology, where detection
of the region-of-interest (ROI) is the first (and one of the most important) steps in the overall
processing pipeline. By segmenting the iris from the input image, irrelevant data that would otherwise
interfere with the success of the recognition process is removed.
Figure: Flowchart of the proposed iris segmentation pipeline (eye detection, coarse eye region
selection, K-Means segmentation, Hough transforms, output CID_K-Means).
The flowchart describes a process for iris segmentation starting with eye detection using the AdaBoost algorithm.
If an eye is detected, the original image is clipped to obtain a coarse eye region; otherwise, the original image is
used directly. Image segmentation is then performed using K-Means clustering and edge detection to generate an
edge map. An elliptical Hough transform refines the eye region. The circular Hough transform is applied to locate
the limbic boundary. Finally, the output is a computed value, CID_K-Means, which indicates the
accuracy or confidence of the segmentation process.
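To make the circular Hough transform step of the pipeline concrete, here is a simplified, single-radius version written in plain NumPy rather than OpenCV's cv2.HoughCircles. The image size, radius, and synthetic edge map are illustrative assumptions, not values from the project:

```python
import numpy as np

def hough_circle_center(edge_map, radius):
    """Locate the center of a circle of known radius in a binary edge map.

    Every edge pixel casts votes for all candidate centers lying at
    distance `radius` from it; the accumulator cell with the most
    votes is taken as the circle center."""
    h, w = edge_map.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for y, x in zip(*np.nonzero(edge_map)):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # unbuffered accumulation
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic edge map: one circle of radius 20 centered at (50, 60).
edges = np.zeros((100, 120), dtype=np.uint8)
t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
edges[np.round(50 + 20 * np.sin(t)).astype(int),
      np.round(60 + 20 * np.cos(t)).astype(int)] = 1

print(hough_circle_center(edges, 20))  # close to (50, 60)
```

OpenCV's cv2.HoughCircles applies the same voting idea over a range of radii and uses gradient information for efficiency, which is what locating the limbic boundary relies on in practice.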
CHAPTER 4
IMPLEMENTATION
4.1 USER DEFINED FUNCTIONS
Specify the raster position for pixel operations and render the bitmap characters at the required
position.
void drawstring(int x,int y,char*string,void*font)
void display(void)
glClear (GLbitfield mask);
o mask – bitwise OR of masks that indicate the buffers to be cleared. The four masks are
GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_ACCUM_BUFFER_BIT
and GL_STENCIL_BUFFER_BIT.
o It clears buffers to preset values.
glClearColor (red, green, blue, alpha);
o red, green, blue, alpha – specify the red, green, blue, and alpha values used when the color
buffers are cleared. The initial values are all 0.
o It specifies clear values for the color buffers.
glutMainLoop ( );
o It enters the GLUT event processing group. This routine should be called at most once in a
GLUT program. Once called, this routine will never return. It will call as necessary any
callbacks that have been registered.
glutPostRedisplay ( );
o It marks the current window as needing to be redisplayed. On the next iteration of the
GLUT event loop, the window's display callback will be invoked.
CHAPTER 5
ANALYSIS
FUNCTIONS
A function is a named block of code with the property that it is reusable: it can be executed
from as many different points in a program as required.
Partial code for the various functions that have been used in the program is given below:
5.1 Code
import cv2
import numpy as np
# Load the input eye image (path as used in the original project)
img = cv2.imread("C:\\Users\\Suhas\\Downloads\\iris_segmentation_DIP-main\\aeval1.jpg",
                 cv2.IMREAD_COLOR)
if img is None:
    print("Could not read the input image")
else:
    # Grayscale conversion and median filtering to suppress noise
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_filtered = cv2.medianBlur(img_gray, 5)
    # Inverse threshold: the dark pupil/iris becomes foreground
    _, img_thresh = cv2.threshold(img_filtered, 70, 255, cv2.THRESH_BINARY_INV)
    cv2.imshow('Thresholding', img_thresh)
    # Erosion operation to remove small noise artifacts
    kernel = np.ones((3, 3), np.uint8)
    img_eroded = cv2.erode(img_thresh, kernel, iterations=1)
    # The two largest contours approximate the limbic and pupillary boundaries
    contours, _ = cv2.findContours(img_eroded, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    if len(contours) >= 2:
        outer_contour = contours[0]
        inner_contour = contours[1]
        # Mask of the iris region between the two boundaries
        mask = np.zeros_like(img_gray)
        cv2.drawContours(mask, [outer_contour], -1, 255, -1)
        cv2.drawContours(mask, [inner_contour], -1, 0, -1)
    else:
        print("Could not locate both iris boundaries")
    cv2.waitKey(0)
    cv2.destroyAllWindows()
CHAPTER 6
SNAPSHOTS
CHAPTER 7
CONCLUSION
Iris segmentation is a pivotal process in the field of biometric recognition, significantly impacting the
accuracy and reliability of iris recognition systems. By accurately isolating the iris from other eye
structures, such as the sclera, pupil, and eyelids, segmentation enhances the precision of biometric
matching algorithms. The development of robust segmentation techniques, whether through traditional
image processing methods or advanced machine learning models, addresses various challenges such as
occlusions, reflections, and varying lighting conditions. As technology advances, the integration of
more sophisticated algorithms and deep learning approaches holds promise for even more accurate and
efficient iris segmentation. Ultimately, successful iris segmentation contributes to the effectiveness and
security of biometric systems, facilitating their application in diverse areas including security, access
control, and identity verification.
BIBLIOGRAPHY
Reference Books
1. K.W. Bowyer, K. Hollingsworth, P.J. Flynn, Image understanding for iris biometrics: a
survey, Comput. Vision Image Understand. 110 (2) (2008) 281– 307.
2. J.G. Daugman, High confidence visual recognition of persons by a test of statistical
independence, IEEE Trans. Pattern Anal. Mach. Intell. 15 (11) (1993) 1148–1160.
3. Z. Zhao and A. Kumar. Towards more accurate iris recognition using deeply learned spatially
corresponding features. In ICCV, pages 22–29, 2017.
4. X. Tang, J. Xie, and P. Li. Deep convolutional features for iris recognition. In CCBR,
pages 391–400. Springer, 2017.
5. M. Arsalan, H.G. Hong, R.A. Naqvi, M.B. Lee, M.C. Kim, D.S. Kim, C.S. Kim, and K.R.
Park. Deep learning-based iris segmentation for iris recognition in visible light environment.
Symmetry, 9(11):263, 2017
List of Websites:
https://www.bing.com/search?
pglt=41&q=iris+segmentation&cvid=14f40628fcc04e46a117262bf211b861&gs_lcrp=EgZjaHJvbWUyBgg
AEEUYOTIGCAEQABhAMgYIAhAAGEAyBggDEAAYQDIGCAQQABhAMgYIBRAAGEAyBggGEA
AYQDIGCAcQRRg8MgYICBBFGDzSAQkyMDc1M2owajGoAgiwAgE&FORM=ANSPA1&PC=ASTS
Iris Segmentation Using Interactive Deep Learning | IEEE Journals & Magazine | IEEE Xplore