Unit 1 Graphics1

The document provides an overview of computer graphics (CG), detailing its definition, types (passive and active), and applications across various fields such as science, entertainment, and engineering. It explains the components of CG systems, including frame buffers, display controllers, and output devices, as well as the importance of interactive computer graphics in training and simulations. Additionally, it covers input devices, hardcopy devices, graphics networks, and the role of graphics on the internet, emphasizing their significance in modern technology.


UNIT 1

INTRODUCTION TO COMPUTER GRAPHICS
Definition and Importance of Computer Graphics
The term computer graphics (CG) describes the creation and manipulation of images or pictures with the help of computers.
Graphics can be two- or three-dimensional.

There are two types of computer graphics :

1) Passive Computer Graphics (Non-interactive Computer Graphics)


2) Active Computer Graphics (Interactive Computer Graphics)
• The major product of computer graphics is a picture. With the help of CG,
pictures can be represented in 2D and 3D space.
• Many applications change the size and orientation of parts of the displayed picture. Such transformations, i.e. making pictures grow, shrink, rotate, etc., can be achieved through CG.
• Pictures are often too big to be displayed in their entirety. With the help of CG, a technique called clipping can be used to select just those
parts of the picture that lie on the screen and to discard the rest.
• CG is in daily use in the field of science, engineering, medicine,
entertainment, advertising, the graphic arts, the fine arts, business, education
etc.
• Many industries depend on the technologies provided by CG: engineers can draw their circuits in a much shorter time,
• architects can explore alternative solutions to design problems,
• molecular biologists can display pictures of molecules and study their structure,
• town planners and transportation engineers use computer-generated maps that display data useful in their planning work, etc.
Interactive Computer Graphics
• Interactive computer graphics (ICG) provides two-way communication between the computer and the user.
The various applications of ICG are as follows.
• Using ICG system the integrated electronic circuits which are very complex can be
drawn in a much shorter time.
• It is very useful in training of the pilots as they spend much of their training on the
ground at the controls of a flight simulator and not in a real aircraft.
• There are many tasks that can be made easier and less expensive by the use of ICG. The
effectiveness of ICG depends on the speed with which the user can absorb the displayed
information.
1) Frame Buffer
The images that are to be displayed are stored in a frame buffer in the form of matrix of
intensity values.
The frame buffer contains the image stored in binary form as a matrix of 0's and 1's representing the pixels: 0 indicates a dark pixel and 1 a lit pixel.
The frame buffer holds the set of intensity values for all the screen points.
The intensity values stored in the frame buffer are retrieved and painted on the screen one row at
a time. Each such row is called a scan line.
2) Display Controller
The Display Controller passes the contents of frame buffer to the T.V. Monitor.
Display Controller reads successive bytes of data from the frame buffer & then converts 0’s
and 1’s into the corresponding video signal.
These signals are fed to the T.V. Monitor.
3) T.V. Monitor
The T.V. Monitor then produces a black-and-white pattern on the screen.
To represent a new pattern of pixels, or to make changes to the displayed picture, the frame buffer contents must be modified.
COMPUTER GRAPHICS APPLICATION
Almost any field can make some use of computer graphics, but the major consumers of computer graphics technology
include the following industries:
Video games increasingly use sophisticated 3D models and rendering algorithms.
Cartoons are often rendered directly from 3D models. Many traditional 2D cartoons use backgrounds rendered from 3D
models, which allows a continuously moving viewpoint without huge amounts of artist time.
Visual effects use almost all types of computer graphics technology. Almost every modern film uses digital compositing to
superimpose backgrounds with separately filmed foregrounds. Many films also use 3D modeling and animation to create
synthetic environments, objects, and even characters that most viewers will never suspect are not real.
Animated films use many of the same techniques that are used for visual effects, but without necessarily aiming for images
that look real.
CAD/CAM stands for computer-aided design and computer-aided manufacturing. These fields use computer technology to
design parts and products on the computer and then, using these virtual designs, to guide the manufacturing process. For
example, many mechanical parts are designed in a 3D computer modeling package and then automatically produced on a
computer-controlled milling device.
Simulation can be thought of as accurate video gaming. For example, a flight simulator uses sophisticated 3D
graphics to simulate the experience of flying an airplane. Such simulations can be extremely useful for initial
training in safety-critical domains such as driving, and for scenario training for experienced users such as
specific fire-fighting situations that are too costly or dangerous to create physically.
Medical imaging creates meaningful images of scanned patient data. For example, a computed tomography
(CT) dataset is composed of a large 3D rectangular array of density values. Computer graphics is used to create
shaded images that help doctors extract the most salient information from such data.
Information visualization creates images of data that do not necessarily have a “natural” visual depiction. For
example, the temporal trend of the price of ten different stocks does not have an obvious visual depiction, but
clever graphing techniques can help humans see the patterns in such data.
Presentation graphics: In applications like summarizing of data of financial, statistical, mathematical,
scientific and economic research reports, presentation graphics are used. It increases the understanding using
visual tools like bar charts, line graphs, pie charts and other displays.
OUTPUT DEVICES:
As shown in the figure above, a CRT consists of an electron gun, a focusing system, deflection plates,
and a phosphor-coated screen.
The electron gun is the primary component of a CRT. When it is heated by directing a current
through it, the gun emits a beam of electrons that passes through focusing and deflection
systems, which direct the beam toward specified positions on the phosphor-coated screen.
The focusing system in a CRT is needed to force the electron beam to converge into a
small spot as it strikes the phosphor.
There are two pairs of deflection plates - Horizontal deflection plates and vertical
deflection plates.
One pair of plates is mounted horizontally to control the vertical deflection, and the other
pair is mounted vertically to control horizontal deflection.
The beam passes between the two pairs of deflection plates and is positioned on the screen.
The phosphor then emits a small spot of light at each position contacted by the electron
beam.
Because the light emitted by the phosphor fades very rapidly, some method is needed for
maintaining the screen picture.
One Way to keep the phosphor glowing is to redraw the picture repeatedly by quickly
directing the electron beam back over the same points. This type of display is called a
refresh CRT.
In CRT monitors there are two techniques of displaying images:
1) Raster scan displays
2) Random scan displays
What is the Beam Penetration Method?
It’s a technique used in monochrome CRT monitors to display multiple colors, even though the screen itself
isn’t a full-color display.

🧪 Color Control:
•Low beam intensity → only the top phosphor layer glows (e.g., green).
•Higher beam intensity → penetrates deeper and excites the lower layer (e.g., red).
•Intermediate intensity → can make both layers glow, blending colors (like orange or yellow).

💡 How It Works:
•The screen is coated with phosphor layers that glow in different colors (usually red and green) when hit by
an electron beam.
•By controlling how deep the electron beam penetrates the phosphor coating, the monitor can produce
different colors.
3D Viewing Devices

3D displays are capable of conveying depth to the viewer. The most common type of 3D
display is the stereoscopic display, which works by showing a different image to each eye.
This creates a sense of depth: the brain processes the two separate images and merges
them to form a three-dimensional effect.
A 3D display can be classified into several categories −

•Stereoscopic displays − These provide a basic 3D effect and are widely used in devices
such as virtual reality (VR) headsets.
•Holographic displays − These create a more realistic 3D effect by combining
stereoscopic methods with accurate focal lengths. Unlike stereoscopic displays,
holographic displays can show a true 3D image that can be visible from different angles.
Head-Mounted Displays
• Head-mounted displays (HMDs) are advanced 3D devices commonly used
for virtual reality (VR) experiences. These displays consist of two small screens placed
close to the eyes, each showing a different image. Magnifying lenses enlarge the
images, and the stereoscopic effect is achieved.
Many modern HMDs come with head-tracking technology, allowing users to "look
around" in a virtual world simply by moving their heads. This eliminates the need
for external controllers. VR headsets are a great example of head-mounted displays
and are popular in gaming, simulations, and virtual tours.
• Graphics Workstation
A graphics workstation in computer graphics (CG) is a high-performance computer system specifically
designed for rendering, modeling, and processing graphics-intensive tasks. It’s used by professionals in
fields like animation, game design, CAD (computer-aided design), architecture, and scientific
visualization.
Key Features of a Graphics Workstation:
1.Powerful GPU (Graphics Processing Unit):
•Handles rendering, shading, and 3D transformations.
2.High-Performance CPU:
•Manages the logic, simulation, and multitasking for complex CG software (e.g., Blender,
Maya, 3ds Max).
•Often multi-core for parallel processing.
3.Large RAM Capacity:
•Needed for handling big textures, scenes, and simulations (8GB–128GB or more).
4.High-Resolution Display(s):
•Supports precise color accuracy and detail, often using 4K or better monitors.
5.Large and Fast Storage:
6.Specialized Software:

Uses of Graphics Workstations:


•3D Modeling & Animation
•Video Editing & VFX
•Scientific Visualization (e.g., molecular models)
•Architectural Rendering
•Game Development
• Viewing systems

In computer graphics, a viewing system refers to the process and components involved in projecting a 3D scene
onto a 2D screen, allowing users to visualize 3D objects correctly. It involves defining how a virtual camera
views the scene and how objects are transformed from world coordinates to screen coordinates.
Main Components of a Viewing System:
1.Modeling Coordinates (Object Space):
1. The local coordinate system of an individual object.
2.World Coordinates:
1. All objects are placed in a global scene using transformations (translate, rotate, scale).
3.Viewing Coordinates (Camera Space):
1. The scene is transformed based on the camera or viewer’s position and orientation.
4.Projection Coordinates:
1. 3D coordinates are projected onto a 2D plane using either:
1. Perspective Projection: mimics human vision, with depth.
2. Orthographic Projection: no perspective distortion (used in CAD, engineering).
5.Normalized Device Coordinates (NDC):
1. Coordinates are mapped to a standardized cube (usually -1 to 1 in all axes).
6.Viewport Transformation (Screen Space):
1. NDCs are scaled and mapped to the actual screen resolution for display.
Type                  Description                                                   Use Case
Perspective Viewing   Objects appear smaller as they get further away (realistic).  3D games, simulations
Orthographic Viewing  All objects maintain size regardless of depth.                CAD, architecture
Stereo Viewing        Two views for left and right eyes for 3D depth.               VR, 3D movies
Axonometric Views     Includes isometric, dimetric, and trimetric projections.      Technical illustration
Input Devices and Input Primitives

In computer graphics, input devices and input primitives are essential for user interaction with
graphical systems, such as drawing, selecting, manipulating, or navigating objects in a scene.
Input Devices: These are the hardware tools that allow users to give input to the graphics system.

Common Input Devices:

Device              Description
Mouse               Used for pointing, clicking, dragging in 2D/3D space.
Keyboard            For text input and shortcut commands.
Graphics Tablet     Allows freehand drawing and pressure sensitivity.
Touchscreen         Combines display and touch input (used in tablets, phones).
Joystick / Gamepad  For navigating or controlling objects, mainly in simulations or games.
Light Pen           An older device for drawing directly on the screen.
Trackball           Similar to a mouse but stays stationary.
3D Input Devices    Devices like 3D mice or VR controllers for navigating in 3D space.
Input Primitives

These are the basic input operations or events that a user can perform through input devices in a graphics
system. They are recognized and processed by the software.

Primitive   Description
Locator     Returns a position (x, y) or (x, y, z). Used for pointing.
Pick        Selects a graphical object on the screen (e.g., clicking a shape).
Stroke      Series of positions (a path or shape). Used for drawing.
String      Text input via keyboard.
Choice      Selection from a set of options (e.g., menus).
Valuator    Returns a real number (e.g., slider, scroll wheel input).
• Hardcopy Devices in Computer Graphics

Hardcopy devices are output devices that produce permanent, physical copies of digital images, graphics, or
documents. In computer graphics, they're used to output charts, technical drawings, designs, or rendered images
for reports, presentations, and manufacturing.
Types of Hardcopy Devices:
1. Printers
Used to print digital graphics or images on paper.
•Types:
• Inkjet Printers – Good for color images, photos.
• Laser Printers – Fast, sharp text and vector graphics.
• Dot Matrix Printers – Older tech, low resolution, used for forms.
• Thermal Printers – Common in receipts, labels, and portable devices.
2. Plotters
Specialized printers used for printing vector graphics like blueprints, CAD drawings, and maps.
•Types:
• Drum Plotter – Paper moves around a drum; pens move across.
• Flatbed Plotter – Paper remains stationary; pens move on X and Y axes.
• Inkjet Plotter – Combines inkjet technology with plotting for large-scale prints.
3. 3D Printers
Used to create physical 3D models from 3D digital graphics.
•Converts 3D models into layers and prints them using plastic, resin, or other
materials.
•Popular in product design, prototyping, and architecture.

Uses of Hardcopy Devices in CG:

•Printing technical drawings (architecture, engineering).


•Producing high-quality color visuals (advertising, presentations).
•Creating storyboards and animation frames.
•Printing design prototypes and 3D models.
Graphics Network in Computer Graphics

A graphics network refers to a system or environment where multiple computers or devices are
interconnected to create, process, share, and display graphics or graphical data. It's especially important in
fields like CAD, simulation, gaming, animation studios, and collaborative visualization.

Key Components of a Graphics Network:


1.Workstations/Clients:
1. Individual computers used by designers, engineers, or artists to create or manipulate graphics.
2.Servers:
1. Central systems that store graphical data, render images, or manage shared projects.
3.Display Devices:
1. High-resolution monitors, VR headsets, or large-scale display walls used for rendering and viewing.
4.Input Devices:
1. Used to interact with the graphics (e.g., drawing tablets, 3D mice, VR controllers).
5.Communication Network:
1. The underlying network (LAN, WAN, internet) that connects all components for data sharing.
Functions of a Graphics Network:

Function                Description
Remote Rendering        Rendering graphics on a powerful remote server instead of a local machine.
Resource Sharing        Sharing large image libraries, textures, or 3D models among users.
Collaborative Design    Multiple users work on the same graphical project in real time (as in animation studios).
Distributed Processing  Dividing heavy rendering tasks among several machines for faster output.
Streaming Graphics      Real-time graphics or games streamed to remote devices (e.g., cloud gaming).
Graphics on the Internet

Graphics on the internet play a huge role in making websites and web applications interactive, engaging,
and visually appealing. From static images to dynamic 3D content, internet graphics are used across
platforms for design, gaming, education, data visualization, and more.
Types of Graphics on the Internet:
1.Raster Graphics
1. Made of pixels.
2. Formats: JPEG, PNG, GIF, WebP
3. Good for photos, textures, and screenshots.
2.Vector Graphics
1. Made of paths and curves (scalable without loss of quality).
2. Formats: SVG, PDF
3. Used for logos, icons, illustrations.
3.3D Graphics
1. Rendered in real-time using WebGL or WebGPU.
2. Used in online games, virtual tours, simulations.
4.Animated Graphics
1. Includes GIFs, Lottie files (JSON-based animations), CSS animations, and HTML5 Canvas animations.
2. Common in UI/UX, advertisements, and social media.
5.Interactive Graphics
1. Charts, maps, and data visualizations using JavaScript libraries:
Computer Graphics Software

Computer graphics software refers to programs and tools used to create, edit, render, and manipulate visual
content—ranging from simple 2D images to complex 3D animations and simulations.

There are two broad classifications for computer-graphics software


1. Special-purpose packages: designed for nonprogrammers.
Example: packages that generate pictures, graphs, or charts, painting programs, or CAD systems for some application area, used without
worrying about the underlying graphics procedures.
2. General programming packages: general programming package provides a library of graphics functions that
can be used in a programming language such as C, C++, Java, or FORTRAN.
Example: GL (Graphics Library), OpenGL, VRML (Virtual-Reality Modeling Language), Java 2D And Java 3D
Introduction to OpenGL

What is OpenGL?
OpenGL (Open Graphics Library) is a cross-platform, open standard API (Application Programming
Interface) used for rendering 2D and 3D graphics.
Think of it as a toolbox that helps your computer or application talk to the graphics hardware (GPU) so you
can display things like 3D models, textures, lighting effects, animations, and more.

Why Use OpenGL?


•To create interactive 3D applications (like games, simulations, visualizations).
•It gives you control over the GPU, letting you optimize rendering.
•It's supported on many platforms: Windows, Mac, Linux, and more.

What Can You Do with OpenGL?


•Draw shapes: points, lines, triangles, etc.
•Apply textures (like wrapping an image on a 3D object).
•Add lighting and shadows.
•Handle camera movement and perspective.
•Create 3D animations and effects.
OpenGL Function Naming Rules
•All function names start with gl.
•Each word in the function name starts with a capital letter.
Examples:
•glBegin
•glClear
•glCopyPixels
•glPolygonMode

OpenGL Constants
•All constants start with GL_ (in ALL CAPS).
•Words are all uppercase, separated by underscores.
Examples:
•GL_2D
•GL_RGB
•GL_CCW
•GL_POLYGON
•GL_AMBIENT_AND_DIFFUSE
OpenGL Data Types
•All data types start with GL and use lowercase for the actual type.
•They make sure data is the same size across all computers.
Examples:
•GLbyte → 8-bit integer
•GLshort → 16-bit integer
•GLint → 32-bit integer
•GLfloat → floating point number
•GLdouble → double precision float
•GLboolean → true/false

Using Arrays in OpenGL


•Some OpenGL functions let you use arrays to pass multiple values.
•This is helpful for things like coordinates (e.g., x, y, z) or colors (e.g., r, g, b).

GLfloat point[3] = {1.0, 2.0, 3.0};


glVertex3fv(point);
•This creates an array called point with three float values: 1.0, 2.0, and 3.0.
•These represent the x, y, z coordinates of a point in 3D space.
•GLfloat is just OpenGL’s way of saying "float" (so it's portable across systems).

glVertex3fv(point); tells OpenGL to use the values in the array as a 3D vertex (a point in space).
glVertex3f(x, y, z) would take three individual float values.
glVertex3fv(array) takes an array of three float values instead (the v stands for vector or array).
OpenGL Associated Libraries
GLU (OpenGL Utility Library)

•Adds extra functions not in the core OpenGL.


•Helps with:
•Viewing & projection setup
•Drawing complex shapes (like curves and surfaces)
•Handling quadrics, B-splines, etc.
•All functions start with glu.

Window System Libraries


OpenGL alone can’t create windows, so different platforms use special interfaces:

Platform Interface Prefix


X Window System GLX glX
Apple macOS AGL agl
Windows WGL wgl
IBM OS/2 PGL pgl
GLUT (OpenGL Utility Toolkit)
•Makes OpenGL programs device-independent.
•Used for:
•Creating windows, handling input (mouse/keyboard), and drawing shapes like spheres, cones, etc.
•All functions start with glut; works across platforms.

Required Header Files


•For Windows:
#include <windows.h>
#include <GL/gl.h>
#include <GL/glu.h>
•If using GLUT (simpler & portable):
#include <GL/glut.h>
•For Apple macOS:
#include <GLUT/glut.h>
•C++ standard headers:
#include <cstdio>   // for printf
#include <cstdlib>  // for exit
#include <cmath>    // for math functions
✅ GLUT Initialization Steps
1.Initialize GLUT:
glutInit(&argc, argv);
2.Set display mode (optional but recommended):
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
•GLUT_SINGLE → Single buffering
•GLUT_RGB → RGB color mode
3.Set initial window position:
glutInitWindowPosition(50, 100); // x=50, y=100 from top-left corner
4.Set initial window size:
glutInitWindowSize(400, 300); // width=400px, height=300px
5.Create display window with title:
glutCreateWindow("An Example OpenGL Program");
6.Define what to draw using a function:
glutDisplayFunc(lineSegment); // Your drawing function
7.Start main event-processing loop:
glutMainLoop();
•Keeps the window open
•Waits for input (mouse/keyboard)
•Non-interactive programs just keep displaying the picture
📝 Extras
•GLUT makes OpenGL programs platform-independent.
•You don’t need to include gl.h and glu.h if using glut.h.
•The OpenGL origin (0,0) for screen coordinates is top-left.

Coordinate Reference Frames


Describing a Picture in Computer Graphics
•Coordinate System: Use a Cartesian coordinate system (2D or 3D), called the world-coordinate reference
frame. Like graph paper, we use an X, Y (and sometimes Z) grid to place things in a picture.
• 2D system: X (horizontal) and Y (vertical).
• 3D system: X, Y, and Z (depth).
The Cartesian coordinate system is a method for locating points using numbered axes
•Object Definition: Objects are defined by geometric specifications:
• Line: Two endpoint positions.
• Polygon: Set of vertex positions.
•Scene Description includes:
• Coordinate positions,Color and other properties. We also store details like color and size.
• Coordinate extents (min/max x, y, z) → also called bounding box or bounding rectangle in 2D.
•Viewing Process:
• Scene info passed to viewing routines.
• Visible surfaces are determined.
• Objects mapped to screen positions.
•Scan Conversion:
• Converts scene data to pixel data.
• Stores color values in the frame buffer.
• Displays objects on the output device.

Screen Coordinates

•Screen Coordinates: Integer values that match pixel positions in the frame buffer.
•Pixel at (x, y) → x = column (left to right), y = scan line (top to bottom).
•Top-left corner is usually the origin (0,0).
•Custom coordinate systems can be used in software (e.g., origin at bottom-left).
•Conversion:
•Picture coordinates (e.g., Cartesian) are converted to pixel positions using viewing routines.
•Scan-Line Algorithms:
•Used to determine which pixels to fill for shapes (e.g., lines, polygons).
•Pixel size is finite — assume the coordinate points to the center of a pixel.
•Pixel Operations:
•setPixel(x, y) – sets the color at a pixel.
•getPixel(x, y, color) – gets the color (as RGB value) from a pixel.
•3D Scenes:
•Use (x, y, z) coordinates; z adds depth info.
•In 2D, z = 0.
Simple Explanation:
•The screen is like a grid of tiny squares called pixels.
•Each pixel has a position given by (x, y) numbers.
•(0,0) usually starts at the top-left corner.
•The computer figures out which pixels to light up when drawing shapes, like lines or circles.
•Colors for pixels are stored in memory using a function like setPixel(x, y).
•To find out the color at a pixel, we use getPixel(x, y, color).
For 3D images, we also track how far back things are using a z value.
Absolute and Relative Coordinate Specifications

•Absolute coordinates: Specify exact positions in the coordinate system (e.g., (3, 8)).
•Relative coordinates: Specify positions as offsets from the current position.
•Example:
•Current position: (3, 8)
•Relative move: (2, -1)
•New absolute position: (5, 7)
•Use cases: Useful in pen plotters, drawing/painting apps, publishing tools, etc.
•How it works:
•Set a current position
•Provide a sequence of relative coordinates (offsets)
•Flexible systems: Some graphics programs allow both absolute and relative coordinates.
Fill-Area Primitives

•Definition:
A fill area is a region filled with a solid color or pattern.
•Applications:
•Represent surfaces of solid 3D objects
•Used in drawing tools, CAD, simulations, etc.
•Typical Shape:
•Mostly polygons (planar surfaces).Can include circles, curves, or splines
•Why Polygons?
•Easy to process with linear equations,Efficient for rendering,Supported by most graphics libraries
•Key Term:
•Graphics Object: Any object modeled using polygon surface patches
• Approximating a curved surface with polygon facets is sometimes referred to as
surface tessellation, or fitting the surface with a polygon mesh.
• Surface tessellation is the process of dividing a curved surface into smaller flat
polygons (usually triangles or quadrilaterals) to approximate the shape for easier rendering in computer
graphics.

• Displays of such figures can be generated quickly as wire-frame views, showing only the polygon
edges to give a general indication of the surface structure. Then the wire-frame
model could be shaded to generate a display of a natural-looking material surface.
Objects described with a set of polygon surface patches are usually referred to as
standard graphics objects, or just graphics objects.
Polygon Fill Areas

• A polygon is a flat (planar) shape made by connecting 3 or more points (vertices) with straight lines (edges):
all vertices must lie in one plane, edges connect in sequence to form a closed loop, and
edges do not cross (except at endpoints).
•Simple Polygon: No crossing edges. Common examples: triangle, rectangle, octagon, decagon.
•In graphics, vertices may sometimes not lie perfectly in one plane, due to
round-off errors, incorrect input, or curved-surface approximations.
•Solution: Divide the shape into triangles (always planar and stable).
Concave vs. Convex Polygons:
•Convex: All interior angles < 180°; all vertices on one side of any edge’s extension.
All corners (angles) point outward.
No part of the shape caves in.
A line drawn between any two points inside the shape stays inside.
Example: square, regular hexagon.
•Concave: Some vertices lie on opposite sides of an edge extension.
At least one corner (angle) points inward.
The shape has a "dent" or "cave."
A line between two points inside might go outside the shape.
Example: a star shape or arrowhead.
Splitting Concave Polygons:

•Many graphics algorithms work better with convex polygons. Concave polygons can be split into smaller
convex parts.
•This can be accomplished using edge vectors and edge cross-products.

Method: Using Vectors & Cross-Products


1.Edge Vector (Ek):
For two consecutive vertices Vk and Vk+1:
Ek = Vk+1 − Vk

2.Cross Product Test:


1. Calculate cross-products of consecutive edge vectors.
2. If z-components of cross-products have both signs (positive & negative),
the polygon is concave.
3. If all z-components are same sign, the polygon is convex.
•POLYGON TABLE

•Scene Objects: Often made up of polygon meshes.


•Each Polygon Includes:
•Geometry: Vertex coordinates, surface orientation.
•Attributes: Color, transparency, reflectivity, texture.

 Polygon Data Tables


•Two main types:
• Geometric Tables: For structure (vertices, edges, surfaces).
• Attribute Tables: For appearance.

Geometric Tables (3-Table Method)


1.Vertex Table: Stores vertex coordinates.
2.Edge Table: Points to vertex pairs (edges).
3.Surface-Facet Table: Points to edges making up a polygon.
•Helps in efficient rendering and easy reference.
Plane Equations
The general equation of a plane is

Ax + B y + C z + D = 0

Plane equations can be used to identify the position of spatial points relative
to the polygon facets of an object. For any point (x, y, z) not on a plane with
parameters A, B, C, D, we have
Ax + By + Cz + D ≠ 0
Thus, we can identify the point as either behind or in front of a polygon surface
contained within that plane according to the sign (negative or positive) of
Ax + By + Cz + D:
if Ax + By + Cz + D < 0, the point (x, y, z) is behind the plane
if Ax + By + Cz + D > 0, the point (x, y, z) is in front of the plane
Character Primitives
Purpose of Text in Graphics
•Used for labels, signs, and annotations in:
• Graphs and charts, Simulations and visualizations, Building signs, vehicle markings

Typeface vs Font
•A typeface is the overall design of a character family (e.g., Helvetica, Courier, Palatino).
•A font originally meant a specific size and style of a typeface (e.g., 12-point Courier Italic).
•Today, "font" and "typeface" are often used interchangeably.
Point System
•1 inch = 72 points. So, a 14-point font is about 0.5 cm tall.
Font Categories
a. By Style
• Serif fonts: Have small finishing strokes (e.g., Times New Roman)
• Sans-serif fonts: Clean and without strokes (e.g., Arial)
b. By Width
• Monospace fonts: All characters have the same width
• Proportional fonts: Character width varies (more natural-looking)
Font Representations in Computers
There are two main ways to store and display fonts:
a. Bitmap Fonts (Raster Fonts)
•Each character is defined by a grid of 0s and 1s
•1s represent pixels that should be colored/displayed
•Easy and fast to display, but:
• Need more storage space for each size/style
• Scaling causes pixelation and jagged edges
• Only scalable in integer steps
b. Outline Fonts (Stroke or Vector Fonts)
•Characters defined by lines and curves
•More flexible: can be scaled smoothly
•Styles like bold or italic can be generated by modifying curves
•Takes more processing time (requires scan conversion to pixels)

Character Display in Graphics


Graphics systems often include:
•String-drawing functions (to display full words or sentences)
•Single-character functions, often used for:
• Markers in plots or networks,Common markers: dots, crosses, asterisks (*), circles
These markers are sometimes called polymarkers, similar to polylines.
GLUT Font Types
a) Bitmap Fonts
•Use glutBitmapCharacter(font, character);
•Parameters:
•font: the bitmap font to use (e.g., GLUT_BITMAP_9_BY_15, which displays each character as a 9×15-pixel bitmap)

•character: ASCII value or actual character (e.g., 'A', 65)


•Available Fonts:
•Fixed-width:
•GLUT_BITMAP_8_BY_13
•GLUT_BITMAP_9_BY_15
•Proportional (Times/Helvetica):
•GLUT_BITMAP_TIMES_ROMAN_10, GLUT_BITMAP_TIMES_ROMAN_12
•GLUT_BITMAP_HELVETICA_10, 12, 18
glRasterPos2i(x, y);
for (int k = 0; k < 36; k++)
    glutBitmapCharacter(GLUT_BITMAP_9_BY_15, text[k]);

glRasterPos2i(x, y);
•Sets the starting position on the screen (or window) for drawing bitmap text.
•(x, y) are window coordinates where the first character will be drawn.
•This is like placing the "text cursor" at a specific spot.

for (int k = 0; k < 36; k++)


•A loop that repeats 36 times—to display 36 characters.
•text[k] gives the k-th character from a character array or string called text.

glutBitmapCharacter(GLUT_BITMAP_9_BY_15, text[k]);
•Draws the current character (text[k]) using the 9x15 pixel fixed-width bitmap font.
•Each time it's called:
•The character is drawn at the current raster position.
•Then the raster position moves to the right by the character’s width (9 pixels).
glutStrokeCharacter(font, character);

This function is used to draw characters using stroke (outline) fonts in OpenGL with the GLUT library.

Feature              | Bitmap Font             | Stroke Font
Function used        | glutBitmapCharacter()   | glutStrokeCharacter()
Appearance           | Pixel-based             | Line-based (outline)
Scalable             | ❌ (not smooth)          | ✅ (smooth at any size)
Transformable (3D)   | ❌                       | ✅
Rendering speed      | Fast                    | Slower (more processing needed)


Attributes of Graphics Primitives

Attribute parameters in graphics systems control how graphics primitives (like lines, areas, or text) are displayed.
These parameters define properties such as color, size, and style. Basic attributes determine visual features (e.g.,
line thickness or text font), while special-condition attributes handle interactive features like visibility or
detectability.
There are two main methods to manage attributes:
1.Function Parameters: Include attributes directly in the function that draws the primitive.
2.State System: Maintain a list of current attribute values (state variables). Functions update these, and the
system uses them when rendering primitives.
A state system keeps track of these attribute settings and remains in a specific state until updated. OpenGL is an
example of a graphics library that uses this approach.
Color is a fundamental attribute for all graphics primitives. Users can select colors through numerical values,
menus, or sliders. In display systems, these color values are converted into signals—like electron beam
intensities for monitors or ink/pen choices for plotters.
In raster systems, color is often represented using RGB components (Red, Green, Blue). There are two main
ways to store color in the frame buffer:
1.Direct RGB Storage: RGB values are stored directly for each pixel. With 3 bits per pixel (1 per color), 8
colors are possible. Increasing to 6 or 24 bits per pixel greatly expands the color range (up to ~16.7 million
colors with 24 bits).
2.Color Table (Indexed Color): Instead of storing full RGB values, pixel values are indices pointing to a
separate color table. This saves memory but limits the number of simultaneously displayable colors.
Color Tables

A color lookup table (CLUT) or color map is a method used in computer graphics to manage color
representation efficiently. Instead of storing full RGB values for each pixel, the frame buffer stores indices that
refer to entries in a color table. Each entry in this table typically contains a 24-bit RGB color, allowing a palette
of 16.7 million possible colors.
In the example provided, each pixel uses an 8-bit index (allowing 256 possible values), and the color table maps
these to RGB values. This setup reduces memory requirements—only 1 MB for the frame buffer—while
allowing up to 256 simultaneous colors to be displayed.
Color tables are especially useful in:
•Design and visualization, where changing a table entry instantly updates all relevant pixels.
•Image processing, allowing for threshold-based color mapping.
•Systems with multiple displays, which may use different color tables.
Although this method limits simultaneous colors compared to true color systems, it offers flexibility and
efficiency, particularly in applications where full-color range is not essential. Some systems support both indexed
color and direct color storage for added versatility.
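The indexed-color scheme can be sketched in a few lines (a minimal illustration; the table entries and frame-buffer size are our own example values):

```python
# Color lookup table: 256 entries, each a 24-bit RGB triple.
color_table = [(0, 0, 0)] * 256
color_table[1] = (255, 0, 0)   # red
color_table[2] = (0, 255, 0)   # green

# Frame buffer stores one 8-bit index per pixel:
# 1024 x 1024 pixels x 1 byte = 1 MB, versus 3 MB for direct 24-bit RGB.
WIDTH, HEIGHT = 1024, 1024
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]  # all pixels -> index 0
frame_buffer[10][20] = 1                             # one pixel -> red

def pixel_color(x, y):
    """Resolve a pixel's displayed color through the lookup table."""
    return color_table[frame_buffer[y][x]]

print(pixel_color(20, 10))  # (255, 0, 0)
```

Changing a single table entry (say, color_table[1]) instantly recolors every pixel whose index is 1 — the property that makes color tables attractive for design and visualization work.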
Grayscale
In modern computer-graphics systems, RGB color functions are commonly used to generate grayscale shades
by setting equal values for red, green, and blue components. When all three RGB values are the same, the
resulting color is a shade of gray:
•Values near 0.0 produce dark gray (almost black). Values near 1.0 produce light gray (almost white).
Grayscale is widely used in:
•Enhancing black-and-white photographs,Creating visualization effects,Simplifying images for analysis
This method allows for smooth transitions between light and dark areas without using full color.

 Besides the RGB model, other three-component color models are also important in computer graphics:
•CMY/CMYK (Cyan, Magenta, Yellow, and Black): Commonly used for printers and color printing processes.
•HSL/HSV (Hue, Saturation, Lightness/Value): Used in color interfaces to describe colors based on perception,
such as brightness or vividness.
Color as a Physical and Psychological Phenomenon:
•Physically, color is electromagnetic radiation within a specific frequency and energy range.
•Perceptually, color depends on human vision and interpretation.
Important Terms:
•Intensity: A physical measure of light energy emitted over time in a specific direction.
•Luminance: A psychological measure of perceived brightness.
These concepts bridge the gap between the physics of light and the psychology of color perception, providing
more accurate and flexible tools for color handling in graphics.
OpenGL Color Modes and Functions
1. RGB and RGBA Modes:
•RGB Mode: Uses Red, Green, and Blue components to define color.
•RGBA Mode: Adds an Alpha component for transparency and blending.
•Use glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB) to set RGB mode.
•Use glutInitDisplayMode(GLUT_RGBA) to enable alpha blending.
2. The glColor* Functions:
•Sets the current color for primitives.
•Syntax examples:
•glColor3f(0.0, 1.0, 1.0); sets color to cyan (RGB)
•glColor4f(r, g, b, a); includes alpha for blending.
•Data types: float, int, byte (e.g., glColor3ub(0, 255, 255))

Color-Index Mode:
•Uses color tables instead of RGB values.
•Set color by:
glIndexi(index);
•Example of setting a table value:
glutSetColor(index, red, green, blue);
Point Attributes
In OpenGL, points can have two key attributes: color and size. These are managed using a state system,
meaning the currently set values affect all subsequently defined points until changed.
•Color can be specified using RGB/RGBA values or by an index in a color table (in color-index mode).
•Size is set in integer multiples of pixel size, so a larger point appears as a square block of pixels on raster
systems.
In OpenGL, the color and size of a point are set using the state system and affect how points are displayed:
•Color is set using glColor (for RGB values) or glIndex (for color-index mode).
•Size is set with glPointSize(size), where size is a positive floating-point value.
 •size = 1.0 → 1×1 pixel
 •size = 2.0 → 2×2 pixels
 •size = 3.0 → 3×3 pixels
•If antialiasing is enabled, the edges of the point are smoothed.
•The default point size is 1.0.
•Color attributes can be set inside or outside a glBegin/glEnd block; glPointSize should be called outside the block.

glColor3f(1.0, 0.0, 0.0);   // Red color
glBegin(GL_POINTS);
    glVertex2i(50, 100);    // Default size (1x1)
glEnd();

glPointSize(2.0);
glColor3f(0.0, 1.0, 0.0);   // Green color
glBegin(GL_POINTS);
    glVertex2i(75, 150);    // 2x2 point
glEnd();

glPointSize(3.0);
glColor3f(0.0, 0.0, 1.0);   // Blue color
glBegin(GL_POINTS);
    glVertex2i(100, 200);   // 3x3 point
glEnd();
Line Attributes
In computer graphics, a straight-line segment is defined using three basic attributes: color, width, and style.
•Line Color: Applied uniformly across all graphics primitives using a common function.
•Line Width: Depends on device capabilities. On raster displays, thick lines are rendered as adjacent parallel
pixels (multiples of standard width), while devices like pen plotters may use different pen sizes.
•Line Style: Includes options like solid, dashed, or dotted lines. These are created by modifying line-drawing
algorithms (e.g., Bresenham's algorithm) to control the pattern of visible segments and gaps.
Additional effects like pen or brush strokes can also be applied for artistic rendering.
In OpenGL, the appearance of a straight-line segment is controlled using three key attributes: line color, line
width, and line style.
•Line Width: Set using glLineWidth(width); where width is a float rounded to the nearest non-negative integer.
A value of 0.0 defaults to standard width (1.0). Antialiasing allows smoother and fractional-width lines, but
hardware support may vary.
•Line Style: Set using glLineStipple(repeatFactor, pattern);, where:
•pattern is a 16-bit integer (e.g., 0x00FF) indicating dash/dot pattern.
•repeatFactor specifies how many times each bit is repeated.
•Line stippling must be activated using glEnable(GL_LINE_STIPPLE); and can be disabled using
glDisable(GL_LINE_STIPPLE);.
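The stipple mechanics can be sketched outside OpenGL (following the glLineStipple semantics described above: bit 0 of the 16-bit pattern is applied first, and each bit is repeated repeatFactor times; the function name is ours):

```python
def expand_stipple(pattern, repeat_factor, length):
    """Return 0/1 on-off flags for `length` pixels along a line, cycling
    through the 16-bit stipple pattern starting from bit 0."""
    flags = []
    for k in range(length):
        bit = (k // repeat_factor) % 16   # which pattern bit governs pixel k
        flags.append((pattern >> bit) & 1)
    return flags

# 0x00FF -> 8 pixels on, 8 pixels off, repeating (a long-dash pattern).
print(expand_stipple(0x00FF, 1, 20))
```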

•Color Interpolation: By assigning different colors to endpoints and using glShadeModel(GL_SMOOTH);,
OpenGL smoothly interpolates colors along the line. Using GL_FLAT results in a solid color taken from the
second vertex.
•Other Effects: Additional effects include color gradation, blending with alpha values, and simulating brush
strokes using pixmaps and blending features for artistic rendering.
Curve attributes
Curve attributes in OpenGL are similar to those for straight-line segments, including color, width, line style
(e.g., dashed or dotted), and brush or pen effects.
•Interactive Drawing: Curves can be sketched using input devices like a stylus and tablet, as in painting and
drawing programs, allowing for brush stroke effects and varied patterns.
•OpenGL and Curves: OpenGL does not treat curves as basic primitives. Instead:
• Curves can be approximated using a series of short line segments.
• More accurate curves can be drawn using splines, supported via:
• OpenGL evaluator functions.
• GLU (OpenGL Utility Library) for more advanced spline rendering.
These methods enable the representation of smooth, complex curves in OpenGL despite its lack of native curve
primitives.
Fill-Area Attributes
Most graphics systems restrict fill areas to polygons, often requiring convex polygons to simplify processing.
However, advanced systems support filling curved regions like circles and ellipses, and paint programs allow
filling of arbitrarily shaped areas.
•Fill Styles: Common fill options include:
• Solid color
• Patterned fill
• Hollow (outline only)
•Advanced Fill Options:
• Fill areas can have textured, blended, or brush-style fills.
• Polygon edges can be customized with color, width, and line style.
• Different display attributes can be applied to front and back faces.
•Tiling Patterns: Fill patterns can be specified using bit arrays or color arrays. These patterns (masks) are tiled
across the area from a starting point, repeating without overlap.
•Color-Blended Fills:
• Use transparency factors to mix pattern colors with the background.
• Soft-fill or tint-fill methods blend colors smoothly, useful for antialiased edges or repainting semi-
transparent areas while maintaining original gradients.
These techniques provide flexibility in rendering rich, visually appealing filled shapes in both vector and raster
graphics.
Character Attributes
•Font (Typeface):
Choice of styles like Helvetica, Times Roman, Courier, etc.
Fonts can be bold, italic, underlined, outlined, or shadowed.
•Color:
Text color is stored as an attribute and used to set pixel values in the frame buffer.
•Size:
Measured in points (1 point ≈ 1/72 inch).
Can scale both height and width together or separately.
•Spacing:
Control spacing between characters for better readability.
•Orientation:
Set using an up-vector to rotate text (e.g., 45° rotation).
•Arrangement:
Text can be arranged horizontally, vertically, forward, or backward.
•Alignment:
Text can be aligned by baselines, centers, or entire strings (horizontal and vertical).
•Text-Precision:
Controls the level of detail in text rendering (high or low precision).
•Special Characters:
Libraries provide extra symbols like circles and crosses for graphs and layouts.
Character Display in OpenGL
In OpenGL, characters can be displayed in two ways:
1.Designing a custom font set using bitmap functions.
2.Using GLUT’s predefined bitmap and stroke character sets.
The display color for both types is controlled by the current color state. Font size and spacing are determined by
the font type, like GLUT_BITMAP_9_BY_15 or GLUT_STROKE_MONO_ROMAN.
For stroke fonts, additional attributes can be modified:
•Line width using glLineWidth.
•Line style using glLineStipple.
Antialiasing Functions
•Aliasing:
Jagged, stair-step appearance caused by under sampling during rasterization.
•Antialiasing:
Techniques to smooth edges and reduce jaggedness in raster images.
•OpenGL Antialiasing Support:
•Use glEnable(primitiveType);
•primitiveType can be:
•GL_POINT_SMOOTH
•GL_LINE_SMOOTH
•GL_POLYGON_SMOOTH
•Color Blending for Smoothing:
•Enable blending with glEnable(GL_BLEND);
•Set blending function:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
•Use large alpha values for better smoothing.
•Antialiasing with Color Tables:
•Create a color ramp (gradual transition) between background and object colors for smooth boundaries.
 Implementation Algorithms for Graphics Primitives and Attributes

Line-Drawing Algorithms
• Line Definition:A line segment is defined by the coordinates of its two endpoints.
•In Raster Display Process:
•Endpoints are projected to integer screen coordinates.
•Nearest pixel positions along the line path are determined.
•Line color is loaded into the frame buffer at these pixel positions.
•Digitization Effect:
•Rounding (e.g., (10.48, 20.51) → (10, 21)) causes approximation.
•Results in a stair-step appearance ("the jaggies"), especially for non-horizontal/vertical lines.
•Impact:
•More visible on low-resolution screens.
•Reduced by using high-resolution displays or pixel intensity adjustments (antialiasing techniques).
1. DDA (Digital Differential Analyzer) Algorithm
Concept:
•Sample at unit intervals.
•Calculate corresponding pixel positions.
DDA (Digital Differential Analyzer) is an algorithm used in computer graphics for scan-converting and drawing
straight lines.
It works by calculating pixel positions along a line by sampling at unit intervals in one coordinate (either x or y)
and computing the corresponding value of the other coordinate using the line's slope.
The DDA algorithm incrementally generates points between two specified endpoints, using simple addition
operations and rounding to the nearest integer pixel location.
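The incremental procedure can be sketched as follows (a minimal illustration assuming nonnegative endpoint coordinates, so that int(v + 0.5) rounds to the nearest integer):

```python
def dda_line(x0, y0, x1, y1):
    """DDA scan conversion: sample at unit intervals along the major axis
    and round the other coordinate to the nearest pixel."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))      # number of unit-interval samples
    if steps == 0:
        return [(x0, y0)]
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    points = [(x0, y0)]
    for _ in range(steps):
        x += x_inc                     # simple additions per step...
        y += y_inc
        points.append((int(x + 0.5), int(y + 0.5)))  # ...then round to a pixel
    return points

print(dda_line(0, 0, 5, 2))
```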
2. Bresenham Line-Drawing Algorithm
Purpose: To draw a straight line between two points on a grid (like pixels on a
screen) using only integer calculations (no floating-point arithmetic).

Starting coordinates = (X0, Y0)


Ending coordinates = (Xn, Yn)
Step-01:
Calculate ΔX and ΔY from the given input.

These parameters are calculated as- ΔX = Xn – X0


ΔY =Yn – Y0
Step-02:
Calculate the decision parameter Pk.It is calculated as-

Pk = 2ΔY – ΔX
Step-03:
Suppose the current point is (Xk, Yk) and the next point is (Xk+1, Yk+1).
Find the next point depending on the value of the decision parameter Pk,
using the two cases below:

Case-01 (Pk < 0):  Xk+1 = Xk + 1,  Yk+1 = Yk,      Pk+1 = Pk + 2ΔY
Case-02 (Pk >= 0): Xk+1 = Xk + 1,  Yk+1 = Yk + 1,  Pk+1 = Pk + 2ΔY – 2ΔX

Step-04:
Keep repeating Step-03 until the end point is reached or the number of iterations equals ΔX – 1.
Problem-01:Calculate the points between the starting coordinates (9, 18) and ending coordinates (14, 22).
Solution-
Given- Starting coordinates = (X0, Y0) = (9, 18)
Ending coordinates = (Xn, Yn) = (14, 22)
Step-01:
Calculate ΔX and ΔY from the given input. ΔX = Xn – X0 = 14 – 9 = 5
ΔY =Yn – Y0 = 22 – 18 = 4
Step-02:
Calculate the decision parameter. Pk= 2ΔY – ΔX= 2 x 4 – 5= 3
So, decision parameter Pk = 3
Step-03:
As Pk >= 0, so case-02 is satisfied.
Thus,

Pk+1 = Pk + 2ΔY – 2ΔX = 3 + (2 x 4) – (2 x 5) = 1


Xk+1 = Xk + 1 = 9 + 1 = 10
Yk+1 = Yk + 1 = 18 + 1 = 19

Similarly, Step-03 is repeated until the end point (14, 22) is reached, i.e., four more iterations.
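The whole procedure for the 0 < slope < 1 case, applied to this example (a sketch; the function name is ours):

```python
def bresenham(x0, y0, xn, yn):
    """Bresenham line drawing for 0 < slope < 1, integer arithmetic only."""
    dx, dy = xn - x0, yn - y0
    p = 2 * dy - dx                   # initial decision parameter
    x, y = x0, y0
    points = [(x, y)]
    for _ in range(dx):               # one step per unit of x
        x += 1
        if p < 0:                     # Case-01: keep y
            p += 2 * dy
        else:                         # Case-02: step y up
            y += 1
            p += 2 * dy - 2 * dx
        points.append((x, y))
    return points

print(bresenham(9, 18, 14, 22))
```

Running it on the worked example reproduces the points (9,18), (10,19), (11,20), (12,20), (13,21), (14,22).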
Circle-Generating Algorithms
Ellipse-Generating Algorithms
The equation of an ellipse can be written in terms of the ellipse center
coordinates (xc, yc) and the semi-axis parameters rx and ry as

((x – xc) / rx)² + ((y – yc) / ry)² = 1
Midpoint Ellipse Drawing Algorithm

•Similar to the midpoint circle algorithm.


•Calculate ellipse points centered at the origin (0, 0), then shift by (xc, yc).
•We consider only standard position ellipses (major and minor axes along x and y).
Pixel Addressing and Object Geometry
 Pixel Addressing in Graphics
•Graphics primitives are scan-converted into frame-buffer coordinates, which usually reference the center of
each pixel.
•However, some graphics systems (e.g., OpenGL) use alternate addressing methods, referencing positions
between pixels, aligning object boundaries with pixel edges instead of centers.
•This difference can cause the displayed image to misalign with the mathematical description of the object.
 World Coordinates vs. Pixel Coordinates
•World coordinates describe object positions as infinitesimally small points in real units (e.g., cm or meters).
•When rasterized, these are transformed into finite pixel units.
•To preserve the original geometric proportions, we can:
• Adjust pixel dimensions to match real-world dimensions.
• Map world coordinates between pixels to better align objects with raster grid.
 Parabolic Motion (Projectile Paths)
•Objects under gravity follow a parabolic path; with initial position (x0, y0) and initial velocity (vx0, vy0), the position at time t is
 x = x0 + vx0 t,  y = y0 + vy0 t – (g/2) t²
 In screen grid coordinates, each screen position is defined by grid intersections between pixels, with pixels
occupying unit squares from (x, y) to (x+1, y+1). This method simplifies raster algorithms and avoids
complications from half-integer boundaries. When drawing lines or shapes, it is important to consider the
finite size of pixels to accurately represent the intended geometry. For example, in line drawing, plotting
only the interior pixels (excluding the endpoint) ensures that the displayed line matches the mathematical
length. Similarly, when filling rectangles or circles, including border pixels can visually enlarge the shape
beyond its intended size. To maintain correct proportions, algorithms must be adjusted to avoid plotting
pixels that extend beyond the defined boundaries. This principle applies to both straight and curved objects,
ensuring that raster displays faithfully represent the original geometric specifications.
Attribute Implementations for Straight-Line Segments and Curves
Line Width:In raster graphics, line width implementation depends on device capabilities. A standard-width
line uses single pixels per step (e.g., Bresenham algorithm), while thicker lines are created by plotting extra
pixels along adjacent parallel paths. For lines with slope ≤ 1.0, this involves drawing vertical spans of pixels
in each column based on the line width. Additional enhancements like butt caps can be added by extending
the line endpoints by half the line width. Alternatively, thick lines can be represented as filled rectangles,
with endpoints extended to form square or round caps. When dealing with polylines, special joins are
needed to avoid gaps between segments. Common join types include miter (sharp corner), round (circular
cap), and bevel (flat fill). Miter joins can produce unwanted spikes at sharp angles, so systems often switch to
bevel joins in such cases to maintain visual quality.
Line Style: In raster graphics, different line styles like dashed, dotted, or thick lines are created by controlling
which pixels are turned on along the line path. A pixel mask (like 11111000) helps decide which pixels to draw
and which to skip. Thick lines are drawn by adding extra pixels next to the main line—vertically for gentle
slopes and horizontally for steep ones. However, line thickness and end shapes can look uneven, especially at
angles like 45°. To fix this, line caps are added—butt caps (flat ends), round caps (semi-circles), or projecting
caps (square extensions). For dashed lines, dash lengths can look different at different angles, so we can adjust
pixel counts or treat each dash as a small line to keep them consistent.

pen and brush options:

In raster graphics, pen and brush shapes are defined using pixel masks,
which specify the pattern of pixels to be drawn along a line. For example,
a rectangular pen moves along the line path, applying the mask at each
step. To avoid redrawing pixels, the system tracks and merges horizontal
pixel spans for each scan line. Line thickness can be adjusted by
changing the mask size—a 2×2 mask makes a thinner line, while a 4×4
mask creates a thicker one. To add patterns, the pattern can be combined
with the pen mask, allowing for custom line textures and styles.
Curve Attributes :
Curve-drawing in raster graphics can be adapted to display different widths and styles, just like line drawing.
To create thick curves, we use vertical pixel spans where the slope is ≤1 and horizontal spans where the
slope >1. Another method is to draw two parallel curves on either side of the original path, separated by half
the desired width. For example, to draw a thick circular arc, we use two concentric arcs.
Pixel masks (like 11100) can be used to add dashed or dotted patterns to curves, but the length of dashes
varies with curve slope unless adjusted. To keep dashes uniform, we can draw them along equal angular
intervals.
Pen or brush shapes (e.g., rectangular or circular) can also be replicated along a curve's path to draw thick or
styled curves. For even thickness, a circular pen or a rotated pen that aligns with the curve's slope is preferred.
General Scan-Line Polygon-Fill Algorithm
Scan-Line Fill for Convex Polygons
In a convex polygon, only one interior span exists per scan line.
•For each scan line crossing the polygon, we only need to find two edge intersections (left and right boundaries).
•Vertex crossings are treated as single intersection points.
•If a scan line intersects a single vertex (like an apex), we plot only that point.
•The algorithm is simpler than for general polygons because:
• There are no complex intersections or multiple spans.
• Fewer checks are needed for edge processing.
•Some systems simplify further by using triangles only, which makes edge processing even easier (only three
edges to handle).
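The span computation for a convex polygon can be sketched as follows (names are illustrative; the half-open rule ymin <= y < ymax counts a shared vertex once):

```python
def convex_spans(vertices):
    """For each scan line crossing a convex polygon, return the (xmin, xmax)
    span between the two edge intersections."""
    ys = [y for _, y in vertices]
    spans = {}
    n = len(vertices)
    for y in range(min(ys), max(ys)):
        xs = []
        for k in range(n):
            (x0, y0), (x1, y1) = vertices[k], vertices[(k + 1) % n]
            if y0 == y1:
                continue               # horizontal edge: no crossing
            lo, hi = (y0, y1) if y0 < y1 else (y1, y0)
            if lo <= y < hi:           # half-open rule at vertices
                t = (y - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if xs:
            spans[y] = (min(xs), max(xs))   # convex: exactly one span
    return spans

print(convex_spans([(0, 0), (4, 0), (0, 4)]))
```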

Scan-Line Fill for Curved Boundaries


•Filling curved areas is slower than polygons due to nonlinear boundary equations.
•Incremental calculations like in polygons are not directly usable due to continuously changing slopes.
•For simple curves (e.g., circles, ellipses), we can use:
• Midpoint method to find two boundary intersections per scan line.
• Symmetry (between quadrants or octants) to reduce calculations.
•For mixed boundaries (curve + straight line), we combine both methods.
•For complex curves, boundaries are often approximated with line segments to simplify filling.
Fill Methods for Areas with Irregular Boundaries
Another approach for filling a specified area is to start at an inside position and “paint” the interior, point by
point, out to the boundary. This is a particularly useful technique for filling areas with irregular borders, such as a
design created with a paint program. Generally, these methods require an input starting position inside the area to
be filled and some color information about either the boundary or the interior.
1. Boundary-Fill Algorithm:
If the boundary of some region is specified in a single color, we can fill the interior of this region, pixel by pixel, until the
boundary color is encountered. This method, called the boundary-fill algorithm
2. Flood-Fill Algorithm
Sometimes we want to fill in (or recolor) an area that is not defined within a single-color boundary. We can
paint such areas by replacing a specified interior color instead of searching for a particular boundary color.
This fill procedure is called a flood-fill algorithm.
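A minimal stack-based sketch of flood fill on a character grid (a real implementation would read and write frame-buffer pixels; the 4-connected neighborhood used here is one common choice):

```python
def flood_fill(grid, x, y, new_color):
    """Replace the connected region of the interior color at (x, y) with
    new_color, spreading through 4-connected neighbors."""
    old = grid[y][x]
    if old == new_color:
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old:
            grid[cy][cx] = new_color
            stack.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

canvas = [
    list("BBBBB"),
    list("B..BB"),
    list("B.B.B"),
    list("BBBBB"),
]
flood_fill(canvas, 1, 1, "F")          # start from an interior '.' pixel
print(["".join(row) for row in canvas])  # the isolated '.' is not reached
```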
