Unit 1 Graphics
INTRODUCTION TO COMPUTER
GRAPHICS
Definition and Importance of Computer Graphics
The term computer graphics (CG) describes the use of computers to create
and manipulate images.
Graphics can be two- or three-dimensional
Computer Graphics is the creation and manipulation of images or pictures
with the help of computers.
Beam-Penetration Method (Color CRT)
💡 How It Works:
•The screen is coated with two phosphor layers that glow in different colors (usually red and green) when hit by the electron beam.
•By controlling how deep the electron beam penetrates the phosphor coating, the monitor can produce different colors.
🧪 Color Control:
•Low beam acceleration (slow electrons) → only the outer phosphor layer glows (red).
•Higher beam acceleration (fast electrons) → the beam penetrates deeper and excites the inner layer (green).
•Intermediate acceleration → both layers glow, blending the colors (orange or yellow).
3D Viewing Devices
3D displays are capable of conveying depth to the viewer. The most common type of 3D
display is the stereoscopic display, which works by showing a slightly different image to each eye.
The brain processes and merges these two separate images to create a sense of depth and form a
three-dimensional effect.
A 3D display can be classified into several categories −
•Stereoscopic displays − These provide a basic 3D effect and are widely used in devices
such as virtual reality (VR) headsets.
•Holographic displays − These create a more realistic 3D effect by combining
stereoscopic methods with accurate focal lengths. Unlike stereoscopic displays,
holographic displays can show a true 3D image that can be visible from different angles.
Head-Mounted Displays
• Head-mounted displays (HMDs) are advanced 3D devices commonly used for virtual
reality (VR) experiences. These displays consist of two small screens placed
close to the eyes, each showing a different image. Magnifying lenses enlarge the
images, producing the stereoscopic effect.
Many modern HMDs come with head-tracking technology, allowing users to "look
around" in a virtual world simply by moving their heads. This eliminates the need
for external controllers. VR headsets are a great example of head-mounted displays
and are popular in gaming, simulations, and virtual tours.
Graphics Workstation
A graphics workstation in computer graphics (CG) is a high-performance computer system specifically
designed for rendering, modeling, and processing graphics-intensive tasks. It’s used by professionals in
fields like animation, game design, CAD (computer-aided design), architecture, and scientific
visualization.
Key Features of a Graphics Workstation:
1.Powerful GPU (Graphics Processing Unit):
•Handles rendering, shading, and 3D transformations.
2.High-Performance CPU:
•Manages the logic, simulation, and multitasking for complex CG software (e.g., Blender,
Maya, 3ds Max).
•Often multi-core for parallel processing.
3.Large RAM Capacity:
•Needed for handling big textures, scenes, and simulations (8GB–128GB or more).
4.High-Resolution Display(s):
•Supports precise color accuracy and detail, often using 4K or better monitors.
5.Large and Fast Storage:
6.Specialized Software:
In computer graphics, a viewing system refers to the process and components involved in projecting a 3D scene
onto a 2D screen, allowing users to visualize 3D objects correctly. It involves defining how a virtual camera
views the scene and how objects are transformed from world coordinates to screen coordinates.
Main Components of a Viewing System:
1. Modeling Coordinates (Object Space): The local coordinate system of an individual object.
2. World Coordinates: All objects are placed in a global scene using transformations (translate, rotate, scale).
3. Viewing Coordinates (Camera Space): The scene is transformed based on the camera or viewer's position and orientation.
4. Projection Coordinates: 3D coordinates are projected onto a 2D plane using either:
   • Perspective Projection: mimics human vision, with depth.
   • Orthographic Projection: no perspective distortion (used in CAD, engineering).
5. Normalized Device Coordinates (NDC): Coordinates are mapped to a standardized cube (usually -1 to 1 in all axes).
6. Viewport Transformation (Screen Space): NDCs are scaled and mapped to the actual screen resolution for display (see the sketch below).
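The last stage can be expressed directly as arithmetic. Below is a minimal sketch of the viewport transformation in plain C; the function name, window size, and the decision to put (0, 0) at the top-left corner are illustrative assumptions, not part of any standard API.
/* Map normalized device coordinates in [-1, 1] to pixel positions in a
   width x height window, with (0, 0) at the top-left corner. */
void ndcToScreen(double ndcX, double ndcY, int width, int height, int *px, int *py)
{
    *px = (int)((ndcX + 1.0) * 0.5 * (width - 1));
    *py = (int)((1.0 - (ndcY + 1.0) * 0.5) * (height - 1));   /* flip y: NDC y grows upward, screen y downward */
}
/* Example: ndcToScreen(0.0, 0.0, 800, 600, &x, &y) gives roughly the window centre (399, 299). */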
In computer graphics, input devices and input primitives are essential for user interaction with
graphical systems, such as drawing, selecting, manipulating, or navigating objects in a scene.
Input Devices :These are the hardware tools that allow users to give input to the graphics system.
Mouse: Used for pointing, clicking, and dragging in 2D/3D space.
Keyboard: For text input and shortcut commands.
Graphics Tablet: Allows freehand drawing with pressure sensitivity.
Touchscreen: Combines display and touch input (used in tablets, phones).
Joystick / Gamepad: For navigating or controlling objects, mainly in simulations or games.
Light Pen: An older device for drawing directly on the screen.
Trackball: Similar to a mouse but stays stationary.
3D Input Devices: Devices like 3D mice or VR controllers for navigating in 3D space.
Input Primitives
These are the basic input operations or events that a user can perform through input devices in a graphics
system. They are recognized and processed by the software.
Locator: Returns a position (x, y) or (x, y, z). Used for pointing.
Hardcopy devices are output devices that produce permanent, physical copies of digital images, graphics, or
documents. In computer graphics, they're used to output charts, technical drawings, designs, or rendered images
for reports, presentations, and manufacturing.
Types of Hardcopy Devices:
1. Printers
Used to print digital graphics or images on paper.
•Types:
• Inkjet Printers – Good for color images, photos.
• Laser Printers – Fast, sharp text and vector graphics.
• Dot Matrix Printers – Older tech, low resolution, used for forms.
• Thermal Printers – Common in receipts, labels, and portable devices.
2. Plotters
Specialized printers used for printing vector graphics like blueprints, CAD drawings, and maps.
•Types:
• Drum Plotter – Paper moves around a drum; pens move across.
• Flatbed Plotter – Paper remains stationary; pens move on X and Y axes.
• Inkjet Plotter – Combines inkjet technology with plotting for large-scale prints.
3. 3D Printers
Used to create physical 3D models from 3D digital graphics.
•Converts 3D models into layers and prints them using plastic, resin, or other
materials.
•Popular in product design, prototyping, and architecture.
A graphics network refers to a system or environment where multiple computers or devices are
interconnected to create, process, share, and display graphics or graphical data. It's especially important in
fields like CAD, simulation, gaming, animation studios, and collaborative visualization.
Graphics on the internet play a huge role in making websites and web applications interactive, engaging,
and visually appealing. From static images to dynamic 3D content, internet graphics are used across
platforms for design, gaming, education, data visualization, and more.
Types of Graphics on the Internet:
1. Raster Graphics
   • Made of pixels.
   • Formats: JPEG, PNG, GIF, WebP
   • Good for photos, textures, and screenshots.
2. Vector Graphics
   • Made of paths and curves (scalable without loss of quality).
   • Formats: SVG, PDF
   • Used for logos, icons, illustrations.
3. 3D Graphics
   • Rendered in real time using WebGL or WebGPU.
   • Used in online games, virtual tours, simulations.
4. Animated Graphics
   • Includes GIFs, Lottie files (JSON-based animations), CSS animations, and HTML5 Canvas animations.
   • Common in UI/UX, advertisements, and social media.
5. Interactive Graphics
   • Charts, maps, and data visualizations built with JavaScript libraries.
Computer Graphics Software
Computer graphics software refers to programs and tools used to create, edit, render, and manipulate visual
content—ranging from simple 2D images to complex 3D animations and simulations.
What is OpenGL?
OpenGL (Open Graphics Library) is a cross-platform, open standard API (Application Programming
Interface) used for rendering 2D and 3D graphics.
Think of it as a toolbox that helps your computer or application talk to the graphics hardware (GPU) so you
can display things like 3D models, textures, lighting effects, animations, and more.
OpenGL Constants
•All constants start with GL_ (in ALL CAPS).
•Words are all uppercase, separated by underscores.
Examples:
•GL_2D
•GL_RGB
•GL_CCW
•GL_POLYGON
•GL_AMBIENT_AND_DIFFUSE
OpenGL Data Types
•All data types start with GL and use lowercase for the actual type.
•They make sure data is the same size across all computers.
Examples:
•GLbyte → 8-bit integer
•GLshort → 16-bit integer
•GLint → 32-bit integer
•GLfloat → floating point number
•GLdouble → double precision float
•GLboolean → true/false
glVertex3fv(point); tells OpenGL to use the values in the array as a 3D vertex (a point in space).
glVertex3f(x, y, z) would take three individual float values.
glVertex3fv(array) takes an array of three float values instead (the v stands for vector or array).
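A minimal sketch of the two forms side by side (standard OpenGL calls; the array name point is just an example):
GLfloat point[] = { 1.0f, 2.0f, 3.0f };

glBegin(GL_POINTS);
    glVertex3f(1.0f, 2.0f, 3.0f);    // three individual float values
    glVertex3fv(point);              // the same vertex, passed as an array (the "v" suffix)
glEnd();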
OpenGL Associated Libraries
GLU (OpenGL Utility Library)
Screen Coordinates
•Screen Coordinates: Integer values that match pixel positions in the frame buffer.
•Pixel at (x, y) → x = column (left to right), y = scan line (top to bottom).
•Top-left corner is usually the origin (0,0).
•Custom coordinate systems can be used in software (e.g., origin at bottom-left).
•Conversion:
•Picture coordinates (e.g., Cartesian) are converted to pixel positions using viewing routines.
•Scan-Line Algorithms:
•Used to determine which pixels to fill for shapes (e.g., lines, polygons).
•Pixel size is finite — assume the coordinate points to the center of a pixel.
•Pixel Operations:
•setPixel(x, y) – sets the color at a pixel.
•getPixel(x, y, color) – gets the color (as RGB value) from a pixel.
•3D Scenes:
•Use (x, y, z) coordinates; z adds depth info.
•In 2D, z = 0.
Simple Explanation:
•The screen is like a grid of tiny squares called pixels.
•Each pixel has a position given by (x, y) numbers.
•(0,0) usually starts at the top-left corner.
•The computer figures out which pixels to light up when drawing shapes, like lines or circles.
•Colors for pixels are stored in memory using a function like setPixel(x, y).
•To find out the color at a pixel, we use getPixel(x, y, color).
For 3D images, we also track how far back things are using a z value.
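Here setPixel and getPixel are generic frame-buffer operations rather than calls from a specific library; a minimal sketch of how they could be implemented over a simple software frame buffer (the array sizes and names are illustrative assumptions):
#define WIDTH  640
#define HEIGHT 480

static unsigned char frameBuffer[HEIGHT][WIDTH][3];        /* one RGB triple per pixel */
static unsigned char currentColor[3] = { 255, 255, 255 };  /* state variable: current drawing color */

void setPixel(int x, int y)                  /* store the current color at pixel (x, y) */
{
    frameBuffer[y][x][0] = currentColor[0];
    frameBuffer[y][x][1] = currentColor[1];
    frameBuffer[y][x][2] = currentColor[2];
}

void getPixel(int x, int y, unsigned char color[3])   /* read back the RGB value at pixel (x, y) */
{
    color[0] = frameBuffer[y][x][0];
    color[1] = frameBuffer[y][x][1];
    color[2] = frameBuffer[y][x][2];
}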
Absolute and Relative Coordinate Specifications
•Absolute coordinates: Specify exact positions in the coordinate system (e.g., (3, 8)).
•Relative coordinates: Specify positions as offsets from the current position.
•Example:
•Current position: (3, 8)
•Relative move: (2, -1)
•New absolute position: (5, 7)
•Use cases: Useful in pen plotters, drawing/painting apps, publishing tools, etc.
•How it works:
•Set a current position
•Provide a sequence of relative coordinates (offsets)
•Flexible systems: Some graphics programs allow both absolute and relative coordinates.
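A small sketch of how a drawing package could keep a current position and apply relative offsets (the variable and function names here are hypothetical, not from a real library):
static int curX = 3, curY = 8;      /* current position */

void moveRel(int dx, int dy)        /* relative move: offsets from the current position */
{
    curX += dx;
    curY += dy;
}
/* moveRel(2, -1) moves the current position from (3, 8) to the absolute position (5, 7). */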
Fill-Area Primitives
•Definition:
A fill area is a region filled with a solid color or pattern.
•Applications:
•Represent surfaces of solid 3D objects
•Used in drawing tools, CAD, simulations, etc.
•Typical Shape:
•Mostly polygons (planar surfaces). Can also include circles, curves, or splines.
•Why Polygons?
•Easy to process with linear equations, efficient to render, and supported by most graphics libraries.
•Key Term:
•Graphics Object: Any object modeled using polygon surface patches.
• Approximating a curved surface with polygon facets is sometimes referred to as
surface tessellation, or fitting the surface with a polygon mesh.
• Surface tessellation: the process of dividing a curved surface into smaller flat polygons (usually
triangles or quadrilaterals) to approximate its shape for easier rendering in computer graphics.
• Displays of such figures can be generated quickly as wire-frame views, showing only the polygon
edges to give a general indication of the surface structure. Then the wire-frame
model could be shaded to generate a display of a natural-looking material surface.
Objects described with a set of polygon surface patches are usually referred to as
standard graphics objects, or just graphics objects.
Polygon Fill Areas
• A polygon is a flat (planar) shape made by connecting 3 or more points (vertices) with straight lines (edges).
All vertices must lie in one plane; the edges connect in sequence to form a closed loop, with no edge
crossings except at shared endpoints.
•Simple Polygon: No crossing edges. Common Examples: Triangle, rectangle, octagon, decagon.
•In graphics, vertices sometimes do not lie perfectly in one plane, due to:
Round-off errors
Incorrect input
Curved-surface approximations
•Solution: Divide the shape into triangles (always planar and stable).
Concave vs. Convex Polygons:
•Convex: All interior angles < 180°; all vertices on one side of any edge’s extension.
All corners (angles) point outward.
No part of the shape caves in.
A line drawn between any two points inside the shape stays inside.
Example: square, regular hexagon.
•Concave: Some vertices lie on opposite sides of an edge extension.
At least one corner (angle) points inward.
The shape has a "dent" or "cave."
A line between two points inside might go outside the shape.
Example: a star shape or arrowhead.
Splitting Concave Polygons:
•Many graphics algorithms work better with convex polygons. Concave polygons can be split into smaller
convex parts.
•This can be accomplished using edge vectors and edge cross-products; a planar polygon surface satisfies
the plane equation Ax + By + Cz + D = 0.
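As a sketch of the cross-product test just mentioned: for a polygon listed in counter-clockwise order, a vertex is concave (points inward) when the z component of the cross product of the two edge vectors meeting at it is negative. Plain C; the vertex type and function names are illustrative assumptions.
typedef struct { double x, y; } Point2;

static double crossZ(Point2 v1, Point2 v2)               /* z component of v1 x v2 for 2D vectors */
{
    return v1.x * v2.y - v1.y * v2.x;
}

int isConcaveVertex(const Point2 *poly, int n, int i)     /* vertices in counter-clockwise order */
{
    Point2 prev = poly[(i + n - 1) % n];
    Point2 curr = poly[i];
    Point2 next = poly[(i + 1) % n];
    Point2 e1 = { curr.x - prev.x, curr.y - prev.y };     /* edge arriving at vertex i */
    Point2 e2 = { next.x - curr.x, next.y - curr.y };     /* edge leaving vertex i */
    return crossZ(e1, e2) < 0.0;                          /* negative turn = concave (reflex) vertex */
}
A concave polygon can then be split by cutting along a line through such a vertex.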
Typeface vs Font
•A typeface is the overall design of a character family (e.g., Helvetica, Courier, Palatino).
•A font originally meant a specific size and style of a typeface (e.g., 12-point Courier Italic).
•Today, "font" and "typeface" are often used interchangeably.
Point System
•1 inch = 72 points, so a 14-point font is about 0.5 cm tall.
Font Categories
a. By Style
• Serif fonts: Have small finishing strokes (e.g., Times New Roman)
• Sans-serif fonts: Clean and without strokes (e.g., Arial)
b. By Width
• Monospace fonts: All characters have the same width
• Proportional fonts: Character width varies (more natural-looking)
Font Representations in Computers
There are two main ways to store and display fonts:
a. Bitmap Fonts (Raster Fonts)
•Each character is defined by a grid of 0s and 1s
•1s represent pixels that should be colored/displayed
•Easy and fast to display, but:
• Need more storage space for each size/style
• Scaling causes pixelation and jagged edges
• Only scalable in integer steps
b. Outline Fonts (Stroke or Vector Fonts)
•Characters defined by lines and curves
•More flexible: can be scaled smoothly
•Styles like bold or italic can be generated by modifying curves
•Takes more processing time (requires scan conversion to pixels)
glRasterPos2i(x, y);
•Sets the starting position on the screen (or window) for drawing bitmap text.
•(x, y) are window coordinates where the first character will be drawn.
•This is like placing the "text cursor" at a specific spot.
glutBitmapCharacter(GLUT_BITMAP_9_BY_15, text[k]);
•Draws the current character (text[k]) using the 9x15 pixel fixed-width bitmap font.
•Each time it's called:
•The character is drawn at the current raster position.
•Then the raster position moves to the right by the character’s width (9 pixels).
glutStrokeCharacter(font, character);
This function is used to draw characters using stroke (outline) fonts in OpenGL with the GLUT library.
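Putting glRasterPos2i and glutBitmapCharacter together, a typical way to draw a whole string at a window position (standard GLUT calls; the string and coordinates are just examples):
const char *text = "Hello";
int k;

glRasterPos2i(100, 50);                                   // place the "text cursor"
for (k = 0; text[k] != '\0'; k++)
    glutBitmapCharacter(GLUT_BITMAP_9_BY_15, text[k]);    // raster position advances 9 pixels per character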
Transformable (3D): bitmap fonts ❌, stroke (outline) fonts ✅
Attribute parameters in graphics systems control how graphics primitives (like lines, areas, or text) are displayed.
These parameters define properties such as color, size, and style. Basic attributes determine visual features (e.g.,
line thickness or text font), while special-condition attributes handle interactive features like visibility or
detectability.
There are two main methods to manage attributes:
1.Function Parameters: Include attributes directly in the function that draws the primitive.
2.State System: Maintain a list of current attribute values (state variables). Functions update these, and the
system uses them when rendering primitives.
A state system keeps track of these attribute settings and remains in a specific state until updated. OpenGL is an
example of a graphics library that uses this approach.
Color is a fundamental attribute for all graphics primitives. Users can select colors through numerical values,
menus, or sliders. In display systems, these color values are converted into signals—like electron beam
intensities for monitors or ink/pen choices for plotters.
In raster systems, color is often represented using RGB components (Red, Green, Blue). There are two main
ways to store color in the frame buffer:
1.Direct RGB Storage: RGB values are stored directly for each pixel. With 3 bits per pixel (1 per color), 8
colors are possible. Increasing to 6 or 24 bits per pixel greatly expands the color range (up to ~16.7 million
colors with 24 bits).
2.Color Table (Indexed Color): Instead of storing full RGB values, pixel values are indices pointing to a
separate color table. This saves memory but limits the number of simultaneously displayable colors.
Color Tables
A color lookup table (CLUT) or color map is a method used in computer graphics to manage color
representation efficiently. Instead of storing full RGB values for each pixel, the frame buffer stores indices that
refer to entries in a color table. Each entry in this table typically contains a 24-bit RGB color, allowing a palette
of 16.7 million possible colors.
In the example provided, each pixel uses an 8-bit index (allowing 256 possible values), and the color table maps
these to RGB values. This setup reduces memory requirements—only 1 MB for the frame buffer—while
allowing up to 256 simultaneous colors to be displayed.
Color tables are especially useful in:
•Design and visualization, where changing a table entry instantly updates all relevant pixels.
•Image processing, allowing for threshold-based color mapping.
•Systems with multiple displays, which may use different color tables.
Although this method limits simultaneous colors compared to true color systems, it offers flexibility and
efficiency, particularly in applications where full-color range is not essential. Some systems support both indexed
color and direct color storage for added versatility.
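A minimal sketch of how an indexed frame buffer resolves a pixel color through a lookup table (plain C; the array sizes and names are illustrative assumptions):
typedef struct { unsigned char r, g, b; } RGB;

static RGB colorTable[256];                    /* 256 entries, each holding a 24-bit RGB color */
static unsigned char indexBuffer[480][640];    /* the frame buffer stores 8-bit indices, not colors */

RGB displayedColor(int x, int y)               /* color actually shown at pixel (x, y) */
{
    return colorTable[indexBuffer[y][x]];
}
/* Changing one colorTable entry instantly recolors every pixel whose index refers to it. */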
Grayscale
In modern computer-graphics systems, RGB color functions are commonly used to generate grayscale shades
by setting equal values for red, green, and blue components. When all three RGB values are the same, the
resulting color is a shade of gray:
•Values near 0.0 produce dark gray (almost black). Values near 1.0 produce light gray (almost white).
Grayscale is widely used in:
•Enhancing black-and-white photographs, creating visualization effects, and simplifying images for analysis
This method allows for smooth transitions between light and dark areas without using full color.
Besides the RGB model, other three-component color models are also important in computer graphics:
•CMY/CMYK (Cyan, Magenta, Yellow, and Black): Commonly used for printers and color printing processes.
•HSL/HSV (Hue, Saturation, Lightness/Value): Used in color interfaces to describe colors based on perception,
such as brightness or vividness.
Color as a Physical and Psychological Phenomenon:
•Physically, color is electromagnetic radiation within a specific frequency and energy range.
•Perceptually, color depends on human vision and interpretation.
Important Terms:
•Intensity: A physical measure of light energy emitted over time in a specific direction.
•Luminance: A psychological measure of perceived brightness.
These concepts bridge the gap between the physics of light and the psychology of color perception, providing
more accurate and flexible tools for color handling in graphics.
OpenGL Color Modes and Functions
1. RGB and RGBA Modes:
•RGB Mode: Uses Red, Green, and Blue components to define color.
•RGBA Mode: Adds an Alpha component for transparency and blending.
•Use glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB) to set RGB mode.
•Use glutInitDisplayMode(GLUT_RGBA) to enable alpha blending.
2. glColor Function:
•Sets the current color for primitives.
•Syntax examples:
•glColor3f(0.0, 1.0, 1.0); sets color to cyan (RGB)
•glColor4f(r, g, b, a); includes alpha for blending.
•Data types: float, int, byte (e.g., glColor3ub(0, 255, 255))
Color-Index Mode:
•Uses color tables instead of RGB values.
•Set color by:
glIndexi(index);
•Example of setting a table value:
glutSetColor(index, red, green, blue);
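A short sketch of the two approaches (standard OpenGL/GLUT calls; the table index 196 is just an example). A program normally runs in one mode or the other, not both:
/* RGB mode */
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glColor3f(0.0, 1.0, 1.0);              // cyan becomes the current color for subsequent primitives

/* Color-index mode */
glutSetColor(196, 1.0, 0.0, 0.0);      // store red in color-table entry 196
glIndexi(196);                         // select that table entry as the current color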
Point Attributes
In OpenGL, points can have two key attributes: color and size. These are managed using a state system,
meaning the currently set values affect all subsequently defined points until changed.
•Color can be specified using RGB/RGBA values or by an index in a color table (in color-index mode).
•Size is set in integer multiples of pixel size, so a larger point appears as a square block of pixels on raster
systems.
In OpenGL, the color and size of a point are set using the state system and affect how points are displayed:
•Color is set using glColor (for RGB values) or glIndex (for color index mode).
•Size is set with glPointSize(size), where size is a positive floating-point value.
•size = 1.0 → 1×1 pixel
•size = 2.0 → 2×2 pixels
•size = 3.0 → 3×3 pixels
•If antialiasing is enabled, the edges of the point are smoothed.
•The default point size is 1.0.
•glColor can appear inside or outside a glBegin/glEnd block, but glPointSize must be called outside it.
Example:
glColor3f(1.0, 0.0, 0.0); // Red color
glBegin(GL_POINTS);
glVertex2i(50, 100); // Default size (1x1)
glEnd();
glPointSize(2.0);
glColor3f(0.0, 1.0, 0.0); // Green color
glBegin(GL_POINTS);
glVertex2i(75, 150); // 2x2 point
glEnd();
glPointSize(3.0);
glColor3f(0.0, 0.0, 1.0); // Blue color
glBegin(GL_POINTS);
glVertex2i(100, 200); // 3x3 point
glEnd();
Line Attributes
In computer graphics, a straight-line segment is defined using three basic attributes: color, width, and style.
•Line Color: Applied uniformly across all graphics primitives using a common function.
•Line Width: Depends on device capabilities. On raster displays, thick lines are rendered as adjacent parallel
pixels (multiples of standard width), while devices like pen plotters may use different pen sizes.
•Line Style: Includes options like solid, dashed, or dotted lines. These are created by modifying line-drawing
algorithms (e.g., Bresenham's algorithm) to control the pattern of visible segments and gaps.
Additional effects like pen or brush strokes can also be applied for artistic rendering.
In OpenGL, the appearance of a straight-line segment is controlled using three key attributes: line color, line
width, and line style.
•Line Width: Set using glLineWidth(width); where width is a float rounded to the nearest non-negative integer.
A value of 0.0 defaults to standard width (1.0). Antialiasing allows smoother and fractional-width lines, but
hardware support may vary.
•Line Style: Set using glLineStipple(repeatFactor, pattern);, where:
•pattern is a 16-bit integer (e.g., 0x00FF) indicating dash/dot pattern.
•repeatFactor specifies how many times each bit is repeated.
•Line stippling must be activated using glEnable(GL_LINE_STIPPLE); and can be disabled using
glDisable(GL_LINE_STIPPLE);.
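A short sketch using these calls (standard OpenGL functions; the pattern 0x00FF gives 8 pixels drawn followed by 8 skipped, i.e., a dashed line):
glLineWidth(3.0);                      // subsequent lines are drawn about 3 pixels wide

glEnable(GL_LINE_STIPPLE);             // activate line stippling
glLineStipple(1, 0x00FF);              // repeat factor 1, dashed pattern

glBegin(GL_LINES);
    glVertex2i(20, 100);
    glVertex2i(220, 100);
glEnd();

glDisable(GL_LINE_STIPPLE);            // back to solid lines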
Line-Drawing Algorithms
• Line Definition:A line segment is defined by the coordinates of its two endpoints.
•In Raster Display Process:
•Endpoints are projected to integer screen coordinates.
•Nearest pixel positions along the line path are determined.
•Line color is loaded into the frame buffer at these pixel positions.
•Digitization Effect:
•Rounding (e.g., (10.48, 20.51) → (10, 21)) causes approximation.
•Results in a stair-step appearance ("the jaggies"), especially for non-horizontal/vertical lines.
•Impact:
•More visible on low-resolution screens.
•Reduced by using high-resolution displays or pixel intensity adjustments (antialiasing techniques).
1. DDA (Digital Differential Analyzer) Algorithm
Concept:
•Sample at unit intervals.
•Calculate corresponding pixel positions.
DDA (Digital Differential Analyzer) is an algorithm used in computer graphics for scan-converting and drawing
straight lines.
It works by calculating pixel positions along a line by sampling at unit intervals in one coordinate (either x or y)
and computing the corresponding value of the other coordinate using the line's slope.
The DDA algorithm incrementally generates points between two specified endpoints, using simple addition
operations and rounding to the nearest integer pixel location.
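A minimal DDA sketch in C (setPixel stands for the frame-buffer write assumed earlier; endpoints are taken as non-negative integer pixel positions):
#include <stdlib.h>    /* for abs() */

void lineDDA(int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = (float)x0, y = (float)y0;

    if (abs(dx) > abs(dy))              /* sample along the axis of greatest change */
        steps = abs(dx);
    else
        steps = abs(dy);

    if (steps == 0) {                   /* both endpoints coincide */
        setPixel(x0, y0);
        return;
    }
    xIncrement = (float)dx / (float)steps;
    yIncrement = (float)dy / (float)steps;

    setPixel((int)(x + 0.5f), (int)(y + 0.5f));      /* round to the nearest pixel */
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel((int)(x + 0.5f), (int)(y + 0.5f));
    }
}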
2. Bresenham Line-Drawing Algorithm
Purpose: To draw a straight line between two points on a grid (like pixels on a
screen) using only integer calculations (no floating-point arithmetic).
Step-01: Calculate ΔX and ΔY from the two endpoints.
Step-02: Calculate the initial decision parameter P0 = 2ΔY – ΔX
Step-03:Suppose the current point is
(Xk, Yk)
and the next point is (Xk+1, Yk+1).
Find the next point depending on the value of decision parameter Pk.
Follow the below two cases:
Case-01: If Pk < 0, the next point is (Xk+1, Yk) and Pk+1 = Pk + 2ΔY.
Case-02: If Pk >= 0, the next point is (Xk+1, Yk+1) and Pk+1 = Pk + 2ΔY – 2ΔX.
Step-04:
Keep repeating Step-03 until the end point is reached (for a line sampled along x, this takes ΔX iterations).
Problem-01:Calculate the points between the starting coordinates (9, 18) and ending coordinates (14, 22).
Solution-
Given- Starting coordinates = (X0, Y0) = (9, 18)
Ending coordinates = (Xn, Yn) = (14, 22)
Step-01:
Calculate ΔX and ΔY from the given input. ΔX = Xn – X0 = 14 – 9 = 5
ΔY = Yn – Y0 = 22 – 18 = 4
Step-02:
Calculate the initial decision parameter. P0 = 2ΔY – ΔX = 2 × 4 – 5 = 3
So, the initial decision parameter P0 = 3
Step-03:
As P0 >= 0, case-02 is satisfied.
Thus, the next point is (10, 19) and P1 = P0 + 2ΔY – 2ΔX = 3 + 8 – 10 = 1.
Similarly, Step-03 is repeated until the end point (14, 22) is reached, which takes four more iterations.
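A sketch of the same procedure in C for lines with slope between 0 and 1 (setPixel is the assumed frame-buffer write from earlier):
void lineBresenham(int x0, int y0, int xEnd, int yEnd)   /* assumes 0 <= slope <= 1 and x0 < xEnd */
{
    int dx = xEnd - x0, dy = yEnd - y0;
    int p = 2 * dy - dx;                       /* initial decision parameter P0 */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x = x0, y = y0;

    setPixel(x, y);
    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;                        /* case-01: keep the same y */
        else {
            y++;                               /* case-02: step up in y as well */
            p += twoDyMinusDx;
        }
        setPixel(x, y);
    }
}
/* For (9, 18) to (14, 22) this plots (9,18), (10,19), (11,20), (12,20), (13,21), (14,22),
   using decision parameters 3, 1, -1, 7, 5 as in the worked example above. */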
Circle-Generating Algorithms
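A widely used method is the midpoint circle algorithm, which exploits the circle's eight-way symmetry: pixel positions are computed for one octant and mirrored into the other seven. A minimal sketch (plain C; setPixel is the assumed frame-buffer write and circlePlotPoints an illustrative helper name):
void circlePlotPoints(int xc, int yc, int x, int y)   /* mirror one computed point into all eight octants */
{
    setPixel(xc + x, yc + y);  setPixel(xc - x, yc + y);
    setPixel(xc + x, yc - y);  setPixel(xc - x, yc - y);
    setPixel(xc + y, yc + x);  setPixel(xc - y, yc + x);
    setPixel(xc + y, yc - x);  setPixel(xc - y, yc - x);
}

void circleMidpoint(int xc, int yc, int radius)
{
    int x = 0, y = radius;
    int p = 1 - radius;                 /* initial decision parameter */

    circlePlotPoints(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;             /* midpoint inside the circle: keep y */
        else {
            y--;
            p += 2 * (x - y) + 1;       /* midpoint outside: step y down */
        }
        circlePlotPoints(xc, yc, x, y);
    }
}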
Ellipse-Generating Algorithms
The equation of the ellipse can be written in terms of the ellipse center coordinates (xc, yc) and parameters
rx and ry as:
((x – xc) / rx)² + ((y – yc) / ry)² = 1
Midpoint Ellipse Drawing Algorithm
In raster graphics, pen and brush shapes are defined using pixel masks,
which specify the pattern of pixels to be drawn along a line. For example,
a rectangular pen moves along the line path, applying the mask at each
step. To avoid redrawing pixels, the system tracks and merges horizontal
pixel spans for each scan line. Line thickness can be adjusted by
changing the mask size—a 2×2 mask makes a thinner line, while a 4×4
mask creates a thicker one. To add patterns, the pattern can be combined
with the pen mask, allowing for custom line textures and styles.
Curve Attributes :
Curve-drawing in raster graphics can be adapted to display different widths and styles, just like line drawing.
To create thick curves, we use vertical pixel spans where the slope is ≤1 and horizontal spans where the
slope >1. Another method is to draw two parallel curves on either side of the original path, separated by half
the desired width. For example, to draw a thick circular arc, we use two concentric arcs.
Pixel masks (like 11100) can be used to add dashed or dotted patterns to curves, but the length of dashes
varies with curve slope unless adjusted. To keep dashes uniform, we can draw them along equal angular
intervals.
Pen or brush shapes (e.g., rectangular or circular) can also be replicated along a curve's path to draw thick or
styled curves. For even thickness, a circular pen or a rotated pen that aligns with the curve's slope is preferred.
General Scan-Line Polygon-Fill Algorithm
Scan-Line Fill for Convex Polygons
In a convex polygon, only one interior span exists per scan line.
•For each scan line crossing the polygon, we only need to find two edge intersections (left and right boundaries).
•Vertex crossings are treated as single intersection points.
•If a scan line intersects a single vertex (like an apex), we plot only that point.
•The algorithm is simpler than for general polygons because:
• There are no complex intersections or multiple spans.
• Fewer checks are needed for edge processing.
•Some systems simplify further by using triangles only, which makes edge processing even easier (only three
edges to handle).
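A compact sketch of the convex case (plain C; setPixel is the assumed frame-buffer write and the vertex type matches the earlier polygon sketch): for each scan line, intersect it with every edge, keep the leftmost and rightmost intersection, and fill the span between them.
typedef struct { double x, y; } Point2;     /* same vertex type as in the earlier polygon sketch */

void fillConvexPolygon(const Point2 *v, int n, int yMin, int yMax)
{
    int y, i, x;
    for (y = yMin; y <= yMax; y++) {
        double xLeft = 1e30, xRight = -1e30;
        for (i = 0; i < n; i++) {
            Point2 a = v[i], b = v[(i + 1) % n];
            /* Half-open test so a vertex shared by two edges is counted only once. */
            if ((y >= a.y && y < b.y) || (y >= b.y && y < a.y)) {
                double xi = a.x + (y - a.y) * (b.x - a.x) / (b.y - a.y);
                if (xi < xLeft)  xLeft  = xi;
                if (xi > xRight) xRight = xi;
            }
        }
        if (xLeft <= xRight)                /* a convex polygon has at most one span per scan line */
            for (x = (int)(xLeft + 0.5); x <= (int)(xRight + 0.5); x++)
                setPixel(x, y);
    }
}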