
BCA – V Semester PBCA501: Computer Graphics

Max Marks: 100 (ESE: 70, CIA: 30) Passing marks: 40

Question Paper pattern for End Semester Exam (ESE)

Max Marks: 70

Part-A: - will contain 12 very short questions of 1 mark each (attempt any 10).

Part-B: - will contain 4 questions (1 from each unit) of 5 marks each.

Part-C: - will contain 4 questions (1 from each unit with internal choice) of 10 marks each.

UNIT-I

Introduction to Computer Graphics: Definition, Application areas of Computer graphics, Graphical


user interface, Cathode ray tubes, Random scan displays, Raster scan displays, Colour CRT monitors,
Flat panel displays (Plasma Panels, Liquid crystal displays, Electroluminescent displays, etc.),
Graphics software (GKS, PHIGS), Color Models (RGB, CMYK, HSV), Color Lookup table.

UNIT-II

Raster Graphics Algorithms: Line drawing algorithms (DDA, Bresenham's algorithm), Circle and
Ellipse drawing algorithms, Filling (Scan-converting polygon filling, inside-outside tests, boundary fill,
flood fill and area fill algorithm). Transformations: 2-D transformations (Translation, Rotation,
Reflection, shearing, scaling), Homogeneous coordinate representation, 3-D transformations.

UNIT-III

Two dimensional Clipping and visible surface detection methods: Viewing pipeline, window and
viewport, Sutherland-Cohen Line Clipping algorithm, Cyrus-beck algorithm, classification of visible
surface detection algorithm, Backface algorithm, Depth sorting method, Area subdivision method.

UNIT-IV

Introduction to Digital Image Processing: Definition, application areas, File formats, Basic digital image
processing techniques - Antialiasing, Convolutions, Thresholding, Image Enhancement.

Notes:-
UNIT I

Introduction to Computer Graphics


Computer graphics is the creation of pictures with the help of a computer. The technology is used to
represent views or ideas in pictorial form so that they are easy to understand. Graphics or images can
be designed on one of two planes: 2D or 3D.

A 2-dimensional graphics plane has only x and y directions, whereas a 3-dimensional graphics plane
has x, y and z directions.
Figure 1: 3D Bar Chart. Figure 2: 2D Bar Chart.

Types of computer graphics


1. Interactive computer graphics: - the user has some control over the picture.
2. Non-interactive computer graphics: - the user cannot make any change to the picture.
3. Generative computer graphics: - the user can generate pictures using geometrical shapes.
4. Image analysis: - existing pictures and images are analysed to extract information.

Application Areas of Computer Graphics:


- Entertainment: Video games, movies, animation, and virtual reality.

- Education and Training: Simulations, e-learning, and computer-aided instruction.

- Medicine: Medical imaging, surgical simulations, and educational models.

- Engineering and Design: CAD (Computer-Aided Design), CAM (Computer-Aided Manufacturing),


and virtual prototyping.

- Business: Data visualization, information graphics, and presentation graphics.

- Art and Design: Digital art, graphic design, and illustration.

- Scientific Visualization: Visualizing complex scientific data and simulations.

Computer Graphics Software


1. Graphics Kernel System (GKS): - The first graphics software standard, adopted by ISO and
   ANSI. It was developed for 2D graphics; a 3D version was developed later. GKS has two
   major advantages over earlier graphics packages. First, it provides real portability of
   software among different hardware systems. Second, programs written against it are
   device independent.
2. Programmer's Hierarchical Interactive Graphics System (PHIGS): - An updated version of
   GKS. It has capabilities for object modelling, colour specification and picture manipulation,
   but it lacked support for advanced 3D surface rendering, so an updated version was
   developed called PHIGS+.

Graphical User Interface (GUI):


Some time ago, computer programs worked through typed text commands, which were very hard
to learn and remember. They were therefore replaced by graphical interfaces, which are much
easier to learn and use: execution just needs a click on a graphical button.

One reason behind the popularity of Windows is that it was among the first operating systems to
present commands in graphical form. A GUI allows users to interact with electronic devices using
graphical icons and visual indicators rather than text-based interfaces, typed command labels, or
text navigation.

Components of graphical user interface

1. Desktop: - the area of the display screen where the user can work.
2. Window: - the area of an application on the screen.
3. Menu: - a collection of options.
4. Graphics pointer: - the mouse cursor, a symbol that appears on the screen and can be
   moved to select an object or an option in a menu.
5. Pointing device: - a device, such as a mouse or touchpad, used to move the cursor on the
   screen and to select menus and icons on the graphics screen.

Graphical Devices
A graphics system used to display pictures basically needs two things: hardware and software.
Hardware includes input, output and processing devices. The standard input/output devices used
in computer graphics systems are as follows:
Input Devices: keyboard, mouse, scanner (OMR, OCR, MICR, BCR), light pen, joystick, data glove,
touch screen, microphone, web camera.

Output Devices: monitor (CRT: random scan and raster scan displays; DVST; flat panel displays:
emissive, e.g. LED, and non-emissive, e.g. LCD), printer (impact and non-impact), plotter, sound
system.

Figure 3: Graphical Devices

Input Device: - A device used to transfer/insert data into the computer is known as an
input device. Input devices are classified by PHIGS and GKS into six major
categories: -
1. Locator: - A device that returns a single coordinate position (x, y), for example on
   the click of a button. A mouse, joystick or touchpad can be programmed to send
   a coordinate position.
2. Stroke: - A device that returns a sequence of coordinate positions continuously
   as it moves, e.g. a mouse or touchpad.
3. String: - A device that takes text input and sends it to the CPU, such as a keyboard.
4. Pick: - A device used to select an object on the screen.
5. Valuator: - A device used to send scalar data to the CPU.
6. Choice: - A device used to select a menu item or other option on the screen.
Here is a description of some commonly used input devices: -
1. Keyboard: - The most commonly used input device for entering text strings. A
   normal keyboard has 102-108 keys, including alphabets, numbers, special
   symbols, and function, control and arrow keys. It has two circuit plates that are
   separated from each other; when a specific key is pressed, the plates touch and
   complete the circuit, sending a signal to the CPU. The operating system processes
   the signal and looks up the corresponding character in the symbol or ASCII table.
2. Mouse: - A small, handheld cursor and pointing device. It has 2-3 buttons on top
   that are used to execute tasks.
3. Touchpad: - An input and pointing device commonly used in laptops. It moves
   the cursor using motions of the user's finger.
4. Joystick: - A handheld stick used to control video games, with one or more push
   buttons. Some joysticks select a screen position by moving the stick; others
   respond to pressure on the stick.
5. Scanner: - A device that scans images, printed text and handwriting. Many kinds
   of scanners are available, such as OMR, OCR, MICR and barcode readers. OMR is
   used to check examination OMR sheets or any other OMR form. A barcode
   scanner reads the information encoded in a barcode. MICR scans magnetic-ink
   characters on bank cheques and demand drafts and helps detect fraud.
6. Touch Screen: - A display device that can detect the location of touches within
   the display area, allowing the display to be used as an input device.
7. Light Pen: - A pencil-shaped input device with a light-sensitive tip, used to select
   screen positions. It allows the user to point at displayed objects or draw on the
   screen. It works by sensing the light of the pixels on a CRT display, so it does not
   work with LCD screens, projectors or other display devices.
8. Web Cam: - Made of a lens, an image sensor and some supporting electronics.
   The refresh rate is around 25 frames per second, and it can record roughly 25-50
   images per second.
9. Microphone: - Converts sound into an electrical signal. It is a device made to
   capture sound waves in air, water or hard materials and translate them into
   electronic signals.

Output Device: - A device used to show the result of input data after processing is
known as an output device. Here is a brief explanation of some commonly used output devices: -
1. Monitor: - The primary output device. Many kinds of monitors are available.
Monitors are classified as CRT monitors (random scan display and raster scan display),
colour CRT monitors, and flat panel displays (emissive, e.g. LED; non-emissive, e.g. LCD).

Figure 4: Monitor

Cathode Ray Tubes (CRTs):

- Random Scan Displays: Also known as vector displays, they draw images line by line, directly
controlling the electron beam to create lines and shapes. Used in early CAD systems.

- Raster Scan Displays: The electron beam sweeps across the screen in a pattern, creating an image
composed of pixels. Widely used in standard computer monitors and TVs.

Colour CRT Monitors:

These use three electron guns (red, green, blue) and a phosphorescent screen to produce colour
images by varying the intensity of each primary colour.

Flat Panel Displays:

- Plasma Panels: Use small cells containing electrically charged ionized gases to produce images.
Known for high brightness and contrast ratios.

- Liquid Crystal Displays (LCDs): Use liquid crystals modulated by electric fields to control light
passage and create images. Widely used in monitors, TVs, and mobile devices.

- Electroluminescent Displays: Emit light in response to an electric current or field. Used in


industrial and military applications due to their ruggedness and clarity.

2. Printer: - A device used to print images and pictures on paper, so that the image or
   photograph remains visible until it is destroyed. The quality of the picture depends on
   the output device: its size and the number of dots per inch and lines per inch that can
   be placed on the paper. Printers and plotters are the main hardcopy devices. A printer
   produces output by one of two methods: impact and non-impact.
   Impact: a hammer presses an inked ribbon onto the paper. Dot-matrix and line
   printers are impact printers.
   Non-impact: uses laser or inkjet techniques. An inkjet printer is an example of a
   non-impact printer.

Colour Models
RGB (Red, Green, Blue):

- A colour model based on the additive colour theory, where colours are created by combining red,
green, and blue light. Commonly used in digital displays.

CMYK (Cyan, Magenta, Yellow, Black):

- A subtractive colour model used in colour printing. It represents colours by the absorption and
reflection of light, combining cyan, magenta, yellow, and black inks.

HSV (Hue, Saturation, Value):

- A cylindrical color model that describes colors in terms of their hue, saturation, and brightness.
Useful for color selection in graphics design and image editing.

Color Lookup Table (CLUT):

- A mechanism to map a set of color indices to RGB values, allowing for efficient color management
and image processing. Commonly used in indexed color images.
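As a small illustrative sketch (the table contents here are made up), an indexed image can be expanded to RGB through a lookup table like this:

```python
# Hypothetical 3-entry colour lookup table: index -> RGB triple.
clut = {0: (0, 0, 0), 1: (255, 0, 0), 2: (0, 255, 0)}

# An indexed image stores one small integer per pixel instead of a full
# RGB triple; the CLUT maps each index to its colour at display time.
indexed = [[0, 1],
           [2, 1]]

rgb = [[clut[i] for i in row] for row in indexed]
```

Storing one byte per pixel plus a small table is far cheaper than three bytes per pixel, which is why indexed colour was common on early displays.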
Unit II

Raster Graphics Algorithms

Raster graphics algorithms are fundamental to computer graphics and involve the techniques
for drawing basic shapes and performing geometric transformations on the screen. Below are
the key concepts and algorithms related to raster graphics.

1. Line Drawing Algorithms

a) Digital Differential Analyzer (DDA) Algorithm

 Purpose: To draw a line between two points in a pixel-based display.


 Method:
o The DDA algorithm calculates intermediate points along a line using floating-
point arithmetic.
o It incrementally calculates the y-coordinate for each x-coordinate (or vice
versa) and rounds the result to the nearest integer.
	Steps:

1. Calculate the slope m of the line: m = (y2 − y1) / (x2 − x1).
2. Increment the x-coordinate by 1 (for |m| ≤ 1) or the y-coordinate by 1 (for |m| > 1).
3. Compute the corresponding y (or x) value using the equation of the line.
4. Plot the pixel at the calculated coordinates.
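A minimal Python sketch of the DDA steps above (function and variable names are illustrative):

```python
def dda_line(x1, y1, x2, y2):
    """Return the pixel coordinates of a line from (x1, y1) to (x2, y2)."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))          # one unit step along the major axis
    if steps == 0:
        return [(x1, y1)]
    x_inc, y_inc = dx / steps, dy / steps  # floating-point increments
    x, y = float(x1), float(y1)
    points = []
    for _ in range(steps + 1):
        points.append((round(x), round(y)))  # round to the nearest pixel
        x += x_inc
        y += y_inc
    return points
```

Note the floating-point additions and rounding on every step; Bresenham's algorithm below removes exactly this cost.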

b) Bresenham's Line Algorithm

 Purpose: Efficiently draw a line between two points with integer arithmetic.
 Method:
o Bresenham's algorithm avoids floating-point operations and rounding by using
an error term to determine when to increment the y-coordinate as x
increments.
 Steps:

1. Initialize the decision parameter p and the starting point.


2. Iterate over x-coordinates, calculating the corresponding y-coordinate based
on the decision parameter.
3. Adjust the decision parameter to minimize the distance from the actual line.
4. Plot the pixel at the calculated coordinates.
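A sketch of Bresenham's algorithm, restricted for simplicity to the first octant (0 ≤ slope ≤ 1); only integer arithmetic is used:

```python
def bresenham_line(x1, y1, x2, y2):
    """Integer-only Bresenham line for 0 <= slope <= 1, x1 < x2."""
    dx, dy = x2 - x1, y2 - y1
    p = 2 * dy - dx          # initial decision parameter
    x, y = x1, y1
    points = []
    for _ in range(dx + 1):
        points.append((x, y))
        x += 1
        if p < 0:            # stay on the same row
            p += 2 * dy
        else:                # step up one row
            y += 1
            p += 2 * (dy - dx)
    return points
```

A full implementation handles the other octants by swapping axes and negating steps.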

2. Circle and Ellipse Drawing Algorithms

a) Midpoint Circle Drawing Algorithm

 Purpose: To draw a circle centered at a given point with a specified radius.


 Method:
o This algorithm uses symmetry and a decision parameter to plot points in each
octant of the circle.
 Steps:
1. Initialize the starting point on the circle (radius at 0 degrees).
2. Use a decision parameter to determine whether the next point is directly above
or diagonally above the current point.
3. Use the symmetry of the circle to plot points in all octants.
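The steps above can be sketched as follows (names are illustrative; the eight reflections implement the octant symmetry):

```python
def midpoint_circle(xc, yc, r):
    """Pixels of a circle of radius r centred at (xc, yc), via 8-way symmetry."""
    x, y = 0, r
    p = 1 - r                 # initial decision parameter
    points = set()
    while x <= y:
        # reflect (x, y) into all eight octants
        for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((xc + dx, yc + dy))
        x += 1
        if p < 0:             # midpoint inside the circle: keep y
            p += 2 * x + 1
        else:                 # midpoint outside: move diagonally
            y -= 1
            p += 2 * (x - y) + 1
    return points
```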

b) Midpoint Ellipse Drawing Algorithm

 Purpose: To draw an ellipse centered at a given point with specified semi-major and
semi-minor axes.
 Method:
o Similar to the circle algorithm, it uses decision parameters and symmetry to
efficiently plot the ellipse.
 Steps:

1. Initialize decision parameters for the two regions (where the slope is <1 and
>1).
2. Plot points based on the decision parameters, adjusting for the ellipse's
different radii.

3. Filling Algorithms

a) Scan-Conversion Polygon Filling

 Purpose: To fill a polygon with a solid color or pattern by determining the pixels
inside the polygon.
 Method:
o The polygon is scanned line-by-line (scanlines), and the intersections of the
polygon edges with each scanline are determined.
o Pixels between pairs of intersections are filled.

b) Inside-Outside Tests

 Purpose: To determine whether a given point lies inside or outside a polygon.


 Method:
o Various methods, like the even-odd rule or winding number, are used to
count the number of edge crossings to decide the point's position relative to
the polygon.
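A sketch of the even-odd rule: cast a ray from the point to the right and count how many polygon edges it crosses (odd = inside). Names are illustrative:

```python
def point_in_polygon(px, py, vertices):
    """Even-odd rule: toggle 'inside' at every edge crossing of a rightward ray."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # does this edge straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:          # crossing lies to the right of the point
                inside = not inside
    return inside
```

Horizontal edges never satisfy the straddle test, so the division is always safe.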

c) Boundary Fill Algorithm

 Purpose: To fill a region bounded by a specific color.


 Method:
o The algorithm starts at a seed point inside the boundary and spreads outward,
filling until it reaches the boundary.
o Recursive and non-recursive (using stacks) versions exist.

d) Flood Fill Algorithm

 Purpose: To fill a connected region with a specified color.


 Method:
o Similar to boundary fill but does not require a predefined boundary color.
o The algorithm spreads from a seed point to fill all connected pixels of the
same color.
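A non-recursive (stack-based) flood fill over a 2D list of colour values, as a sketch; 4-connected neighbours are used:

```python
def flood_fill(grid, x, y, new_color):
    """Iterative 4-connected flood fill starting from seed (x, y)."""
    old_color = grid[y][x]
    if old_color == new_color:
        return grid
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        # fill only in-bounds pixels that still carry the seed's colour
        if (0 <= cy < len(grid) and 0 <= cx < len(grid[0])
                and grid[cy][cx] == old_color):
            grid[cy][cx] = new_color
            stack.extend([(cx + 1, cy), (cx - 1, cy),
                          (cx, cy + 1), (cx, cy - 1)])
    return grid
```

Boundary fill is the same loop with the condition changed to "not the boundary colour and not already filled".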

4. Transformations

a) 2-D Transformations

 Translation:
o Moves every point of an object by the same distance in a given direction.
1 0 Tx
o Transformation matrix: 0 1 Ty
0 0 1
 Rotation:
o Rotates an object about a fixed point (origin) by a specified angle.
cos θ −sinθ 0
o Transformation matrix: sin θ cosθ 0
0 0 1
 Scaling:
o Resizes an object by scaling factors along the x and y axes.
Sx 0 0
o Transformation matrix: 0 Sy 0
0 0 1
 Shearing:
o Distorts an object by shifting its points along one axis, creating a slanted
shape.
1 Shx 0
o Transformation matrix: Shy 1 0
0 0 1
 Reflection:
o Flips an object across a specified axis.
1 0 0
o For x-axis reflection: 0 −1 0
0 0 1

b) Homogeneous Coordinate Representation

 Used to represent transformations in a unified way.


 A point (x, y) is represented as (x, y, 1) in homogeneous coordinates, which allows
for the combination of transformations using matrix multiplication.
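The benefit of homogeneous coordinates is that transformations compose by matrix multiplication. A sketch (helper names are illustrative) of rotating a point 90° about a pivot other than the origin, built from translate-rotate-translate:

```python
import math

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    """Apply a homogeneous transform to the point (x, y, 1)."""
    p = [x, y, 1]
    res = [sum(m[i][k] * p[k] for k in range(3)) for i in range(3)]
    return res[0], res[1]

# Rotate 90 degrees about the pivot (1, 1): translate the pivot to the
# origin, rotate, translate back -- all composed into one matrix.
m = mat_mul(translate(1, 1), mat_mul(rotate(math.pi / 2), translate(-1, -1)))
x, y = apply(m, 2, 1)   # the point (2, 1) maps to (1, 2)
```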

c) 3-D Transformations

 Translation:
o Moves an object in 3D space along the x, y, and z axes.
1 0 0 Tx
o Transformation matrix: 0 1 0 Ty
0 0 1 Tz
0 0 0 1
	Rotation:
o Rotates an object about a specified axis (x, y, or z).
o Rotation about the x-axis:
1 0 0 0
0 cos θ −sin θ 0
0 sin θ cos θ 0
0 0 0 1
	Scaling:
o Resizes an object by scaling factors along the x, y, and z axes.
o Transformation matrix:
Sx 0 0 0
0 Sy 0 0
0 0 Sz 0
0 0 0 1

 Reflection and Shearing:


o Similar concepts to 2D but extended into the third dimension.

Summary

Raster graphics algorithms are essential for rendering basic shapes and performing geometric
transformations on a display screen. Understanding these algorithms and their applications in
2D and 3D transformations is crucial for any competitive exam related to computer graphics.

Unit III

Two-Dimensional Clipping and Visible Surface Detection Methods

In computer graphics, clipping refers to the process of confining drawing operations to a


specified region, usually called a clipping window. Visible surface detection involves
determining which parts of objects are visible in a scene when viewed from a particular
perspective.

1. Viewing Pipeline

The viewing pipeline is a series of steps that transforms 3D objects into a 2D image on the
screen.

1. Modeling Transformation: Converts objects from model coordinates to world


coordinates.
2. Viewing Transformation: Maps world coordinates to viewing coordinates, setting up
a view volume.
3. Projection Transformation: Projects 3D objects onto a 2D plane (parallel or
perspective projection).
4. Clipping: Trims parts of objects outside the viewing volume or viewport.
5. Viewport Transformation: Maps the 2D viewing plane to a viewport on the screen.

2. Window and Viewport

 Window: A rectangular region in world coordinates that defines the area to be


displayed.
 Viewport: A rectangular region on the display device where the window is mapped.

3. Line Clipping Algorithms

a) Sutherland-Cohen Line Clipping Algorithm

 Purpose: To clip a line segment against a rectangular clipping window.


 Method:
1. Divide: The plane is divided into nine regions around the window (inside, top,
bottom, left, right, and corners).
2. Assign Outcodes: Each endpoint of the line is assigned a 4-bit outcode
representing its position relative to the clipping window.
3. Test:
 If both endpoints of the line have an outcode of 0000 (inside the
window), the line is fully visible.
 If the logical AND of both outcodes is not 0000, the line is completely
outside the window.
 If partially inside, the line is clipped where it intersects the window.
4. Clip: Calculate intersection points with the window's edges and draw the
visible portion.
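The outcode assignment and trivial accept/reject tests can be sketched like this (the intersection step is omitted; names are illustrative):

```python
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """4-bit region code for a point relative to the clipping window."""
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def trivial_test(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Return 'accept', 'reject', or 'clip' for a line segment."""
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
    if c1 == 0 and c2 == 0:
        return "accept"      # both endpoints inside the window
    if c1 & c2 != 0:
        return "reject"      # both endpoints share an outside region
    return "clip"            # intersections with window edges needed
```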

b) Cyrus-Beck Line Clipping Algorithm

 Purpose: Generalized algorithm for clipping a line segment against a convex


polygon.
 Method:
1. Parametric Representation: The line segment is represented parametrically
as P(t) = P1 + t(P2 − P1), where 0 ≤ t ≤ 1.
2. Calculate Intersection: Determine intersections of the line segment with the
polygon's edges by varying t.
3. Determine Visible Segment: The visible segment is determined by finding
the values of t where the line enters and exits the polygon.

4. Classification of Visible Surface Detection Algorithms

Visible surface detection algorithms are used to determine which surfaces or parts of surfaces
are visible from a given viewpoint.
a) Object-Space Methods

 Concept: Compare objects directly in the object space to determine which surfaces
are visible.
 Example: Backface Culling.

b) Image-Space Methods

 Concept: Determine visibility pixel by pixel on the image plane.


 Example: Depth-Buffer (Z-Buffer) Method.

c) Hybrid Methods

 Concept: Combine features of both object-space and image-space methods.


 Example: Area Subdivision Method.

5. Visible Surface Detection Algorithms

a) Backface Culling Algorithm

 Purpose: To quickly eliminate surfaces of objects that are not visible to the viewer.
 Method:
o For a polygonal surface, if the dot product of the surface normal and the
viewing direction is positive, the surface is facing away and is not visible (a
backface).
o Such faces are culled (removed) from the rendering pipeline.
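The test above reduces to one dot product per face. A sketch, assuming the viewing direction points from the camera toward the scene:

```python
def is_backface(normal, view_dir):
    """True if the face's normal points away from the viewer (a backface).

    normal and view_dir are 3-component vectors; view_dir points from
    the camera into the scene, so a positive dot product means the face
    is oriented away from the camera.
    """
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return dot > 0
```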

b) Depth-Sorting Method (Painter’s Algorithm)

 Purpose: To render polygons in a back-to-front order, ensuring that closer objects


obscure farther ones.
 Method:
1. Sort: Sort all surfaces by depth (z-coordinate).
2. Render: Start rendering surfaces from the farthest to the nearest.
3. Resolve Overlaps: Handle cases where surfaces overlap by splitting them and
sorting again.
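The sort step can be sketched as follows, assuming each surface already carries a precomputed average depth (larger z = farther from the viewer); overlap resolution is omitted:

```python
def painters_order(surfaces):
    """Back-to-front ordering for the painter's algorithm."""
    return sorted(surfaces, key=lambda s: s["depth"], reverse=True)

# Rendering the returned list in order lets nearer surfaces
# overwrite farther ones in the frame buffer.
```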

c) Area Subdivision Method

 Purpose: To recursively divide the image space (screen) into smaller areas until they
are easily resolved as visible or hidden.
 Method:
1. Classify Areas: Determine if an area contains parts of one surface, is empty,
or contains multiple surfaces.
2. Subdivide: If multiple surfaces are involved, subdivide the area into smaller
regions.
3. Render: Continue subdividing until regions are either fully visible or fully
obscured, then render.

Summary
In computer graphics, two-dimensional clipping and visible surface detection are crucial
processes for rendering scenes correctly. Clipping ensures that only the visible portions of
objects are drawn, while visible surface detection algorithms determine which parts of the
objects are visible from the viewer’s perspective. Mastery of these concepts is essential for
understanding how computer graphics systems render complex scenes.

Unit IV

Introduction to Digital Image Processing

Digital Image Processing refers to the manipulation of digital images using computer
algorithms. It involves various techniques to improve the quality of an image, extract useful
information, or prepare an image for analysis.

1. Definition

Digital Image Processing (DIP) is the process of using computers to perform operations on
an image in order to enhance it, analyze it, or extract information from it. The image is
represented as a matrix of pixel values, where each pixel represents the intensity (and color,
in the case of color images) at that point.

2. Application Areas

Digital Image Processing is used in a wide variety of fields, including:

 Medical Imaging: Enhancing images from X-rays, MRI, or CT scans to help doctors
diagnose conditions more accurately.
 Remote Sensing: Processing satellite images for environmental monitoring, weather
forecasting, and disaster management.
 Computer Vision: Enabling machines to "see" and interpret visual data, such as in
autonomous vehicles, facial recognition systems, and industrial inspection.
 Robotics: Helping robots navigate and understand their environment by processing
visual inputs.
 Photography: Enhancing and editing photos to improve their quality or artistic
appeal.
 Security and Surveillance: Analyzing video feeds for threat detection, intrusion
detection, and more.
 Entertainment: Creating special effects in movies and games, and restoring old films
and photos.

3. File Formats

In digital image processing, images are stored in various file formats, each with its own
characteristics:

 JPEG (Joint Photographic Experts Group): A widely used format for photographs,
it uses lossy compression to reduce file size.
 PNG (Portable Network Graphics): A lossless format that supports transparency,
commonly used for web graphics.
 TIFF (Tagged Image File Format): A lossless format that is often used in
professional photography and printing.
 BMP (Bitmap): A simple, uncompressed format that is used for basic image storage.
 GIF (Graphics Interchange Format): A lossless format that supports simple
animations, often used for small web graphics.
 RAW: A format used in digital cameras to store unprocessed sensor data, providing
the highest possible quality for post-processing.

4. Basic Digital Image Processing Techniques

a) Anti-aliasing

 Purpose: To reduce the visual defects (aliasing) that occur when high-frequency
detail is sampled at a lower resolution.
 Method:
o Anti-aliasing smooths out the edges of objects in an image, making them
appear less jagged.
o Techniques include supersampling, subpixel rendering, and filtering.
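Supersampling can be sketched as rendering at a higher resolution and averaging blocks of samples down to one output pixel (names are illustrative):

```python
def supersample(high_res, factor):
    """Downsample an image by averaging factor x factor blocks.

    high_res is a 2D list of intensity values rendered at factor times
    the target resolution; each output pixel is the mean of its block,
    which smooths jagged edges.
    """
    h, w = len(high_res) // factor, len(high_res[0]) // factor
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = sum(high_res[y * factor + dy][x * factor + dx]
                        for dy in range(factor) for dx in range(factor))
            out[y][x] = total / (factor * factor)
    return out
```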

b) Convolution

 Purpose: A fundamental operation in image processing used to apply filters to an


image for tasks such as blurring, sharpening, edge detection, and noise reduction.
 Method:
o Convolution involves sliding a kernel (a small matrix) over an image and
performing element-wise multiplication and summing the results to produce a
new pixel value.
o For example, a blur effect can be achieved by using a kernel where all
elements have the same positive value.
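A minimal sketch of 3x3 convolution on a grayscale image stored as a 2D list; the one-pixel border is left untouched to avoid out-of-range indexing:

```python
def convolve(image, kernel):
    """Apply a 3x3 kernel to a grayscale image, ignoring the border."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    # element-wise multiply the kernel with the 3x3
                    # neighbourhood and accumulate the sum
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# A 3x3 box-blur kernel: every element the same positive value.
blur = [[1 / 9] * 3 for _ in range(3)]
```

Swapping the kernel changes the effect: a Laplacian kernel gives edge detection, a Gaussian kernel a smoother blur.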

c) Thresholding

 Purpose: To create a binary image from a grayscale image by converting all pixels
above a certain threshold to white and all others to black.
 Method:
o A global threshold is chosen, and each pixel in the image is compared to this
threshold.
o If a pixel's intensity is greater than the threshold, it is set to the maximum
value (white); otherwise, it is set to the minimum value (black).
o Otsu's method is a popular algorithm for automatically selecting an optimal
threshold.
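Global thresholding as described above is a one-line comparison per pixel; a sketch:

```python
def threshold(image, t, max_val=255):
    """Binarize a grayscale image: pixels above t become max_val, others 0."""
    return [[max_val if pixel > t else 0 for pixel in row] for row in image]
```

Methods such as Otsu's replace the manually chosen t with one computed from the image histogram.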

d) Image Enhancement

 Purpose: To improve the visual appearance of an image or to prepare it for further


analysis by enhancing certain features.
 Techniques:
o Histogram Equalization: Adjusts the contrast of an image by redistributing
the intensity levels to cover a broader range.
o Noise Reduction: Removes unwanted noise from an image using filters like
Gaussian, median, or bilateral filters.
o Sharpening: Enhances the edges in an image to make features more distinct,
typically using convolution with a kernel designed to highlight edges.
o Contrast Stretching: Expands the range of intensity levels in an image to
improve visibility of details.

Summary

Digital Image Processing is a crucial technology in many fields, enabling us to manipulate,


enhance, and analyze images with a range of techniques and algorithms. By understanding
the basic principles and methods, you can effectively process images for various applications,
from medical imaging to computer vision and beyond.
