
Krishna Engineering College

Department of Computer Science Engineering


3rd Year

Computer Graphics
Compendium
UNIT-1
 Computer Graphics.
Computer graphics is one of the most exciting and rapidly growing fields in computing. It may be
defined as the pictorial or graphical representation of objects in a computer.
 Scan code
When a key is pressed on the keyboard, the keyboard controller places a code corresponding to
the key pressed into a part of memory called the keyboard buffer. This code is called the scan
code.
 Refreshing of the screen
Some method is needed for maintaining the picture on the screen. Refreshing is done by
redrawing the picture repeatedly, i.e., by quickly directing the electron beam back to the same
points so that the phosphor keeps glowing.
 Random scan/Raster scan displays.
Random scan is a method in which the display is made by an electron beam that is directed
only to the points or parts of the screen where the picture is to be drawn. The raster scan system
is a scanning technique in which the electron beam sweeps from top to bottom and from left to
right; the intensity is turned on or off to light or unlight each pixel.
 Aspect ratio
The ratio of the number of vertical points to the number of horizontal points necessary to produce
equal-length lines in both directions on the screen is called the aspect ratio. Usually the aspect
ratio is 3/4.

 Frame buffer
Picture definition is stored in a memory area called frame buffer or refresh buffer.
 Bitmap and pixmap
The frame buffer used in a black and white system is known as a bitmap, which takes one bit per
pixel. For systems with multiple bits per pixel, the frame buffer is often referred to as a pixmap.
 Lines.
A line of infinite extent can be defined by an angle of slope θ and one point on the line, P = P(x, y).
It can also be defined as y = mx + c, where c is the y-intercept.
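As a small illustration, the slope-intercept form can be sampled directly to pick pixel positions along a line. This is a minimal Python sketch (the function name `line_points` is illustrative, not from the text; real line algorithms such as DDA or Bresenham avoid the per-step multiply):

```python
def line_points(m, c, x_start, x_end):
    """Sample the line y = m*x + c at integer x positions,
    rounding y to the nearest pixel row."""
    return [(x, int(m * x + c + 0.5)) for x in range(x_start, x_end + 1)]

# A line with slope 1/2 and y-intercept 1, sampled from x = 0 to x = 4:
print(line_points(0.5, 1, 0, 4))
```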
 Circle.
A circle is defined by its center (xc, yc) and its radius r in user coordinate units. The equation of the
circle is (x - xc)^2 + (y - yc)^2 = r^2.
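Points satisfying this equation can be generated from the parametric form x = xc + r cos(t), y = yc + r sin(t). A small Python sketch (the helper name `circle_points` is illustrative):

```python
import math

def circle_points(xc, yc, r, n=8):
    """Generate n points on the circle (x-xc)^2 + (y-yc)^2 = r^2
    using the parametric form x = xc + r*cos(t), y = yc + r*sin(t)."""
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        pts.append((xc + r * math.cos(t), yc + r * math.sin(t)))
    return pts

# Every generated point satisfies the implicit circle equation:
for x, y in circle_points(2.0, 3.0, 5.0):
    assert abs((x - 2.0) ** 2 + (y - 3.0) ** 2 - 25.0) < 1e-9
```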

UNIT-2
 Transformation
Transformation is the process of introducing changes in the shape, size, and orientation of an
object using scaling, rotation, reflection, shearing, translation, etc.

 Active and passive transformations.


In an active transformation, the points are moved to new positions within the same coordinate
system; since all the points are acted upon by the same transformation, the shape of the object
is not distorted. In a passive transformation, the object stays fixed and the coordinate system
itself is changed, so the same points receive new coordinates.

 Translation
Translation is the process of changing the position of an object in a straight-line path from one
coordinate location to another. Every point (x, y) in the object must undergo a displacement to
(x´, y´). The transformation is:

x´ = x + tx, y´ = y + ty
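Applied to every vertex of an object, the translation equations can be sketched in Python (the helper name `translate` is illustrative):

```python
def translate(points, tx, ty):
    """Apply x' = x + tx, y' = y + ty to every vertex."""
    return [(x + tx, y + ty) for x, y in points]

# Moving a triangle rigidly: every vertex gets the same displacement.
triangle = [(0, 0), (4, 0), (2, 3)]
print(translate(triangle, 5, -2))
```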

 Rotation
A 2-D rotation is done by repositioning the coordinates along a circular path: x´ = r cos(θ + φ) and
y´ = r sin(θ + φ), where (r, φ) are the polar coordinates of the original point and θ is the rotation
angle.
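Expanding r cos(θ + φ) and r sin(θ + φ) with the angle-sum identities gives the usual Cartesian form, which is easier to code. A minimal Python sketch (the helper name `rotate` is illustrative):

```python
import math

def rotate(points, theta):
    """Rotate points about the origin by theta radians.
    Expanding r*cos(theta + phi), r*sin(theta + phi) gives:
        x' = x*cos(theta) - y*sin(theta)
        y' = x*sin(theta) + y*cos(theta)
    """
    c, s = math.cos(theta), math.sin(theta)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

# Rotating (1, 0) by 90 degrees lands on (0, 1), up to rounding.
x, y = rotate([(1.0, 0.0)], math.pi / 2)[0]
```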

 Scaling
The scaling transformation changes the size of an object. It is carried out by multiplying each
vertex (x, y) by scaling factors Sx and Sy, where Sx is the scaling factor for x and Sy is the scaling
factor for y.
 Shearing
The shearing transformation slants an object along the X direction or the Y direction as
required; i.e., this transformation slants the shape of an object along a required plane.

 Reflection
Reflection is the transformation that produces a mirror image of an object, defined with respect
to a chosen axis or line of reflection.

 Window port & view port


The portion of a picture that is to be displayed is known as the window (window port). The display
area in which the selected part is viewed is known as the viewport.

 Clipping And types of clipping.


Clipping is the method of cutting a graphics display so that it neatly fits a predefined graphics
region or the viewport. The types of clipping are:
Point clipping
Line clipping
Area clipping
Curve clipping
Text clipping

 Covering (exterior clipping)


This is the opposite of clipping: it removes the lines lying inside the window and displays
the remainder. Covering is mainly used for placing labels on complex pictures.

 Homogeneous coordinates
To perform more than one transformation at a time, homogeneous coordinates and matrices are
used. They eliminate unwanted intermediate calculations, save time and memory, and allow a
sequence of transformations to be combined into one.
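As a sketch of how homogeneous coordinates combine transformations: each 2-D transform becomes a 3x3 matrix acting on (x, y, 1), so a whole sequence collapses into one matrix product that is applied to every point in a single step. A minimal Python illustration (helper names are illustrative):

```python
def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    """Apply a 3x3 homogeneous transform to the point (x, y, 1)."""
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    w = m[2][0] * x + m[2][1] * y + m[2][2]
    return xh / w, yh / w

translation = lambda tx, ty: [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
scaling = lambda sx, sy: [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

# One combined matrix = "translate, then scale", applied in a single step.
combined = mat_mul(scaling(2, 2), translation(3, 1))
print(apply(combined, 1, 1))   # (1+3)*2 = 8.0, (1+1)*2 = 4.0
```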

 Point scaling
The location of a scaled object can be controlled by a position called the fixed point, which
remains unchanged after the scaling transformation.
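The standard fixed-point scaling equations are x´ = xf + (x - xf)·Sx and y´ = yf + (y - yf)·Sy, so that (xf, yf) maps to itself. A small Python sketch (the helper name `scale_about` is illustrative):

```python
def scale_about(points, sx, sy, xf, yf):
    """Scale with a fixed point (xf, yf) that stays put:
        x' = xf + (x - xf) * sx
        y' = yf + (y - yf) * sy
    """
    return [(xf + (x - xf) * sx, yf + (y - yf) * sy) for x, y in points]

# The fixed point maps to itself; other points move away from it.
print(scale_about([(2, 3), (4, 5)], 2, 2, 2, 3))   # → [(2, 3), (6, 7)]
```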

 Affine transformation.
A coordinate transformation of the form x´ = axx·x + axy·y + bx, y´ = ayx·x + ayy·y + by is called a
two-dimensional affine transformation. Each of the transformed coordinates x´ and y´ is a linear
function of the original coordinates x and y, and the parameters aij and bk are constants
determined by the transformation type.
How will you clip a point? (May/June 2013)
Assuming that the clip window is a rectangle in standard position, we save a point P=(x,y) for
display if the following inequalities are satisfied:
xwmin ≤ x ≤ xwmax and ywmin ≤ y ≤ ywmax

where the edges of the clip window (xwmin ,xwmax, ywmin, ywmax) can be either the world-
coordinate window boundaries or viewport boundaries. If any one of these inequalities is not
satisfied, the points are clipped (not saved for display).
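The two pairs of inequalities translate directly into code. A minimal Python sketch (the function name `clip_point` is illustrative):

```python
def clip_point(x, y, xwmin, ywmin, xwmax, ywmax):
    """Save the point for display only if both inequalities hold:
    xwmin <= x <= xwmax and ywmin <= y <= ywmax."""
    return xwmin <= x <= xwmax and ywmin <= y <= ywmax

window = (0, 0, 100, 50)   # xwmin, ywmin, xwmax, ywmax
assert clip_point(10, 10, *window)        # inside: saved
assert not clip_point(120, 10, *window)   # right of the window: clipped
assert not clip_point(10, 60, *window)    # above the window: clipped
```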
What are the various representation schemes used in three dimensional objects?
Boundary representation (B-reps) - describes a three-dimensional object as a set of surfaces that
separate the object's interior from the environment.

Space-partitioning representation - describes interior properties by partitioning the spatial region
containing an object into a set of small, nonoverlapping, contiguous solids.
UNIT-4
 Bezier Basis Function
Bezier Basis functions are a set of polynomials, which can be used instead of the primitive
polynomial basis, and have some useful properties for interactive curve design.

 Surface patch
A single surface element can be defined as the surface traced out as two parameters (u, v) take
all possible values between 0 and 1 in a two-parameter representation. Such a single surface
element is known as a surface patch.

 Scan line method


The max and min y values of the scan are easily found.
The intersection of scan lines with edges is easily calculated by a simple incremental method.
The depth of the polygon at each pixel is easily calculated by an incremental method.

 Patch splitting
It is fast, especially on workstations with a hardware polygon-rendering pipeline.
Its speed can be varied by altering the depth of subdivision.

 B-Spline curve.
A B-spline curve is a set of piecewise (usually cubic) polynomial segments that pass close to a set
of control points. The curve does not pass through these control points; it only passes close to
them.

 spline
To produce a smooth curve through a designated set of points, a flexible strip called a spline is
used. Such a spline curve can be mathematically described by a piecewise cubic polynomial
function whose first and second derivatives are continuous across the various curve sections.

 Control points
A spline curve can be specified by giving a set of coordinate positions, called control points,
which indicate the general shape of the curve.

 Spline curve
A spline curve can be specified in any of three ways:
Using a set of boundary conditions that are imposed on the spline.
Using the matrix that characterizes the spline.
Using a set of blending functions that calculate positions along the curve path by specifying
combinations of geometric constraints on the curve.

 Bezier Curve
A cubic Bezier curve needs only four control points.
It always passes through the first and last control points.
The curve lies entirely within the convex hull formed by the four control points.
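These properties follow from evaluating the curve with the cubic Bernstein basis (1-t)³, 3(1-t)²t, 3(1-t)t², t³, whose weights are nonnegative and sum to 1. A Python sketch (the helper name `bezier_point` is illustrative):

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at t in [0, 1] using the
    Bernstein basis. Endpoints are interpolated exactly; the whole
    curve lies inside the convex hull of the four control points."""
    u = 1 - t
    b = (u ** 3, 3 * u ** 2 * t, 3 * u * t ** 2, t ** 3)
    return tuple(sum(w * p[i] for w, p in zip(b, (p0, p1, p2, p3)))
                 for i in range(2))

p0, p1, p2, p3 = (0, 0), (1, 2), (3, 2), (4, 0)
assert bezier_point(p0, p1, p2, p3, 0.0) == p0   # passes through the first point
assert bezier_point(p0, p1, p2, p3, 1.0) == p3   # and through the last
```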

 Interpolation spline and approximation spline.


When the spline curve passes through all the control points, it is called an interpolation spline.
When the curve does not pass through all the control points, it is called an approximation spline.
 Blobby object
Some objects do not maintain a fixed shape but change their surface characteristics in certain
motions or when in proximity to other objects. These are known as blobby objects. Examples:
molecular structures, water droplets.
UNIT-3
 3D transformation
Modeling Transformation
Viewing Transformation
Projection Transformation
Workstation Transformation

 View plane
A view plane corresponds to the film plane in a camera, positioned and oriented for a
particular shot of the scene.

 View-plane normal vector


The view-plane normal vector is the direction perpendicular to the view plane, written as
[DXN DYN DZN].

 View distance
The view-plane normal vector is a directed line segment from the view plane to the view reference
point. The length of this directed line segment is referred to as the view distance.

 Surface detection methods.


Back-face detection, depth-buffer method, A-buffer method, scan-line method, depth-sorting
method, BSP-tree method, area subdivision, octree method, ray casting.

 Parallel projection
Parallel projection is one in which the z coordinate is discarded and parallel lines from each vertex
on the object are extended until they intersect the view plane.

 Perspective projection
Perspective projection is one in which the lines of projection are not parallel. Instead, they all
converge at a single point called the center of projection.
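With the center of projection at the origin and a view plane at z = d, the standard perspective equations are x_p = d·x/z and y_p = d·y/z. A minimal Python sketch (the helper name `perspective_project` is illustrative; it assumes z > 0, i.e. the point lies in front of the viewer):

```python
def perspective_project(x, y, z, d):
    """Project a 3D point onto the view plane z = d, with the
    center of projection at the origin:
        x_p = d * x / z,  y_p = d * y / z
    Assumes z > 0 (the point is in front of the viewer)."""
    return d * x / z, d * y / z

# Objects twice as far away appear half as large:
print(perspective_project(2, 2, 4, 1))   # → (0.5, 0.5)
print(perspective_project(2, 2, 8, 1))   # → (0.25, 0.25)
```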

 Projection reference point


In perspective projection, the lines of projection are not parallel; they all converge at a
single point called the projection reference point. Object positions are transformed to the view
plane along these converging projection lines, and the projected view of an object is determined
by calculating the intersections of the converging projection lines with the view plane.

 Parallel projections
Parallel projections are categorized into two types, depending on the relation between the
direction of projection and the normal to the view plane: orthographic parallel projection and
oblique projection.

 Orthographic parallel projection


When the direction of projection is normal (perpendicular) to the view plane, the projection
is known as an orthographic parallel projection.

 Oblique projection
When the direction of projection is not normal (not perpendicular) to the view plane, the
projection is known as an oblique projection.

 Axonometric orthographic projection


The orthographic projection can display more than one face of an object. Such an orthographic
projection is called axonometric orthographic projection.

 Cavalier projection
The cavalier projection is one type of oblique projection, in which the direction of projection makes
a 45-degree angle with the view plane.

 Cabinet projection
The cabinet projection is one type of oblique projection, in which the direction of projection makes
an angle of arctan(2) ≈ 63.4° with the view plane.

 Vanishing point
The perspective projections of any set of parallel lines that are not parallel to the projection plane
converge to a point known as the vanishing point.

 Principal vanishing point.


The vanishing point of any set of lines that are parallel to one of the three principal axes of an
object is referred to as a principal vanishing point or axis vanishing point.

 View reference point


The view reference point is the center of the viewing coordinate system. It is often chosen to be
close to or on the surface of some object in the scene.

UNIT-5
 CMY and HSV color models
The HSV (Hue, Saturation, Value) model is a color model that uses color descriptions with a
more intuitive appeal to a user. To give a color specification, a user selects a spectral color and
the amounts of white and black that are to be added to obtain different shades, tints, and tones.
A color model defined with the primary colors cyan, magenta, and yellow is useful for describing
color output to hard-copy devices.
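As a quick illustration of the HSV description, Python's standard `colorsys` module converts between RGB and HSV (all components in [0, 1]; hue is expressed as a fraction of a full turn):

```python
import colorsys

# Pure red: hue 0, fully saturated, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)

# Adding white to red (a tint, i.e. pink) lowers the saturation:
h2, s2, v2 = colorsys.rgb_to_hsv(1.0, 0.5, 0.5)
print(h2, s2, v2)
```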

 subtractive colors
The RGB model is an additive system; the cyan-magenta-yellow (CMY) model is a subtractive
color model. In a subtractive model, the more of an element that is added, the more it subtracts
from white. So if none of these are present the result is white, and when all are fully present the
result is black.
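The relation between the two models is the simple complement C = 1 - R, M = 1 - G, Y = 1 - B (components in [0, 1]). A minimal Python sketch (the helper name `rgb_to_cmy` is illustrative):

```python
def rgb_to_cmy(r, g, b):
    """Subtractive complement of an additive RGB color, all in [0, 1]:
    C = 1 - R,  M = 1 - G,  Y = 1 - B."""
    return 1 - r, 1 - g, 1 - b

assert rgb_to_cmy(1, 1, 1) == (0, 0, 0)   # white: no ink at all
assert rgb_to_cmy(0, 0, 0) == (1, 1, 1)   # black: all three at full strength
assert rgb_to_cmy(1, 0, 0) == (0, 1, 1)   # red = magenta + yellow inks
```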
 YIQ color model
In the YIQ color model, luminance (brightness) information is contained in the Y parameter, and
chromaticity information (hue and purity) is contained in the I and Q parameters.
A combination of red, green, and blue intensities is chosen for the Y parameter to yield the
standard luminosity curve. Since Y contains the luminance information, black-and-white TV
monitors use only the Y signal.
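The standard NTSC weighting for Y is 0.299·R + 0.587·G + 0.114·B; green contributes most because the eye is most sensitive to it. A minimal Python sketch (the helper name `luma` is illustrative):

```python
def luma(r, g, b):
    """NTSC luminance: Y = 0.299*R + 0.587*G + 0.114*B.
    This weighted sum is the signal a black-and-white TV displays."""
    return 0.299 * r + 0.587 * g + 0.114 * b

assert abs(luma(1, 1, 1) - 1.0) < 1e-9               # white has full luminance
assert luma(0, 1, 0) > luma(1, 0, 0) > luma(0, 0, 1)  # green appears brightest
```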

 shading of objects
A shading model dictates how light is scattered or reflected from a surface. The shading models
described here focus on achromatic light. Achromatic light has brightness and no color; it is a
shade of gray, so it is described by a single value, its intensity.
A shading model uses two types of light source to illuminate the objects in a scene: point light
sources and ambient light.

 texture
The realism of an image is greatly enhanced by adding surface texture to the various faces of a
mesh object. The basic technique begins with some texture function, texture(s, t), in texture
space, which has two parameters s and t. The function texture(s, t) produces a color or intensity
value for each value of s and t between 0 (dark) and 1 (light).

 reflection of incident light


There are two different types of reflection of incident light
Diffuse scattering.
Specular reflections.

 rendering
Rendering is the process of generating an image from a model (or models, in what could
collectively be called a scene file) by means of computer programs. The result of such a process
can also be called a rendering.

 flat and smooth shading


The main distinction is between a shading method that accentuates the individual polygons (flat
shading) and a method that blends the faces to de-emphasize the edges between them (smooth
shading).

 shading
Shading is a process used in drawing for depicting levels of darkness on paper by applying media
more densely or with a darker shade for darker areas, and less densely or with a lighter shade for
lighter areas.

 shadow
Shadows make an image more realistic. The way one object casts a shadow on another object
gives important visual cues as to how the two objects are positioned with respect to each other.
 smooth shading.
Gouraud shading.
Phong shading.

 color model
A color model is a method for explaining the properties or behavior of color within
some particular context. Example: XYZ model, RGB model.

 intensity of light.
Intensity is the radiant energy emitted per unit time, per unit solid angle, and per unit
projected area of source.

 hue
The perceived light has a dominant frequency (or dominant wavelength). The
dominant frequency is also called the hue, or simply the color.

 purity of light
Purity describes how washed out or how "pure" the color of the light appears. Pastels
and pale colors are described as less pure.

 term chromaticity.
The term chromaticity is used to refer collectively to the two properties describing color
characteristics: purity and dominant frequency.

 purity or saturation.
Purity describes how washed out or how "pure" the color of the light appears.

 complementary colors.
If two color sources combine to produce white light, they are referred to as
complementary colors. Examples of complementary color pairs are red and cyan,
green and magenta, and blue and yellow.
