Graphics M-1 (1) (AutoRecovered)
Computer Graphics is the creation of pictures with the help of a computer. The end product of computer graphics is a picture: it may be a business graph, a drawing, or an engineering design. Computer graphics involves the technology to access, transform, and present information in visual form, and it has now become a common element in user interfaces, television commercials, and motion pictures.
In computer graphics, two- or three-dimensional pictures can be created for use in research. Many hardware devices and algorithms have been developed over time to improve the speed of picture generation. The field includes the creation and storage of models and images of objects; these models serve various fields such as engineering, mathematics, and so on.
Today's computer graphics is entirely different from its early form: it is interactive, and the user can control the structure of an object through various input devices.
Why is computer graphics used?
Suppose a shoe manufacturing company wants to show its shoe sales over five years. A vast amount of information would have to be stored, so a lot of time and memory would be needed, and the result would be tough for a common person to understand. In this situation graphics is a better alternative. Graphics tools are charts and graphs: using graphs, data can be represented in pictorial form, and a picture can be understood easily with just a single look.
Interactive computer graphics works on the concept of two-way communication between the computer and the user. The computer receives signals from the input device, and the picture is modified accordingly. The picture changes quickly when we apply a command.
Video Display Devices:
The most commonly used display device is a video monitor. The operation of most video monitors is based on the CRT (Cathode Ray Tube). The following display devices are used:
Refresh Cathode Ray Tube
Random Scan and Raster Scan
Color CRT Monitors
Direct View Storage Tubes
Flat Panel Display
Lookup Table

Cathode Ray Tube (CRT):
CRT stands for Cathode Ray Tube, the technology used in traditional computer monitors and televisions. The image on a CRT display is created by firing electrons from the back of the tube at a phosphor coating located towards the front of the screen. Once the electrons hit the phosphor, it lights up and is projected on the screen. The color you view on the screen is produced by a blend of red, blue, and green light.
Components of CRT:
1. Electron Gun: The electron gun consists of a series of elements, primarily a heating filament (heater) and a cathode. The electron gun creates a source of electrons which are focused into a narrow beam directed at the face of the CRT.
2. Control Electrode: It is used to turn the electron beam on and off.
3. Focusing System: It is used to create a clear picture by focusing the electrons into a narrow beam.
4. Deflection Yoke: It is used to control the direction of the electron beam. It creates an electric or magnetic field which bends the electron beam as it passes through the area. In a conventional CRT, the yoke is linked to a sweep or scan generator; the deflection yoke connected to the sweep generator creates a fluctuating electric or magnetic potential.
5. Phosphor-Coated Screen: The inside front surface of every CRT is coated with phosphors. Phosphors glow when a high-energy electron beam hits them. Phosphorescence is the term used to characterize the light given off by a phosphor after it has been exposed to an electron beam.
Color CRT Monitors:
A color CRT monitor displays pictures by using a combination of phosphors that emit different colors. There are two popular approaches for producing color displays with a CRT:
Beam Penetration Method
Shadow-Mask Method
1. Beam Penetration Method: The beam-penetration method has been used with random-scan monitors. In this method, the CRT screen is coated with two layers of phosphor, red and green, and the displayed color depends on how far the electron beam penetrates the phosphor layers. A beam of slow electrons excites only the outer red layer, so the screen shows red; a beam of high-speed electrons excites the inner green layer, so the screen shows green. Intermediate beam speeds excite both layers, giving combinations of red and green, so this method produces only four colors: red, green, orange, and yellow.
Advantages: Inexpensive.
Disadvantages: Only four colors are possible. Quality of pictures is not as good as with other methods.
2. Shadow-Mask Method: The shadow-mask method is commonly used in raster-scan systems because it produces a much wider range of colors than the beam-penetration method. It is used in the majority of color TV sets and monitors.
Construction: A shadow-mask CRT has three phosphor color dots at each pixel position: one phosphor dot emits red light, another emits green light, and the third emits blue light. This type of CRT has three electron guns, one for each color dot, and a shadow-mask grid just behind the phosphor-coated screen. The shadow-mask grid is pierced with small round holes in a triangular pattern. The figure shows the delta-delta shadow-mask arrangement commonly used in color CRT systems.
Working: The red, green, and blue guns are arranged in a triad. The deflection system of the CRT operates on all three electron beams simultaneously; the three beams are deflected and focused as a group onto the shadow mask, which contains a sequence of holes aligned with the phosphor-dot patterns. When the three beams pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small color spot on the screen. The phosphor dots in the triangles are organized so that each electron beam can activate only its corresponding color dot when it passes through the shadow mask.
Advantages: Realistic images; millions of different colors can be generated; shadowed scenes are possible.
Disadvantages: Relatively expensive compared with a monochrome CRT; relatively poor resolution; convergence problems.

Direct View Storage Tubes (DVST):
DVST terminals also use the random-scan approach to generate the image on the CRT screen. The term "storage tube" refers to the ability of the screen to retain the image which has been projected against it, thus avoiding the need to rewrite the image constantly.
Function of guns: Two guns are used in a DVST.
Primary gun: It is used to store the picture pattern.
Flood gun (secondary gun): It is used to maintain the picture display.
Advantages: No refreshing is needed; high resolution; very low cost.
Disadvantages: It is not possible to erase a selected part of a picture; it is not suitable for dynamic graphics.
Flat-Panel Display:
The flat-panel display refers to a class of video devices that have reduced volume, weight, and power requirements compared to a CRT. Examples: small TV monitors, calculators, pocket video games, laptop computers, and advertisement boards in elevators. Flat-panel displays fall into two categories:
1. Emissive Displays: Emissive displays are devices that convert electrical energy into light. Examples are the plasma panel, the thin-film electroluminescent display, and LEDs (Light Emitting Diodes).
2. Non-Emissive Displays: Non-emissive displays use optical effects to convert sunlight or light from some other source into graphics patterns. An example is the LCD (Liquid Crystal Display).

Plasma Panel Display:
Plasma panels are also called gas-discharge displays. A plasma panel consists of an array of small lights that are fluorescent in nature. The essential components of the plasma-panel display are:
Cathode: It consists of fine wires. It delivers negative voltage to the gas cells; the voltage is released along the negative axis.
Anode: It also consists of fine wires. It delivers positive voltage; the voltage is supplied along the positive axis.
Fluorescent cells: They consist of small pockets of gas; when voltage is applied to the gas (neon), it emits light.
Glass plates: These plates act as capacitors; once voltage is applied, a cell glows continuously.
The gas glows when there is a significant voltage difference between the horizontal and vertical wires. The voltage level is kept between 90 and 120 volts. A plasma panel does not require refreshing; erasing is done by reducing the voltage to 90 volts. Each cell of the plasma panel has two states, so a cell is said to be stable. A displayable point in a plasma panel is made by the crossing of the horizontal and vertical grid. The resolution of a plasma panel can be up to 512 × 512 pixels.
Advantages: Large screen size is possible; less volume; less weight; flicker-free display.
Disadvantages: Poor resolution; the wiring required between the anode and the cathode is complex; its addressing is also complex.

LED (Light Emitting Diode):
In an LED display, a matrix of diodes is organized to form the pixel positions in the display, and the picture definition is stored in a refresh buffer. Data is read from the refresh buffer and converted to voltage levels that are applied to the diodes to produce the light pattern in the display.

LCD (Liquid Crystal Display):
Liquid crystal displays are devices that produce a picture by passing polarized light from the surroundings or from an internal light source through a liquid-crystal material that transmits the light. An LCD places the liquid-crystal material between two glass plates set at right angles to each other, with the liquid filled between the plates. One glass plate consists of rows of conductors arranged in the vertical direction; the other consists of rows of conductors arranged in the horizontal direction. A pixel position is determined by the intersection of a vertical and a horizontal conductor; this position is an active part of the screen. LCDs are temperature dependent (they operate between zero and seventy degrees Celsius) and require very little power to operate.
Advantages: Low power consumption; small size; low cost.
Disadvantages: LCDs are temperature-dependent (0-70°C); they do not emit light, so the image has very little contrast; simple LCDs have no color capability; the resolution is not as good as that of a CRT.
Random Scan Display:
A random-scan system uses an electron beam which operates like a pencil to create a line image on the CRT screen. The picture is constructed out of a sequence of straight-line segments. Each line segment is drawn by directing the beam to move from one point on the screen to the next, where each point is defined by its x and y coordinates. After drawing the picture, the system cycles back to the first line and redraws all the lines of the image 30 to 60 times each second. Random-scan monitors are also known as vector displays, stroke-writing displays, or calligraphic displays.
Advantages: The electron beam is directed only to the parts of the screen where an image is to be drawn; smooth line drawings are produced; high resolution.
Disadvantages: Random-scan monitors cannot display realistic shaded scenes.

Raster Scan Display:
A raster-scan display is based on intensity control of pixels in the form of a rectangular box called a raster on the screen. Information about on and off pixels is stored in a refresh buffer, also called a frame buffer. The televisions in our homes are based on the raster-scan method. The raster-scan system stores information about each pixel position, so it is suitable for the realistic display of objects. Raster scan provides a refresh rate of 60 to 80 frames per second.
The frame buffer is also known as the raster or bitmap; in the frame buffer, the positions are called picture elements, or pixels. Beam retracing is of two types: horizontal retracing and vertical retracing. At the end of each scan line the beam returns to the left edge to begin the next line, which is called horizontal retrace; when the beam reaches the bottom right of the screen, it returns to the top left corner, which is called vertical retrace.
Types of scanning (travel of the beam) in raster scan:
Interlaced scanning
Non-interlaced scanning
In non-interlaced scanning, each horizontal line of the screen is traced in order from top to bottom, and a refresh rate of 30 frames per second is used; this can cause flicker and fading of the displayed object. This problem can be reduced by interlaced scanning: first the odd-numbered lines are traced by the electron beam, then in the next cycle the even-numbered lines are traced, giving an effective refresh rate of 60 fields per second.
Advantages: Realistic images; millions of different colors can be generated; shadowed scenes are possible.
Disadvantages: Low resolution; expensive.
Input Devices:
Input devices are the hardware used to transfer input to the computer. The data can be in the form of text, graphics, or sound. Output devices display data from the memory of the computer; output can be text, numeric data, lines, polygons, and other objects.
These devices include: keyboard, mouse, trackball, spaceball, joystick, light pen, digitizer, touch panels, voice recognition, and image scanner.
Keyboard: The most commonly used input device is the keyboard. Data is entered by pressing the set of keys, and all keys are labeled. A keyboard with 101 keys is called a QWERTY keyboard. The keyboard has alphabetic as well as numeric keys, and some special keys are also available:
Numeric keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Alphabetic keys: a to z (lower case), A to Z (upper case)
Special control keys: Ctrl, Shift, Alt
Special symbol keys: ; , " ? @ ~ :
Cursor control keys: ↑ → ← ↓
Function keys: F1, F2, F3, ..., F9
Numeric keypad: located on the right-hand side of the keyboard and used for fast entry of numeric data.
Advantages: Suitable for entering numeric data; function keys are a fast and effective method of issuing commands, with fewer errors.
Mouse: A mouse is a pointing device used to position the pointer on the screen. It is a small palm-sized box with two or three buttons on top. Movement of the mouse along the x-axis moves the cursor horizontally, and movement along the y-axis moves the cursor vertically. The mouse cannot be used to enter text, so it is used in conjunction with a keyboard.
Advantages: Easy to use; not very expensive.
Trackball: A trackball is a pointing device similar to a mouse. It is mainly used in notebook or laptop computers instead of a mouse. It is a ball that is half inserted in the device; by moving fingers on the ball, the pointer can be moved.
Advantages: The trackball is stationary, so it does not require much space to use; compact size.
Spaceball: A spaceball is similar to a trackball, but it can move in six directions, whereas a trackball can move in only two. The movement is recorded by strain gauges as the ball is pushed and pulled in various directions. The ball has a diameter of around 7.5 cm and is mounted in the base using rollers; one-third of the ball is inside the box, and the rest is outside.
Joystick: A joystick is also a pointing device used to change the cursor position on a monitor screen. A joystick is a stick with a spherical ball at both its lower and upper ends, as shown in the figure; the lower spherical ball moves in a socket, and the joystick can be moved in all four directions. The function of a joystick is similar to that of a mouse. It is mainly used in Computer-Aided Design (CAD) and for playing computer games.
Transformation:
Computer graphics provides the facility of viewing an object from different angles. An architect can study a building from different views, i.e. front elevation, side elevation, and top plan, and a cartographer can change the size of charts and topographical maps. If graphics images are coded as numbers, the numbers can be stored in memory; these numbers are modified by mathematical operations called transformations. The purpose of using computers for drawing is to give the user the facility to view an object from different angles and to enlarge or reduce its scale or shape; this is called transformation.
Two essential aspects of transformation are given below:
1. Each transformation is a single entity. It can be denoted by a unique name or symbol.
2. It is possible to combine two transformations so that a single combined transformation is obtained. For example, if A is a transformation for translation and the transformation B performs scaling, the combination of the two is C = AB; C is obtained by the concatenation property.
There are two complementary points of view for describing object transformation:
Geometric transformation: The object itself is transformed relative to the coordinate system or background. The mathematical statement of this viewpoint is defined by geometric transformations applied to each point of the object.
Coordinate transformation: The object is held stationary while the coordinate system is transformed relative to the object. This effect is attained through the application of coordinate transformations.
An example that helps to distinguish these two viewpoints is the movement of an automobile against a scenic background. We can simulate this by moving the automobile while keeping the background fixed (geometric transformation), or by keeping the car fixed while moving the background scenery (coordinate transformation).
Types of transformations: translation, scaling, rotation, reflection, shearing.
Translation:
Translation is the straight-line movement of an object from one position to another. The object is repositioned from one coordinate location to another.
Translation of a point: to translate a point from coordinate position (x, y) to another position (x1, y1), we algebraically add the translation distances Tx and Ty to the original coordinates:
x1 = x + Tx
y1 = y + Ty
The translation pair (Tx, Ty) is called the shift vector. Translation is a movement of objects without deformation: every position or point is translated by the same amount. When a straight line is translated, it is redrawn using the translated endpoints. For translating a polygon, each vertex of the polygon is converted to a new position; curved objects are translated similarly. To change the position of a circle or ellipse, its center coordinates are translated, and then the object is drawn using the new center.
Let P be a point with coordinates (x, y); it is translated to (x1, y1).
Matrix for translation (in homogeneous coordinates, row-vector convention):
| 1   0   0 |
| 0   1   0 |
| Tx  Ty  1 |
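The translation equations above can be sketched in a few lines. The `translate` helper and the example triangle are illustrative assumptions, with vertices stored as (x, y) tuples.

```python
# Translating a polygon: add the shift vector (Tx, Ty) to every vertex.
def translate(points, tx, ty):
    return [(x + tx, y + ty) for (x, y) in points]

triangle = [(0, 0), (4, 0), (2, 3)]
print(translate(triangle, 5, 2))  # -> [(5, 2), (9, 2), (7, 5)]
```

Every vertex moves by the same amount, so the shape is not deformed, matching the text's description of translation.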
Scaling:
Scaling is used to alter or change the size of objects. The change is made using scaling factors: Sx in the x direction and Sy in the y direction. If the original position is (x, y) and the scaling factors are Sx and Sy, the coordinates after scaling are:
x1 = x · Sx
y1 = y · Sy
If the picture is to be enlarged to twice its original size, then Sx = Sy = 2. If Sx and Sy are not equal, scaling still occurs, but it elongates or distorts the picture. If the scaling factors are less than one, the size of the object is reduced and the object moves closer to the coordinate origin; if they are greater than one, the object is enlarged and moves farther from the origin. If Sx = Sy, the scaling is called uniform scaling; if they are not equal, it is called differential scaling.
Enlargement: if (x1, y1) is the original position and T1 is the scaling matrix, then (x2, y2) = (x1, y1) · T1 gives the coordinates after scaling.
Matrix for scaling (row-vector convention, P1 = P · S):
| Sx  0  |
| 0   Sy |
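The scaling equations above can be sketched the same way. The `scale` helper and the sample square are illustrative assumptions.

```python
# Scaling every vertex by (Sx, Sy); vertices are (x, y) tuples.
def scale(points, sx, sy):
    return [(x * sx, y * sy) for (x, y) in points]

square = [(1, 1), (2, 1), (2, 2), (1, 2)]
print(scale(square, 2, 2))  # uniform scaling: Sx = Sy = 2 doubles the size
print(scale(square, 2, 1))  # differential scaling distorts the shape
```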
Rotation:
Rotation is the process of changing the angle of an object. Rotation can be clockwise or anticlockwise. For rotation, we have to specify the angle of rotation and the rotation point. The rotation point is also called the pivot point; it is the point about which the object is rotated.
Types of rotation: clockwise and counterclockwise (anticlockwise).
A positive rotation angle rotates an object in the counterclockwise direction; a negative rotation angle rotates it in the clockwise direction. When an object is rotated, every point of the object is rotated by the same angle.
Straight line: a straight line is rotated by rotating its endpoints by the same angle and redrawing the line between the new endpoints.
Polygon: a polygon is rotated by shifting every vertex using the same rotational angle.
Curved lines: curved lines are rotated by repositioning all their points and drawing the curve at the new positions.
Circle: a circle is rotated by rotating its center position through the specified angle.
Ellipse: its rotation is obtained by rotating the major and minor axes of the ellipse through the desired angle.
For an anticlockwise rotation through angle θ about the origin:
x1 = x cos θ − y sin θ
y1 = x sin θ + y cos θ
Matrix for rotation in the anticlockwise direction (row-vector convention, P1 = P · R):
| cos θ   sin θ |
| −sin θ  cos θ |
Matrix for rotation in the clockwise direction:
| cos θ  −sin θ |
| sin θ   cos θ |
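The anticlockwise rotation equations above can be sketched directly; the `rotate` helper is an illustrative assumption.

```python
import math

def rotate(points, angle_degrees):
    """Rotate points counterclockwise about the origin (positive angle)."""
    a = math.radians(angle_degrees)
    c, s = math.cos(a), math.sin(a)
    # x1 = x cos(a) - y sin(a),  y1 = x sin(a) + y cos(a)
    return [(x * c - y * s, x * s + y * c) for (x, y) in points]

print(rotate([(1, 0)], 90))  # (1, 0) -> approximately (0, 1)
```

A negative angle gives the clockwise rotation described in the text.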
Reflection:
Reflection is a transformation which produces a mirror image of an object. The mirror image can be taken about the x-axis, the y-axis, or another axis; reflecting is equivalent to rotating the object by 180° about the reflection axis.
Types of reflection:
Reflection about the x-axis
Reflection about the y-axis
Reflection about an axis perpendicular to the xy plane and passing through the origin
Reflection about the line y = x
1. Reflection about the x-axis: The value of x remains the same, whereas the value of y becomes negative; the object moves to the other side of the x-axis. The object can be reflected about the x-axis with the help of the following matrix:
| 1   0 |
| 0  −1 |
2. Reflection about the y-axis: The value of x is reversed, whereas the value of y remains the same; the object moves to the other side of the y-axis. The transformation matrix is:
| −1  0 |
|  0  1 |
3. Reflection about an axis perpendicular to the xy plane and passing through the origin: The values of both x and y are reversed. This is also called a half revolution about the origin. The matrix is:
| −1   0 |
|  0  −1 |
4. Reflection about the line y = x: The x and y coordinates are interchanged. The transformation matrix is:
| 0  1 |
| 1  0 |
This reflection can also be obtained in three steps: first the object is rotated 45° clockwise (so the line y = x coincides with the x-axis), then reflection is done about the x-axis, and finally the object is rotated 45° counterclockwise back to its original position.
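The four reflection matrices above reduce to sign changes and a coordinate swap, which can be sketched as follows (helper names are ours):

```python
def reflect_x(points):           # about the x-axis: y is negated
    return [(x, -y) for (x, y) in points]

def reflect_y(points):           # about the y-axis: x is negated
    return [(-x, y) for (x, y) in points]

def reflect_origin(points):      # axis perpendicular to the xy plane: both negated
    return [(-x, -y) for (x, y) in points]

def reflect_y_equals_x(points):  # about the line y = x: coordinates swap
    return [(y, x) for (x, y) in points]

p = [(2, 3)]
print(reflect_x(p), reflect_y(p), reflect_origin(p), reflect_y_equals_x(p))
# -> [(2, -3)] [(-2, 3)] [(-2, -3)] [(3, 2)]
```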
Shearing:
Shearing is a transformation which changes the shape of an object: the layers of the object slide over one another. The shear can be in one direction or in two directions.
Shearing in the x-direction: Here horizontal sliding of layers occurs, each layer shifting in proportion to its y coordinate. The homogeneous matrix for shearing in the x-direction (row-vector convention) is:
| 1    0  0 |
| Shx  1  0 |
| 0    0  1 |
Shearing in the y-direction: Here shearing is done by sliding layers along the vertical (y) axis. The homogeneous matrix is:
| 1  Shy  0 |
| 0   1   0 |
| 0   0   1 |
Shearing in the x-y directions: Here layers slide in both the horizontal and the vertical direction, and the shape of the object is distorted. The matrix of shear in both directions is:
| 1    Shy  0 |
| Shx   1   0 |
| 0     0   1 |
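The two one-direction shears above can be sketched as follows; the helper names and the unit square are illustrative assumptions.

```python
def shear_x(points, shx):
    """Horizontal shear: x slides in proportion to y."""
    return [(x + shx * y, y) for (x, y) in points]

def shear_y(points, shy):
    """Vertical shear: y slides in proportion to x."""
    return [(x, y + shy * x) for (x, y) in points]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(shear_x(square, 2))  # -> [(0, 0), (1, 0), (3, 1), (2, 1)]
```

Note that the bottom edge (y = 0) does not move, while the top edge slides sideways, which is exactly the "sliding of layers" the text describes.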
Matrix Representation of 2D Transformation: Homogeneous Coordinates
The rotation of a point, a straight line, or an entire image on the screen about a point other than the origin is achieved by first moving the image until the point of rotation occupies the origin, then performing the rotation, and finally moving the image back to its original position. The moving of an image from one place to another in a straight line is a translation, which may be done by adding to (or subtracting from) each point the amount by which the picture is required to be shifted.
Translation expressed as an addition of coordinates cannot be combined with the other transformations by simple multiplication of 2 × 2 matrices. Such a combination is essential if we wish to rotate an image about a point other than the origin by translation, rotation, and again translation. To combine these three transformations into a single transformation, homogeneous coordinates are used. In a homogeneous coordinate system, two-dimensional coordinate positions (x, y) are represented by triples (x, y, h).
Homogeneous coordinates are generally used in design and construction applications, where we perform translations, rotations, and scaling to fit a picture into its proper position. For two-dimensional geometric transformations, we can choose the homogeneous parameter h to be any non-zero value; for convenience we take it as one. Each two-dimensional position is then represented with homogeneous coordinates (x, y, 1).
The matrices for two-dimensional transformation in homogeneous coordinates (row-vector convention, P1 = P · M) are:
Translation:
| 1   0   0 |
| 0   1   0 |
| Tx  Ty  1 |
Scaling:
| Sx  0   0 |
| 0   Sy  0 |
| 0   0   1 |
Rotation (anticlockwise):
| cos θ   sin θ  0 |
| −sin θ  cos θ  0 |
| 0       0      1 |
Rotation (clockwise):
| cos θ  −sin θ  0 |
| sin θ   cos θ  0 |
| 0       0      1 |
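The translate-rotate-translate composition described above can be sketched with 3 × 3 homogeneous matrices. The row-vector convention (P1 = P · M) matches the matrices in the text; all helper names are illustrative assumptions.

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    """Apply matrix m to the homogeneous point (x, y, 1), row-vector style."""
    return (x * m[0][0] + y * m[1][0] + m[2][0],
            x * m[0][1] + y * m[1][1] + m[2][1])

def translation(tx, ty):
    return [[1, 0, 0], [0, 1, 0], [tx, ty, 1]]

def rotation(angle_degrees):
    """Anticlockwise rotation about the origin."""
    a = math.radians(angle_degrees)
    c, s = math.cos(a), math.sin(a)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def rotation_about(angle_degrees, px, py):
    """Rotate about pivot (px, py): translate to origin, rotate, translate back,
    concatenated into a single matrix."""
    m = mat_mul(translation(-px, -py), rotation(angle_degrees))
    return mat_mul(m, translation(px, py))

# Rotating (2, 1) by 90 degrees anticlockwise about the pivot (1, 1) gives (1, 2).
x1, y1 = apply(rotation_about(90, 1, 1), 2, 1)
print(round(x1, 6), round(y1, 6))
```

The point of homogeneous coordinates is visible here: the three steps collapse into one matrix, which can then be applied to every point of the image at once.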
Hidden Surface Removal
One of the most challenging problems in computer graphics is the removal of hidden parts from images of solid objects. In real life, the opaque material of these objects obstructs the light rays from hidden parts and prevents us from seeing them. In computer generation, no such automatic elimination takes place when objects are projected onto the screen coordinate system; instead, all parts of every object, including the many parts that should be invisible, are displayed. To remove these parts and create a more realistic image, we must apply a hidden-line or hidden-surface algorithm to the set of objects.
Such algorithms operate on different kinds of scene models, generate various forms of output, and cater to images of different complexities. All use some form of geometric sorting to distinguish the visible parts of objects from those that are hidden. Just as alphabetical sorting is used to differentiate words near the beginning of the alphabet from those near the end, geometric sorting locates objects that lie near the observer and are therefore visible.
Hidden-line and hidden-surface algorithms capitalize on various forms of coherence to reduce the computing required to generate an image. Different types of coherence are related to different forms of order or regularity in the image:
Scan line coherence arises because the display of a scan line in a raster image is usually very similar to the display of the preceding scan line.
Frame coherence, in a sequence of images designed to show motion, recognizes that successive frames are very similar.
Object coherence results from relationships between different objects or between separate parts of the same object.
A hidden-surface algorithm is generally designed to exploit one or more of these coherence properties to increase efficiency. Hidden-surface algorithms bear a strong resemblance to two-dimensional scan conversion.
Types of hidden-surface detection algorithms:
Object space methods
Image space methods
Z-Buffer Algorithm
The Z-buffer algorithm is also called the depth-buffer algorithm; it is the simplest image-space algorithm. For each pixel on the display screen, we keep a record of the depth of the object within the pixel that lies closest to the observer. In addition to depth, we also record the intensity that should be displayed to show the object. The depth buffer is an extension of the frame buffer. The algorithm requires two arrays, intensity and depth, each indexed by pixel coordinates (x, y).
Algorithm
1. For all pixels on the screen, set depth[x, y] to 1.0 and intensity[x, y] to a background value.
2. For each polygon in the scene, find all pixels (x, y) that lie within the boundaries of the polygon when projected onto the screen. For each of these pixels:
(a) Calculate the depth z of the polygon at (x, y).
(b) If z < depth[x, y], this polygon is closer to the observer than others already recorded for this pixel. In this case, set depth[x, y] to z and intensity[x, y] to a value corresponding to the polygon's shading. If instead z > depth[x, y], the polygon already recorded at (x, y) lies closer to the observer than does this new polygon, and no action is taken.
3. After all polygons have been processed, the intensity array will contain the solution.
4. The depth-buffer algorithm illustrates several features common to all hidden-surface algorithms.
5. First, it requires a representation of all opaque surfaces in the scene, polygons in this case.
6. These polygons may be faces of polyhedra recorded in the model of the scene or may simply represent thin opaque…
different objects or between separate parts of the same a BSP tree solved both of these problems by providing a rapid method diffuse, specular, and glossy reflections. The final image synthesized
objects. of sorting polygons with respect to a given viewpoint (linear in the through ray tracing accurately represents the virtual scene from the
A hidden surface algorithm is generally designed to exploit number of polygons in the scene) and by subdividing overlapping viewer's perspective.
one or more of these coherence properties to increase polygons to avoid errors that can occur with the painter’s algorithm.
efficiency. The simplest form of shading considers only diffuse
5. First, it requires a representation of all opaque surfaces in the scene, polygons in this case.
6. These polygons may be faces of polyhedra recorded in the model of the scene, or may simply represent thin opaque 'sheets' in the scene.
7. The second important feature of the algorithm is its use of a screen coordinate system. Before step 1, all polygons in the scene are transformed into the screen coordinate system using matrix multiplication.
Types of hidden surface detection algorithms
Object space methods
Image space methods
Object space methods: In this method, various parts of objects are compared. After comparison, visible, invisible or hardly visible surfaces are determined. These methods generally decide the visible surface. In the wireframe model, these are used to determine a visible line, so these algorithms are line based instead of surface based. The method proceeds by determining the parts of an object whose view is obstructed by other objects and draws these parts in the same color.
Image space methods: Here the positions of various pixels are determined. It is used to locate the visible surface instead of a visible line. Each point is tested for its visibility: if a point is visible, the pixel is on, otherwise off. The object closest to the viewer that is pierced by a projector through a pixel is determined, and that pixel is drawn in the appropriate color.
These methods are also called Visible Surface Determination. The implementation of these methods on a computer requires a lot of processing time and processing power. The image space method requires more computations. Each object is defined clearly, and the visibility of each object surface is also determined.
Sampling Techniques: Sampling techniques play a crucial role in ray tracing to accurately capture the behavior of light and produce high-quality images. In ray tracing, sampling refers to the process of selecting points within each pixel to determine the color contribution from the scene. Common sampling techniques include:
Regular Sampling: Dividing each pixel into a grid and sampling at regular intervals within each cell.
Random Sampling: Randomly selecting points within each pixel to reduce sampling artifacts and produce more natural-looking images.
Stratified Sampling: Dividing each pixel into subregions and sampling within each subregion to ensure more uniform coverage and reduce noise.
Importance Sampling: Biasing samples towards regions of the scene with higher contribution to the final image, such as light sources or areas with high reflectance.
…point on an object. This is the energy a display should generate to present a realistic image of the object. The energy comes not from a point on the surface but from a small area around the point.
The simplest form of shading considers only diffuse illumination:
Epd = Rp Id
where Epd is the energy coming from point P due to diffuse illumination, Id is the diffuse illumination falling on the entire scene, and Rp is the reflectance coefficient at P, which ranges from 0 to 1. Shading contributions from specific light sources will cause the shade of a surface to vary as its orientation with respect to the light sources changes, and will also include specular reflection effects. In the figure, a point P on a surface receives light arriving at an angle of incidence i, the angle between the surface normal Np and the ray to the light source. If the energy Ips arriving from the light source is reflected uniformly in all directions, called diffuse reflection, then
Eps = (Rp cos i) Ips
This equation shows the reduction in the intensity of a surface as it is tipped obliquely to the light source. If the angle of incidence i exceeds 90°, the surface is hidden from the light source and we must set Eps to zero.
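The diffuse term above can be sketched in code. A minimal example, assuming unit-length vectors N (surface normal) and L (direction to the light), with Rp the reflectance coefficient and Ips the energy arriving from the source:

```python
def diffuse_intensity(n, l, rp, ips):
    """Eps = Rp * cos(i) * Ips, with cos(i) = N.L for unit vectors.
    If the angle of incidence exceeds 90 degrees (N.L < 0), the point
    faces away from the light and Eps is set to zero."""
    cos_i = sum(a * b for a, b in zip(n, l))
    return rp * max(cos_i, 0.0) * ips

# Light directly overhead: cos(i) = 1, full contribution.
print(diffuse_intensity((0, 0, 1), (0, 0, 1), 0.5, 100.0))   # 50.0
# Light behind the surface: clamped to zero.
print(diffuse_intensity((0, 0, 1), (0, 0, -1), 0.5, 100.0))  # 0.0
```

The clamp `max(cos_i, 0.0)` is exactly the "set Eps to zero" rule for i > 90°.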
Constant Intensity Shading
A fast and straightforward method for rendering an object with polygon surfaces is constant intensity shading, also called flat shading. In this method, a single intensity is calculated for each polygon. All points over the surface of the polygon are then displayed with the same intensity value. Constant shading can be useful for quickly displaying the general appearance of a curved surface, as shown in fig:
Limitations of Depth Buffer
The depth buffer algorithm is not always practical because of the enormous size of the depth and intensity arrays. Generating an image with a raster of 500 x 500 pixels requires 250,000 storage locations for each array. Even though the frame buffer may provide memory for the intensity array, the depth array remains large. To reduce the amount of storage required, the image can be divided into many smaller images, and the depth buffer algorithm is applied to each in turn. For example, the original 500 x 500 raster can be divided into 100 rasters, each 50 x 50 pixels. Processing each small raster requires arrays of only 2,500 elements, but execution time grows because each polygon is processed many times. Subdivision of the screen does not always increase execution time; instead it can help reduce the work required to generate the image. This reduction arises because of coherence between small regions of the screen.
Wireframe methods in computer graphics are techniques used to represent 3D objects or scenes using simple lines or wireframes. These methods serve as the foundation for more complex rendering techniques and are often used in early stages of computer graphics development, or for specific purposes such as technical illustration or CAD (Computer-Aided Design). Here are some common wireframe methods:
Basic Wireframe: This method represents objects using only lines and vertices, with no faces or surfaces. Each line connects two vertices, outlining the edges of the object. It provides a very basic visual representation of the object's structure and is computationally simple.
Hidden Line Removal (HLR): In this method, hidden lines that are not visible from the viewpoint are removed. This technique helps to improve the clarity of the wireframe representation by eliminating unnecessary clutter. Algorithms such as the painter's algorithm or depth buffering are often used for hidden line removal.
Surface Modeling: While wireframes primarily focus on the structure of objects, surface modeling methods add information about the surfaces or faces of objects. This can include adding shading or color to the wireframe to give a more realistic appearance. Surface modeling can be achieved through techniques like polygon meshing, where polygons are used to approximate the surfaces of objects.
Curve Modeling: In addition to straight lines, wireframe methods can also incorporate curves to represent more complex shapes. Curves can be defined mathematically or through control points, allowing for greater flexibility in representing curved surfaces or intricate details.
Rendering Modes: Wireframe methods can utilize different rendering modes to enhance the visual representation of objects. For example, a "solid" rendering mode can fill in the faces between wireframe edges to create a solid appearance, while a "transparent" mode can make certain parts of the object see-through for better visualization of internal structures.
Dynamic Wireframes: These methods allow for the manipulation of wireframe models in real time. This is commonly used in applications like 3D modeling software or video games, where users can interactively modify the wireframe representation of objects.
Anti-Aliasing: Anti-aliasing is a technique used to reduce the appearance of jagged edges or "aliasing" artifacts in images produced by rendering techniques like ray tracing. Aliasing occurs when high-frequency detail in the scene exceeds the resolution of the final image, resulting in pixelation or stair-stepping along edges. Anti-aliasing methods in ray tracing involve blending or filtering neighboring pixel values to smooth out these jagged edges and produce a more visually pleasing result. Common anti-aliasing techniques include:
Supersampling: Rendering the scene at a higher resolution and downsampling to the desired output resolution to reduce aliasing artifacts.
Post-processing Filters: Applying filters such as Gaussian blur or edge detection to the rendered image to smooth out jagged edges.
Multi-Sampling: Sampling multiple points within each pixel and averaging their colors to produce smoother edges and reduce aliasing.
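Supersampling can be sketched in a few lines. This is a minimal version assuming a hypothetical `render(x, y)` function that returns the intensity at one floating-point sample position; each output pixel averages an n x n grid of subsamples:

```python
def supersample(render, width, height, n=2):
    """Average an n x n grid of regularly spaced subsamples per pixel.
    `render(x, y)` is assumed to map scene coordinates to intensity."""
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            total = 0.0
            for sy in range(n):
                for sx in range(n):
                    # Regular grid of sample points inside the pixel.
                    total += render(px + (sx + 0.5) / n, py + (sy + 0.5) / n)
            row.append(total / (n * n))
        image.append(row)
    return image

# A hard vertical edge at x = 1.5: the middle pixel straddles it and
# gets a blended value instead of a jagged 0-or-1 decision.
edge = lambda x, y: 1.0 if x < 1.5 else 0.0
print(supersample(edge, 3, 1, n=2))  # [[1.0, 0.5, 0.0]]
```

The 0.5 in the middle pixel is the smoothing effect the text describes: partial coverage becomes an intermediate intensity.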
Differentiate between Object space and Image space method
Object Space
1. It is object based. It concentrates on the geometrical relations among the objects in the scene.
2. Here surface visibility is determined.
3. It is performed at the precision with which each object is defined; no resolution is considered.
4. Calculations are not based on the resolution of the display, so a change of object can be easily adjusted.
5. These were developed for vector graphics systems.
6. Object-based algorithms operate on continuous object data.
7. Vector displays used for the object method have a large address space.
8. Object precision is used for applications where speed is required.
9. It requires a lot of calculations if the image is to be enlarged.
10. If the number of objects in the scene increases, computation time also increases.
Image Space
1. It is a pixel-based method. It is concerned with the final image: what is visible within each raster pixel.
2. Here line visibility or point visibility is determined.
3. It is performed using the resolution of the display device.
4. Calculations are resolution based, so a change is difficult to adjust.
5. These were developed for raster devices.
6. These operate on discrete (pixel) data.
7. Raster systems used for image space methods have limited address space.
8. These are suitable for applications where accuracy is required.
9. The image can be enlarged without losing accuracy.
10. In this method, complexity increases with the complexity of the visible parts.
Similarity of object and Image space method
In both methods, sorting is used: a depth comparison of individual lines, surfaces and objects according to their distances from the view plane.
In general, flat shading of polygon facets provides an accurate rendering for an object if all of the following assumptions are valid:
The object is a polyhedron and is not an approximation of an object with a curved surface.
All light sources illuminating the object are sufficiently far from the surface so that N.L and the attenuation function are constant over the surface (where N is the unit normal to the surface and L is the unit direction vector to the point light source from a position on the surface).
The viewing position is sufficiently far from the surface so that V.R is constant over the surface (where V is the unit vector pointing to the viewer from the surface position and R represents a unit vector in the direction of ideal specular reflection).
Painter Algorithm
It comes under the category of list priority algorithms. It is also called the depth-sort algorithm. In this algorithm, an ordering of the visibility of objects is done: if objects are rendered in a particular order, a correct picture results. Objects are arranged in increasing order of z coordinate, and rendering is done in order of z coordinate. Nearer objects will obscure farther ones: pixels of the nearer object will overwrite pixels of farther objects. If the z values of two objects do not overlap, we can determine the correct order from the z value, as shown in fig (a). If objects overlap in z as in fig (b), the correct order can be maintained by splitting the objects.
The depth-sort or painter algorithm was developed by Newell and Sancha. It is called the painter algorithm because the painting of the frame buffer is done in decreasing order of distance from the view plane: the polygons at greater distance are painted first. The concept takes its cue from a painter or artist. When the painter makes a painting, first of all he paints the entire canvas with the background color; then more distant objects like mountains and trees are added; then the near, foreground objects are added to the picture. A similar approach is used here: we sort surfaces according to z values, and the z values are stored in the refresh buffer.
Steps performed in depth sort:
Sort all polygons according to z coordinate.
Find ambiguities, if any: find whether z coordinates overlap, and split polygons if necessary.
Scan convert each polygon in increasing order of z coordinate.
Fractal geometry in computer graphics refers to the application of fractal concepts and algorithms to create and manipulate digital images. Fractals are geometric shapes or structures that exhibit self-similarity at different scales. In computer graphics, fractals are often used to generate complex and detailed images that mimic natural phenomena or create visually appealing patterns. Here's an explanation of how fractal geometry is applied in computer graphics:
Generation of Fractal Patterns: Fractals can be generated using mathematical equations or algorithms. These equations typically involve recursive or iterative processes that generate self-similar patterns at various levels of detail. For example, the Mandelbrot set is a famous fractal generated by iterating a simple mathematical formula.
Rendering Fractal Images: Once a fractal pattern is generated, it needs to be rendered into a digital image. This involves converting the mathematical description of the fractal into pixel values that can be displayed on a computer screen. Rendering techniques may vary depending on the complexity of the fractal and the desired level of detail. For instance, techniques like ray tracing or iterated function systems (IFS) are commonly used for rendering fractals.
Fractal Landscapes and Terrains: Fractal geometry is often employed to generate realistic-looking landscapes and terrains in computer graphics. By using fractal algorithms, it is possible to create natural-looking features such as mountains, valleys, and coastlines with intricate detail. Fractal terrains can be generated procedurally, allowing for the creation of vast and varied landscapes in virtual environments.
Texture Synthesis: Fractal patterns can be used to generate textures for surfaces in computer-generated imagery. By applying fractal algorithms, textures with complex and irregular patterns can be created, mimicking the appearance of natural materials like rocks, clouds, or foliage. Fractal-based textures are commonly used in 3D rendering to add realism and detail to virtual environments.
Fractal Compression: Fractals can also be used for image compression purposes. Fractal compression algorithms exploit the self-similar nature of fractal images to achieve high compression ratios while preserving image quality. Fractal compression is particularly effective for images with repetitive patterns or structures, such as textures or natural landscapes.
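The Mandelbrot iteration mentioned above can be sketched in a few lines. This is a minimal escape-time version; the bailout radius 2 and the iteration cap are the usual conventions:

```python
def mandelbrot_iterations(c, max_iter=50):
    """Iterate z -> z*z + c starting from z = 0; return how many
    steps stay within |z| <= 2 (max_iter means 'did not escape',
    i.e. c is treated as inside the set)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

print(mandelbrot_iterations(0j))      # 50: c = 0 never escapes
print(mandelbrot_iterations(1 + 1j))  # 2: escapes almost immediately
```

An image is produced by mapping each pixel to a complex c and coloring it by the returned iteration count.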
Painter Algorithm
Step 1: Start algorithm.
Step 2: Sort all polygons by z value, keeping the largest value of z first.
Step 3: Scan convert polygons in this order.
The following tests are applied:
Is A behind and non-overlapping B in the dimension of z, as shown in fig (a)?
Is A behind B in z, with no overlap in x or y, as shown in fig (b)?
Is A behind B in z and totally outside B with respect to the view plane, as shown in fig (c)?
Is A behind B in z and B totally inside A with respect to the view plane, as shown in fig (d)?
The success of any test with a single overlapping polygon allows F to be painted.
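The sort-then-paint idea behind these steps can be sketched as follows; `depth` and the toy (name, z) polygons are hypothetical stand-ins for a real polygon's farthest z value:

```python
def painter_order(polygons, depth):
    """Return polygons sorted back to front (largest depth first),
    so that later (nearer) polygons overwrite earlier ones when
    scan converted into the frame buffer."""
    return sorted(polygons, key=depth, reverse=True)

# Each toy polygon is just a (name, z) pair here.
polys = [("near", 1.0), ("far", 9.0), ("mid", 4.0)]
for name, _ in painter_order(polys, depth=lambda p: p[1]):
    print(name)  # far, mid, near
```

The ambiguity tests in the text handle the cases this simple sort cannot: polygons whose z extents overlap and may need splitting.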
Considerations for selecting or designing hidden surface algorithms: the following three considerations are taken:
Sorting
Coherence
Machine
Sorting: All surfaces are sorted into two classes, i.e., visible and invisible, and pixels are colored accordingly. Several sorting algorithms are available, i.e.:
Bubble sort
Shell sort
Quick sort
Tree sort
Radix sort
Different sorting algorithms are applied to different hidden surface algorithms. Sorting of objects is done using the x, y and z coordinates; mostly the z coordinate is used. The efficiency of the sorting algorithm affects the hidden surface removal algorithm. For sorting complex scenes or hundreds of polygons, complex sorts are used, i.e., quick sort, tree sort, radix sort. For simple objects, selection, insertion or bubble sort is used.
Gouraud Shading
This intensity-interpolation scheme, developed by Gouraud and usually referred to as Gouraud shading, renders a polygon surface by linearly interpolating intensity values across the surface. Intensity values for each polygon are matched with the values of adjacent polygons along the common edges, thus eliminating the intensity discontinuities that can occur in flat shading.
Each polygon surface is rendered with Gouraud shading by performing the following calculations:
Determine the average unit normal vector at each polygon vertex.
Apply an illumination model to each vertex to determine the vertex intensity.
Linearly interpolate the vertex intensities over the surface of the polygon.
At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that vertex, as shown in fig:
Thus, for any vertex position V, we acquire the unit vertex normal with the calculation
NV = (sum over k of Nk) / |sum over k of Nk|
Once we have the vertex normals, we can determine the intensity at the vertices from a lighting model.
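Along one scan line, Gouraud shading reduces to linear interpolation of the intensities already computed at the span's edges; a minimal sketch:

```python
def gouraud_scanline(i_left, i_right, x_left, x_right):
    """Linearly interpolate intensity across one scan-line span,
    given the intensities at the left and right edge crossings."""
    span = []
    for x in range(x_left, x_right + 1):
        t = (x - x_left) / (x_right - x_left) if x_right != x_left else 0.0
        span.append(i_left + t * (i_right - i_left))
    return span

print(gouraud_scanline(0.0, 8.0, 0, 4))  # [0.0, 2.0, 4.0, 6.0, 8.0]
```

In a full renderer the edge intensities themselves are interpolated vertically between vertex intensities; this shows only the horizontal step.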
Phong Shading
A more accurate method for rendering a polygon surface is to interpolate the normal vector and then apply the illumination model at each surface point. This method, developed by Phong Bui Tuong, is called Phong shading or normal-vector interpolation shading. It displays more realistic highlights on a surface and greatly reduces the Mach-band effect.
A polygon surface is rendered using Phong shading by carrying out the following steps:
Determine the average unit normal vector at each polygon vertex.
Linearly interpolate the vertex normals over the surface of the polygon.
Apply an illumination model along each scan line to calculate projected pixel intensities for the surface points.
The surface normal is interpolated along a polygon edge between two vertices, as shown in fig:
Incremental methods are used to evaluate normals between scan lines and along each scan line. At each pixel position along a scan line, the illumination model is applied to determine the surface intensity at that point. Intensity calculations using an approximated normal vector at each point along the scan line produce more accurate results than the direct interpolation of intensities, as in Gouraud shading. The trade-off, however, is that Phong shading requires considerably more calculations.
Coherence
It is used to take advantage of the constant value of the surface of the scene. It is based on how much regularity exists in the scene. When we move from one polygon of an object to another polygon of the same object, the color and shading remain unchanged.
Types of Coherence
1. Edge coherence: The visibility of an edge changes only when it crosses another edge or penetrates a visible edge.
2. Object coherence: Each object is considered separate from the others. In object coherence, comparison is done using an object instead of an edge or vertex. If object A is farther from object B, then there is no need to compare their edges and faces.
3. Face coherence: In this, faces or polygons are generally small compared with the size of the image.
4. Area coherence: It is used for a group of pixels covered by the same visible face.
5. Depth coherence: The locations of various polygons are separated on the basis of depth. Once the depth of the surface at one point is calculated, the depth of points on the rest of the surface can often be determined by a simple difference equation.
6. Scan line coherence: The object is scanned using one scan line, then using the second scan line; the intercepts found on the first line change only slightly on the next.
7. Frame coherence: It is used for animated objects, when there is little change in the image from one frame to the next.
8. Implied edge coherence: If one face penetrates another, the line of intersection can be determined from two points of intersection.
Texture mapping is a fundamental technique in computer graphics used to add detail, surface texture, or color variation to 3D models. It involves applying a 2D image, called a texture, onto the surface of a 3D object to enhance its visual appearance. Texture mapping is widely used in applications such as video games, visual effects, architectural visualization, and product design. Here's how texture mapping works:
Texture Coordinates: Each vertex of a 3D model has associated texture coordinates that define how the texture image is mapped onto the surface. Texture coordinates are usually defined in a 2D space, typically ranging from 0 to 1, where (0,0) represents the bottom-left corner of the texture image and (1,1) represents the top-right corner.
Texture Mapping Process: During rendering, when a 3D model is displayed on the screen, the graphics pipeline interpolates texture coordinates across the surface of the polygons forming the model. At each pixel, the texture coordinates are used to sample the corresponding texel (texture element) from the texture image.
Texture Filtering: Texture filtering techniques are applied to smooth the transition between texels, especially when the texture is mapped onto surfaces at oblique angles or when the texture resolution is lower than the screen resolution. Common filtering methods include nearest-neighbor interpolation, bilinear interpolation, and trilinear interpolation.
Texture Mapping Modes: Texture mapping supports different modes to control how textures are applied to surfaces. These modes include:
Repeat: The texture image is repeated or tiled across the surface of the object.
Clamp: The texture coordinates are clamped to the range [0,1], preventing the texture from repeating beyond the original texture image boundaries.
Mirror: The texture image is mirrored or flipped when the texture coordinates exceed the range [0,1].
Wrap: Similar to the repeat mode, but with an additional wrap-around effect where the texture image is wrapped around the object's surface.
Texture Types: Textures can vary in type and content, including:
Color textures: Used to add color variations, patterns, or images to surfaces.
Normal maps: Used to simulate surface details, such as bumps or wrinkles, by encoding surface normals in a texture.
Specular maps: Used to control the intensity of specular highlights on surfaces, affecting their shininess or reflectivity.
Displacement maps: Used to deform the geometry of surfaces, altering their shape based on the grayscale values encoded in the texture.
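The repeat, clamp and mirror addressing modes described above are just different ways of remapping an out-of-range texture coordinate; a minimal sketch:

```python
def wrap_coord(u, mode):
    """Remap a texture coordinate u according to the addressing mode."""
    if mode == "repeat":
        return u % 1.0                   # keep only the fractional part
    if mode == "clamp":
        return min(max(u, 0.0), 1.0)     # pin to the [0,1] range
    if mode == "mirror":
        u = u % 2.0                      # period 2: forward, then back
        return 2.0 - u if u > 1.0 else u
    raise ValueError(mode)

print(wrap_coord(1.25, "repeat"))  # 0.25
print(wrap_coord(1.25, "clamp"))   # 1.0
print(wrap_coord(1.25, "mirror"))  # 0.75
```

The same coordinate 1.25 tiles back to 0.25, sticks to the edge at 1.0, or reflects to 0.75, which is exactly the visual difference between the three modes.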
A bump texture, also known as a bump map or height map, is a type of texture used in computer graphics to simulate surface details such as bumps, wrinkles, or grooves on 3D models without actually altering the geometry of the model itself. It achieves this effect by encoding surface height or normal information in a grayscale image, which is then applied to the model during rendering. Here's how a bump texture works:
Grayscale Image: A bump texture is typically a grayscale image where different shades of gray represent variations in surface height. Lighter areas correspond to higher elevations, while darker areas correspond to lower elevations. The grayscale values in the image encode the amount and direction of surface displacement.
Normal Mapping: During rendering, the grayscale values of the bump texture are used to perturb the surface normals of the 3D model. By modifying the direction of the surface normals at each point on the model's surface according to the grayscale values of the bump texture, the appearance of surface details such as bumps and wrinkles is simulated.
Lighting Interaction: The altered surface normals resulting from the bump mapping affect how light interacts with the surface of the model. Lighter areas of the bump texture appear to protrude from the surface, causing them to catch more light and create highlights, while darker areas appear recessed, resulting in shadows and shading effects.
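The perturbation step can be sketched with finite differences: the slope of the height map in x and y tilts an unperturbed normal. A minimal 2D-heightfield version; the `scale` factor is an assumed free strength parameter, and the base normal is taken as the flat (0,0,1):

```python
def bump_normal(height, x, y, scale=1.0):
    """Perturb a flat normal (0,0,1) by the height-map gradient,
    estimated with central differences on a 2D grid of heights."""
    dhdx = (height[y][x + 1] - height[y][x - 1]) / 2.0
    dhdy = (height[y + 1][x] - height[y - 1][x]) / 2.0
    n = (-scale * dhdx, -scale * dhdy, 1.0)
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)

# A ramp rising in x: the normal tilts back against the slope.
ramp = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
print(bump_normal(ramp, 1, 1))  # roughly (-0.707, 0.0, 0.707)
```

Feeding this perturbed normal into the lighting model is what makes lighter (higher) regions catch more light, as described above.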
Scan Line Algorithm
It is an image space algorithm. It processes one line at a time rather than one pixel at a time. It uses the concept of area coherence. This algorithm records an edge list and an active edge list, so accurate bookkeeping is necessary. The edge list or edge table contains the coordinates of the two endpoints of each edge. The Active Edge List (AEL) contains the edges a given scan line intersects during its sweep. The AEL should be sorted in increasing order of x, and it is dynamic, growing and shrinking as the sweep proceeds.
The following figures show edges and active edge lists. The active edge list for scan line AC1 contains edges e1, e2, e5, e6. The active edge list for scan line AC2 contains e5, e6, e1.
The scan line method can deal with multiple surfaces. As each scan line is processed, the line will intersect many surfaces, and the intersections determine which surface is visible: a depth calculation is done for each surface, and the surface nearest the view plane is taken. When the visibility of a surface is determined, its intensity value is entered into the refresh buffer.
Algorithm
Step 1: Start algorithm.
Step 2: Initialize the desired data structures:
Create a polygon table holding color, edge pointers and coefficients.
Establish an edge table containing information about the endpoints of edges, a pointer to the polygon, and the inverse slope.
Create an active edge list. This will be sorted in increasing order of x.
Create a flag F. It will have two values, either on or off.
Step 3: Perform the following steps for all scan lines:
Enter values into the active edge list (AEL) in sorted order using the y value.
Scan using the background color until a flag F is on.
When one polygon flag is on, say for surface S1, enter its color intensity I1 into the refresh buffer.
When two or more surface flags are on, sort the surfaces according to depth and use the intensity value Sn for the nth surface; this surface will have the least z depth value.
Use the concept of coherence for the remaining planes.
Step 4: Stop algorithm.
Algorithms used for hidden line surface detection:
Back Face Removal Algorithm
Z-Buffer Algorithm
Painter Algorithm
Scan Line Algorithm
Subdivision Algorithm
Floating Horizon Algorithm
Back Face Removal Algorithm
It is used to plot only the surfaces which face the camera; objects on the back side are not visible. This method will remove 50% of the polygons from the scene if parallel projection is used. If perspective projection is used, then more than 50% of the invisible area will be removed: the nearer the object is to the center of projection, the more back-facing polygons will be removed.
It applies to individual objects; it does not consider the interaction between various objects. Many polygons are obscured by front faces although they are closer to the viewer, so for removing such faces the back face removal algorithm is used.
When the projection is taken, any projector ray from the center of projection through the viewing screen pierces the object at two points: one is the visible front surface, and the other is the invisible back surface.
This algorithm acts as a preprocessing step for other algorithms. The back face algorithm can be represented geometrically. Each polygon has several vertices, all numbered in clockwise order. The normal N1 is generated as the cross product of any two successive edge vectors; N1 represents a vector perpendicular to the face, pointing outward from the polyhedron surface:
N1 = (v2 - v1) x (v3 - v2)
If N1 . P >= 0, the surface is visible;
if N1 . P < 0, it is invisible.
Advantage
It is a simple and straightforward method.
It reduces the size of the database, because there is no need to store all surfaces: only the visible surfaces are stored.
Back Face Removal Algorithm
Repeat for all polygons in the scene:
Number all vertices of the polygon in clockwise direction, i.e. v1, v2, v3, ..., vz.
Calculate the normal vector, i.e. N1 = (v2 - v1) x (v3 - v2).
Consider the projector P; it is the projection from any vertex.
Calculate the dot product Dot = N1 . P.
Test and plot whether the surface is visible or not:
If Dot >= 0 then the surface is visible,
else it is not visible.
An environment map in computer graphics is a technique used to simulate the reflection and illumination of a 3D scene based on its surroundings. It is a texture or image that represents the environment surrounding the scene, typically in all directions. Environment maps are commonly used to create realistic reflections, lighting, and backgrounds in computer-generated imagery (CGI). Here's how environment maps work and how they are utilized:
Representation of Environment: An environment map is essentially a panoramic image that captures the entire surrounding environment from a single viewpoint, often represented as a sphere or cube. This image can be obtained through various means, such as capturing photographs of the real-world environment, digitally painting a scene, or generating synthetic environments using computer graphics software.
Types of Environment Maps:
Cube Map: A cube map is composed of six square images, each representing one face of an imaginary cube surrounding the scene. These images are typically arranged in a specific order (e.g., right, left, top, bottom, front, back) to form a seamless panoramic view.
Spherical Map: A spherical map is a single panoramic image wrapped around a sphere, simulating a 360-degree view of the environment. Spherical maps are often used when capturing real-world environments or creating skyboxes for outdoor scenes.
Reflection Mapping: One common application of environment maps is simulating reflections on shiny or reflective surfaces within a 3D scene. When rendering a reflective object, such as a mirror or metallic surface, the environment map is sampled based on the reflected ray direction to determine the color and intensity of the reflected light. This creates the illusion of the object reflecting its surroundings realistically.
Image-Based Lighting (IBL): Environment maps are also used for image-based lighting, where the illumination of the scene is computed based on the lighting information contained in the environment map. By sampling the environment map in multiple directions around each point in the scene, the incoming light intensity and direction can be determined, allowing for realistic lighting effects such as diffuse inter-reflection and ambient occlusion.
Background Rendering: Environment maps can be used as a backdrop or background for the scene, providing context and realism to the rendered image.
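Reflection mapping hinges on the reflected-ray direction, R = D - 2(D.N)N, for an incoming direction D and unit normal N. A minimal sketch of just that direction computation (the environment-map lookup itself is omitted):

```python
def reflect(d, n):
    """Reflect an incoming direction d about the unit normal n:
    R = D - 2 (D.N) N. The result is the direction an environment
    map would be sampled with to find the reflected color."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

# A ray going straight down onto a floor with normal (0, 0, 1)
# bounces straight back up.
print(reflect((0, 0, -1), (0, 0, 1)))  # (0, 0, 1)
```

For a cube map, the dominant component of R selects the face and the remaining two components index into that face's image.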