
UNIT 1

Introduction

*Introduction of Computer Graphics

Computer graphics is the technology concerned with processing, transforming and presenting information in a visual form. Its role is now pervasive: in everyday life, computer graphics has become a common element of user interfaces. Computer graphics is the creation of pictures with the help of a computer. The end product may be a business graph, a drawing or an engineering model.

In computer graphics, two- or three-dimensional pictures can be created for use in research and other applications. Over time, many hardware devices and algorithms have been developed to improve the speed of picture generation. The field includes the creation and storage of models and images of objects; these models are used in various fields such as engineering, mathematics and so on.

*Definition of Computer Graphics:

It is the use of computers to create and manipulate pictures on a display device. It comprises software techniques to create, store, modify and represent pictures.

Graphics is defined as any sketch, drawing or special network that pictorially represents some meaningful information. Computer graphics is used where a set of images needs to be manipulated, or where an image is created in the form of pixels and drawn on the computer. Computer graphics is used in digital photography, film, entertainment, electronic gadgets and many other core technologies. It is a vast subject and area in the field of computer science. Computer graphics can be used in UI design, rendering, geometric modelling, animation and much more. Computer graphics is commonly abbreviated as CG. Several tools are used for the implementation of computer graphics: the most basic is the <graphics.h> header file in Turbo-C, Unity can be used for advanced work, and OpenGL can also be used for its implementation. The term was coined around 1960 by researchers Verne Hudson and William Fetter of Boeing.

Computer Graphics refers to several things:


• The representation and manipulation of images or data in a graphical manner.
• The various technologies required for the creation and manipulation of images.
• The digital synthesis and manipulation of visual content.
Types of Computer Graphics
• Raster Graphics: In raster graphics, an image is drawn using pixels. It is also known as a bitmap image, in which the image is divided into a grid of small pixels; basically, a bitmap is a large number of pixels put together.
• Vector Graphics: In vector graphics, mathematical formulae are used to draw different types of shapes, lines, objects and so on.

*Applications of Computer Graphics


Some applications of computer graphics are mentioned below-

• Graphical User Interface (GUI): A way of interacting with a computer using icons, menus and other visual graphics, through which the user can interact easily.
• Art: Computer graphics provides a new way of making designs. Many artists and designers use Illustrator, CorelDRAW, Photoshop, Adobe Muse and other applications to create new designs.
• Entertainment: Computer graphics allows the user to make animated movies and games. Computer graphics is used to create scenes, and it is also used for special effects and animations.
• Presentations: Computer graphics is used to make charts, bar diagrams and other images for presentation purposes; with a graphical presentation, the audience can easily understand the points.
• Engineering Drawings: Computer graphics also provides the flexibility to make 3D models, house circuits, engineering drawings, etc., which is helpful for us.
• Education and Training: Computer graphics are also used to provide training to students with simulators. The students can
learn about the machines without physically trying them.
• Medical Imaging: MRIs, CT scans, and other internal scans are possible because of computer graphics.
• Flight Simulators: Computer graphics is used to train aircraft pilots. Pilots spend much of their training time in a flight simulator on the ground instead of in real airplanes.
• Printing Technology: Computer graphics are used in textile designing and flex printing.
• Typography: The use of character images in printing, replacing the rough forms of the past.
• Satellite Imaging: Computer graphics are used to forecast the movement of the cloud and to predict the weather.
• Cartography: Computer graphics are used in map drawing.
• CAD/CAM: CAD/CAM is also known as Computer-aided design and computer-aided manufacturing. CAD/CAM is used to
design and build prototypes, finished products, and manufacturing processes.

Advantages of Computer graphics


Some important benefits of Computer graphics are:
• Increased productivity.
• Computer graphics gives us tools for creating pictures of solid objects as well as of theoretical, engineered objects.
• Computer graphics can also display moving images (animation).
• The computer can store complex drawings and display complex pictures.
• Sound cards, which let computers produce sound effects, led to further uses of graphics.

Disadvantages of Computer graphics:


Some problems with computer graphics are:
• Hardware characteristics and cost.
• Technical issues.
• Coupling issues (display-to-simulation).
• Defining motion.
• Structure of drawings (making the structure explicit).
• Hidden line removal.
• Program instrumentation and visualization.

*History of Computer Graphics

Computer Graphics (CG) was first developed as a visualization tool. Computer graphics was introduced primarily for scientists and engineers in government and corporate research centers, e.g., Bell Labs and Boeing, in the 1950s. The tools were then developed further at universities in the 1960s and 70s at places such as Ohio State University, MIT, the University of Utah, Cornell, North Carolina, and the New York Institute of Technology. The term "computer graphics" was coined by researchers Verne Hudson and William Fetter of Boeing, and it is often abbreviated as CG.
The early development that took place in academic centers continued at research centers such as the famous Xerox PARC in the 1970s. These achievements broke first into broadcast video graphics and then into major motion pictures in the late 1970s and early 1980s. Computer graphics research continues around the world today. Companies such as Industrial Light & Magic, founded by George Lucas, are regularly refining the cutting edge of computer graphics technology to present the world with a new manufactured digital reality.

We can understand it by the following steps:


1940-1941: The first directly digital computer-generated graphics that we would associate today with actual CG. The very first radiosity image was created at MIT in the 1940s.

1946: The images were first presented at the 1946 national technical conference of the Illuminating Engineering Society of
North America.

1948: The images were published in the book: Lighting Design by Moon and D. E. Spencer. 1948.

1950: John Whitney Sr. invents his computer-assisted mechanisms to create some of his graphic artwork and short films.

1951: Vectorscope computer graphics display on the computer at MIT.


The General Motors Research Laboratory also begins the study of computer-aided graphical design applications.

1955: Sage system uses the first light pen as an input device at MIT Lab by Bert Sutherland.

1956: Lawrence Livermore Labs coupled a graphics display with an IBM 704 and a film recorder for color images.
Bertram Herzog used analog computers to create CRT graphics of the behavior of military vehicles at the University of
Michigan computing center.

1957: The first image-processed photo was produced at the National Bureau of Standards.
The IBM 740 created a sequence of points on a CRT monitor to represent lines and shapes.
1958: Steven Coons, Ivan Sutherland, and Timothy Johnson started working with the TX-2 computer system to manipulate
the drawn pictures.

1959: The first commercial film recorder produced in San Diego, CA.
Don Hart and Ed Jacks invented the first computer-aided drawing system at General Motors Research Laboratory and IBM.

1960: William Fetter first coined the term "computer graphics" to describe his cockpit drawings.
John Whitney Sr. invents motion graphics in LA.

1962: In MIT Lincoln Laboratory Ivan Sutherland produced a man-machine graphical communication system.

1963: Charles Csuri used an analog computer to transform a drawing.
Edgar Horwood introduced a computer graphics mapping system used by the U.S. Department of Housing and Urban Development.

1965: The IBM 2250, the first commercially available graphics computer, is released.

1966: Ralph Baer developed the first consumer computer graphics game, “Odyssey.”

1968: Tony Pritchett made the first computer animation “FLEXIPEDE” in the UK.

1972: Nolan Bushnell, “the father of Electronic games,” developed PONG game.

1973: The concept of Z-buffer algorithm and texture mapping were developed by Edwin Catmull.

1974: The Phong shading method is developed by Phong Bui-Toung.

1975: Dr. Edwin Catmull introduced the Tween animation system.

1976: The first film appearance of 3D computer graphics was created by Gary Demos and John Whitney Jr. at Triple-I.

1978: Jim Blinn produced the first series of animations for The Mechanical Universe. Jim Blinn also published the technique of bump mapping.

1979: Ray tracing created at Bell Laboratory & Cornell University.

1980: The first digital computer was used in computer graphics in the Digital Equipment Corporation(DEC).

1981: Computer graphics for the IMAX film format were made by Nelson Max at Lawrence Livermore National Laboratory. The Donkey Kong video game was introduced by Nintendo.

1982: The first broad use of 3D graphics animation appeared in a Disney feature film.
AutoCAD 1.0 was launched; it supported only wireframe representation.

1985: Medical imaging software combined with Voxel technology.

1987: Video graphics array (VGA) standard was introduced.

1989: The Super Video Graphics Array (SVGA) standard was introduced. Tim Berners-Lee developed the first ever website, which had the original URL (Uniform Resource Locator).

1993: Mosaic, the first widely used web browser, was released by UIUC for general usage; its codename was "Mozilla." The first public call was made by cell phone.

1994: Netscape founded by developers of the Mosaic.

1995: The first fully CGI (computer-generated imagery) feature film was released. MS Internet Explorer 1.0 was released.

2000: SketchUp, the first web-based CAD system, was released.


2006: Google acquires Sketchup.

2009: The state of the art of computer graphics, as of 2009, was summarized in a short video.

2013: Now, it is possible to create graphics on a home computer.

2015: Big data is being used to create animations.

2018: We can now create "realistic" graphics on mobile phones, and a completely CGI-based human face can be rendered in real time.

* Computer Aided Design (CAD) :


Computer Aided Design (CAD) is the use of computers for designing models of physical products, which means computers are used to aid in creating the design, modifying the design and analyzing the designing activities. Computer Aided Design is also known as Computer Aided Drafting. The purpose of CAD is making 2D technical drawings and 3D models. Put simply, CAD represents your part geometry to the computer. Computer Aided Design (CAD) software is mostly used by engineers.
Examples of CAD software include AutoCAD, Autodesk Inventor, CATIA, SolidWorks, etc.
Computer + Design Software = CAD

*Computer Aided Manufacturing (CAM) :


Computer Aided Manufacturing (CAM) is the use of computer software to control machine tools in the manufacturing of modules. CAM transforms engineering designs into end products. CAM differs from conventional manufacturing in that it implements automation in the manufacturing process. Computer Aided Manufacturing is also known as Computer Aided Modeling or Machining. The purpose of CAM is using 3D models to design machining processes. Put simply, CAM converts the geometry into instructions for the machine tool. Without Computer Aided Design (CAD), Computer Aided Manufacturing (CAM) has no meaning. Computer Aided Manufacturing (CAM) software is mostly used by trained machinists.
Examples of CAM software include WorkNC, Siemens NX, PowerMILL, SolidCAM, etc.
Manufacturing Tools + Computer = CAM
*Difference Between CAD and CAM

1. CAD refers to Computer Aided Design, whereas CAM refers to Computer Aided Manufacturing.
2. CAD is the use of computers for designing: computers aid in creating, modifying and analyzing the design. CAM is the use of computer software to control machine tools in the manufacturing of modules; it transforms engineering designs into end products.
3. Computer Aided Design is also known as Computer Aided Drafting, while Computer Aided Manufacturing is also known as Computer Aided Modeling.
4. The purpose of CAD is making 2D technical drawings and 3D models; the purpose of CAM is using 3D models to design machining processes.
5. CAD makes drafting much easier, more accurate and faster, and allows 3D models that would be impossible without computers; CAM achieves automation of the machining process.
6. Put simply, CAD represents your part geometry to the computer, whereas CAM converts that geometry into machine-tool instructions.
7. For Computer Aided Design, only a computer and CAD software are required for a technician to create a design; for Computer Aided Manufacturing, a computer and usually a CAM software package are required, along with a CAM machine for the manufacturing process.
8. Computer Aided Design (CAD) software is mostly used by engineers, whereas Computer Aided Manufacturing (CAM) software is mostly used by trained machinists.
9. Examples of CAD software include AutoCAD, Autodesk Inventor, CATIA and SolidWorks; examples of CAM software include WorkNC, Siemens NX, PowerMILL and SolidCAM.

UNIT 2

Graphics Hardware

Graphics hardware is computer hardware that generates computer graphics and allows them to be shown on a display,
usually using a graphics card (video card) in combination with a device driver to create the images on the screen.

*Input hardware

Input hardware consists of devices that translate data into a form the computer can process. Input hardware includes the keyboard, source data-entry devices such as scanners, pointing devices such as the mouse, and others.

*Mouse

The mouse is the most popular pointing device. It is a very well-known cursor-control device: a small palm-sized box with a round ball at its base, which senses the movement of the mouse and sends corresponding signals to the CPU when the mouse buttons are pressed.
Generally, it has two buttons, called the left and the right button, with a wheel present between the buttons. A mouse can be used to control the position of the cursor on the screen, but it cannot be used to enter text into the computer.
*Mechanical mouse

A mechanical mouse is a computer mouse containing a metal or rubber ball on its underside. When the ball is rolled in any direction, sensors inside the mouse detect the motion and move the on-screen mouse pointer accordingly. Today, this mouse has largely been replaced by the optical mouse. A mechanical mouse is susceptible to dust particles and other debris getting onto the ball and preventing the mouse from working.

*Optical Mouse

The optical mouse is a computer mouse, first introduced by Microsoft on April 19, 1999, that uses LEDs (light-emitting diodes) or a laser to track or detect movement. These differing technologies are identified by examining the bottom of the mouse: the mechanical mouse has a ball, whereas the optical mouse has a light instead.

How does an optical mouse work?

An optical mouse also has a tiny low-resolution camera that takes a thousand or more pictures every second. In the camera,
the CMOS (complementary metal-oxide semiconductor) sensor sends a signal to a DSP (digital signal processor). The DSP
can then analyze each picture for pattern and light changes, and then moves the mouse cursor on your screen.

Optical mouse disadvantages

Optical mice don't work as well on reflective surfaces such as glass tables, and may require a mouse pad to work properly. In comparison to mechanical mice, however, optical mice are a much better solution. If the optical mouse is wireless, it requires batteries.

Why is the mouse light red?

Although not all optical mice use red, it is the most common LED color because red diodes are often the cheapest and because photodetectors are more sensitive to red light.

*Keyboard

The keyboard is the most common and most popular input device, used to input data into the computer. The layout of the keyboard is like that of a traditional typewriter, although some additional keys are provided for performing additional functions.
Keyboards are of two sizes 84 keys or 101/102 keys, but now keyboards with 104 keys or 108 keys are also available for
Windows and Internet.
The keys on the keyboard are as follows −

1. Typing Keys: These include the letter keys (A-Z) and digit keys (0-9), which generally follow the same layout as that of typewriters.

2. Numeric Keypad: It is used to enter numeric data or for cursor movement. Generally, it consists of a set of 17 keys that are laid out in the same configuration used by most adding machines and calculators.

3. Function Keys: The twelve function keys are present on the keyboard, arranged in a row at the top of the keyboard. Each function key has a unique meaning and is used for some specific purpose.

4. Control Keys: These keys provide cursor and screen control. They include four directional arrow keys as well as Home, End, Insert, Delete, Page Up, Page Down, Control (Ctrl), Alternate (Alt) and Escape (Esc).

5. Special Purpose Keys: The keyboard also contains some special purpose keys such as Enter, Shift, Caps Lock, Num Lock, Space bar, Tab, and Print Screen.

*Light Pen

Light pen is a pointing device similar to a pen. It is used to select a displayed menu item or draw pictures on the monitor
screen. It consists of a photocell and an optical system placed in a small tube.
When the tip of a light pen is moved over the monitor screen and the pen button is pressed, its photocell sensing element
detects the screen location and sends the corresponding signal to the CPU.
*Scanner

Scanner is an input device, which works more like a photocopy machine. It is used when some information is available on
paper and it is to be transferred to the hard disk of the computer for further manipulation.
Scanner captures images from the source which are then converted into a digital form that can be stored on the disk. These
images can be edited before they are printed.

What Are Touch Panels?


Quite simply, touch panels, which are also known as touch screens or touch monitors, are tools that allow people to operate computers through direct touch. More specifically, internal sensors detect a user's touch, which is then translated into an instructional command that results in a visible response on screen.

*Optical Touch Panel

The category of optical touch panels includes multiple sensing methods. A growing number of products employ infrared optical imaging touch panels, which use infrared image sensors to sense position through triangulation.
A touch panel in this category features one infrared LED at each of the left and right ends of the top of the panel, along with an image sensor (camera). Retroreflective tape that reflects incident light back along the axis of incidence is affixed along the remaining left, right, and bottom sides. When a finger or other object touches the screen, the image sensor captures the shadows formed where the infrared light is blocked. The coordinates of the location of contact are then derived by triangulation.

*Sonic Touch Panel

Sonic touch panels were developed mainly to address the drawbacks of low light transmittance in resistive film touch
panels—that is, to achieve bright touch panels with high levels of visibility. These are also called surface wave or acoustic
wave touch panels. Aside from standalone LCD monitors, these are widely used in public spaces, in devices like point-of-
sale terminals, ATMs, and electronic kiosks.

These panels detect the screen position where contact occurs with a finger or other object using the attenuation in ultrasound
elastic waves on the surface. The internal structure of these panels is designed so that multiple piezoelectric transducers
arranged in the corners of a glass substrate transmit ultrasound surface elastic waves as vibrations in the panel surface,
which are received by transducers installed opposite the transmitting ones. When the screen is touched, ultrasound waves
are absorbed and attenuated by the finger or other object. The location is identified by detecting these changes. Naturally,
the user does not feel these vibrations when touching the screen. These panels offer high ease of use.
*Electrical Touch Panel

Electrical touch panels are often used in relatively large panels. Inside these panels, a transparent electrode film (electrode
layer) is placed atop a glass substrate, covered by a protective cover. Electric voltage is applied to electrodes positioned in
the four corners of the glass substrate, generating a uniform low-voltage electrical field across the entire panel. The
coordinates of the position at which the finger touches the screen are identified by measuring the resulting changes in
electrostatic capacity at the four corners of the panel.

It is structurally difficult to detect contact at two or more points at the same time (multi-touch).
Electrical touch panels are often used for smaller screen sizes than optical and sonic touch panels. They have attracted significant attention in mobile devices. The iPhone, iPod Touch, and iPad use this method to achieve high-precision multi-touch functionality and high response speed.

*Tablets

A graphics tablet (also known as a digitizer, drawing tablet, drawing pad, digital drawing tablet, pen tablet, or digital art
board) is a computer input device that enables a user to hand-draw images, animations and graphics, with a special pen-
like stylus, similar to the way a person draws images with a pencil and paper. These tablets may also be used to capture data
or handwritten signatures. It can also be used to trace an image from a piece of paper that is taped or otherwise secured to
the tablet surface.

*Electrical tablets

An electrical tablet, commonly shortened to tablet, is a mobile device, typically with a mobile operating system, a touchscreen display, processing circuitry, and a rechargeable battery in a single, thin and flat package. Tablets, being computers, do what other personal computers do, but lack some input/output (I/O) abilities that others have. Modern tablets largely resemble modern smartphones, the only differences being that tablets are relatively larger than smartphones, with screens 7 inches (18 cm) or larger, measured diagonally, and may not support access to a cellular network.
*Sonic tablets

Sonic tablets differ in that the stylus used contains self-powered electronics that generate and transmit a signal to the tablet. The stylus has an internal battery rather than drawing its power from the tablet, resulting in a bulkier stylus. Eliminating the need for the tablet to power the pen means that such tablets may listen for pen signals constantly, as they do not have to alternate between transmit and receive modes, which can result in less jitter.

*Resistive tablets

Other touch screen devices — whether a smartphone, tablet or human machine interface (HMI) — use a specific
configuration that relies on the user’s electrical charge to identify touch. Being that the human body is an excellent
conductor of electricity, touching the surface of a capacitive touchscreen device disturbs its electrical field, which the device
uses to determine the location of the touch. This is in stark contrast to resistive touchscreen devices, which use pressure
to identify touch instead.
.
*Output hardware

Output computer hardware is fundamentally the counterpart of input computer hardware, working in the opposite direction. Output hardware translates the information processed by the computer into a form that humans can understand, such as words, sounds, numbers, and images; generally, it translates the binary code produced by the computer. Today there is a wide range of output computer hardware on the market, such as a variety of monitors, speakers, printers, and others.

*Monitors

A monitor, commonly called a Visual Display Unit (VDU), is the main output device of a computer. It forms images from tiny dots, called pixels, that are arranged in a rectangular form. The sharpness of the image depends upon the number of pixels.
There are two kinds of viewing screen used for monitors.

• Cathode-Ray Tube (CRT)


• Flat-Panel Display

*Printer

A printer is an output device that prints paper documents. This includes text documents, images, or a combination of both.
The two most common types of printers are inkjet and laser printers. Inkjet printers are commonly used by consumers, while
laser printers are a typical choice for businesses. Dot matrix printers, which have become increasingly rare, are still used for
basic text printing.
The printed output produced by a printer is often called a hard copy, which is the physical version of an electronic
document. While some printers can only print black and white hard copies, most printers today can produce color prints. In
fact, many home printers can now produce high-quality photo prints that rival professionally developed photos. This is
because modern printers have a high DPI (dots per inch) setting, which allows documents to be printed at a very fine resolution.

*Plotter

A plotter is a computer hardware device much like a printer that is used for printing vector graphics. Instead of toner,
plotters use a pen, pencil, marker, or another writing tool to draw multiple, continuous lines onto paper rather than a series
of dots like a traditional printer. Though once widely used for computer-aided design, these devices have more or less been
phased out by wide-format printers. Plotters produce a hard copy of schematics and other similar applications.

*Vector Display

Vector images are mathematically ‘perfect’ formats. They are most popular in graphic design and engineering. They are much easier to edit than raster images, and do not suffer from loss of quality. Unlike raster images, they are made up of paths, not pixels: instead of separate blocks of color, vector images are based on mathematical coordinates.
This means that no matter how much you alter the scaling of a vector, it won’t lose quality.

Resolution

Unlike raster images, vectors aren’t dependent on pixels. Instead, they rely on pure math. If you increase the size of a vector
image, then the paths increase in size too. This means that when you scale up a vector image, the lines multiply in size.
For example, if you doubled the size of a 10cm line, the entire line would be scaled up as a whole, creating a 20cm line. In a
comparable raster image, meanwhile, each pixel that makes up this line would be individually scaled up. Being able to
change the scale of an entire object means that its quality stays the same. The image, therefore, is infinitely scalable.
Another advantage to vector images is their small file size. Raster images have to store color information about every single
pixel that makes up the image, even if they are blank. Vector images, meanwhile, simply store information about the
relevant paths.

Types of Vector Formats


There are a wide range of vector formats to choose from. Some are generic, whilst some are more software-specific. Some
of the most common formats include:

• AI: this is a native format of Adobe Illustrator, a common vector editing program. It’s used for creating, editing and printing
vector images.
• DXF: this format was created to allow users to exchange CAD files. As a result, it is supported by most CAD programs, making it widely used by CAD users.
• DWG: the native file format for AutoCAD, the most popular CAD software on the market. A proprietary format, it is not
supported by all CAD programs. It is, however, possible to view DWGs without AutoCAD. It has the advantage of
supporting more complex entity types than DXF.
• PDF: this format will be familiar to almost all web users. Due to its support across browsers, it's widely used for sharing and printing documents. Within the CAD industry, it's also used for sharing images of drafts. PDFs aren't easily editable, however, so to make edits, users would need to convert them to a CAD format, such as DWG or DXF.
• SVG: this format is used mostly for web graphics and for interactive features. Most major web browsers support SVG.
Which format you choose depends on your needs and software. To exchange images, PDFs are often the most suitable choice. However, they don't allow for edits. For CAD users, formats like DXF and DWG are best.

Vector Images

Pros:
• They're high quality and are resolution independent.
• They can contain additional information attached to vector elements.
• Despite the high quality, they can be small in file size.

Cons:
• They are less common than raster and require vector editing software.
• They aren't suitable for typical photo-realistic images.
• Compatibility is an issue if you're working with native files.

*Raster display

Raster images are commonly used for photographs, and most of the images you see on your computer are stored in a raster graphics format. You no doubt use them on a regular basis, for example, when you share a GIF.
Raster images are made up of (up to) millions of pixels. There is no structure to a raster image, meaning that every pixel is essentially a separate, tiny block of color. As a result, when you change the size of a raster image, there is no way of treating it as a whole: zooming into or enlarging a raster image simply makes all the individual pixels within the image bigger. As you may expect, scaling up (or stretching) a raster image means it loses quality.

Pixels (short for picture elements) are the smallest elements of a raster image. Each pixel is an individual square of color.
However, different image files have different pixel density. Pixel density is simply a measure of how many pixels are in a
given area. If a 100 × 100 pixel image fits within a 1-inch square, for example, the pixel density would be 100 pixels per inch (also referred to as ‘DPI’, meaning dots per inch). If your pixel density is too low, then your image will be low in quality.
Pixel density is determined by the device that created the image – either a physical device, like a scanner or camera, or by
graphics software. It’s also possible to use programs like Photoshop to increase the PPI of an image.
Typically, higher quality will result in a wider range of color palettes and a bigger overall file size.

Compression
In order to reduce an image’s file size for storage, people make use of compression. There are two main
types: lossy and lossless.
• Lossy compression is when a raster image is compressed repeatedly to decrease file size, and loses its quality as a result.
This may not bother you if quality isn’t a priority, or if you want to share it via email, where small file size is advised.
• Lossless compression is when a raster image is compressed, but retains its quality. Typically, these raster images are still
large in size, but are a fraction of their uncompressed counterparts. This compression is advised if quality is a priority.

Types of Raster Formats


• JPEG: this familiar format is typically used for sharing photographs online. They’re small in size, and have a variety of
colors. However, they use lossy compression, which means that they lose quality when edited.
• GIF: this format uses lossless compression, but typically has poor resolution. As a result, the quality of a GIF file can often
be rather low. GIFs are typically used for simple web graphics, and support animation.
• BMP: this is one of the simplest formats in raster graphics. A BMP file is typically uncompressed, though lossless compression is possible. This means that BMP files can support high-quality images, but have very large file sizes.
• PNG: this format is often used for logos and illustrations. It allows for lossless compression, which can make PNGs a better
alternative to JPEGs. However, this means that they can be large in size.
• TIFF: this format is very popular in graphics design and publishing. Its lossless compression makes it a great choice
for high-quality images. It is also the recommended file format for vectorization. Like other high-quality graphics formats,
however, TIFFs generally have large file sizes.

Raster Images

Pros:
• They offer a lot of detail, and a varied color palette.
• They are easily edited and shared.
• Their resolution (or pixel density) can be increased.

Cons:
• Their quality decreases when they're enlarged or edited.
• Due to their many colors, they can be complex and difficult to print.
• Higher quality results in a larger file size.

Conversions:
1. Vector to Raster : Printers and display devices are raster devices. As a result, we need to convert vector images to raster format before they can be used, i.e. displayed or printed. The required resolution plays a vital role in determining the size of the raster file generated. It is important to note that the size of the vector image being converted always remains the same. It is convenient to convert a vector file to a range of bitmap/raster file formats, but going down the opposite path is harder (because at times we need to edit the image while converting from raster to vector).
2. Raster to Vector : Image tracing in computing is referred to as vectorization, and it is simply the conversion of raster images to vector images. An interesting application of vectorization is to update images and recover work: vectorization can be used to retrieve information that we have lost. Paint in Microsoft Windows produces a bitmap output file, and it is easy to notice jagged lines in Paint. In this kind of conversion the image size reduces drastically, so an exact conversion is not possible in this scenario. Due to the various approximations and editing done in the process of conversion, the converted images are not of good quality.

Differences between Vector and Raster graphics


The main difference between vector and raster graphics is that raster graphics are composed of pixels, while vector
graphics are composed of paths. A raster graphic, such as a gif or jpeg, is an array of pixels of various colors, which
together form an image.


*Color CRT Monitors:

A color CRT monitor displays images using a combination of phosphors of different colors. There are two popular approaches for producing color displays with a CRT:

1. Beam Penetration Method


2. Shadow-Mask Method

1. Beam Penetration Method:

The Beam-Penetration method has been used with random-scan monitors. In this method, the CRT screen is coated with two layers of phosphor, red and green, and the displayed color depends on how far the electron beam penetrates the phosphor layers. This method produces only four colors: red, green, orange and yellow. A beam of slow electrons excites only the outer red layer, so the screen shows red; a beam of high-speed electrons penetrates to the inner green layer, so the screen shows green. Intermediate beam speeds excite both layers in different proportions, producing orange and yellow.

Advantages:
1. Inexpensive

Disadvantages:
1. Only four colors are possible
2. The quality of pictures is not as good as with the other method.

2. Shadow-Mask Method:

o The Shadow Mask Method is commonly used in raster-scan systems because it produces a much wider range of colors than the beam-penetration method.
o It is used in the majority of color TV sets and monitors.

Construction: A shadow mask CRT has 3 phosphor color dots at each pixel position.

o One phosphor dot emits: red light


o Another emits: green light
o Third emits: blue light

This type of CRT has 3 electron guns, one for each color dot, and a shadow-mask grid just behind the phosphor-coated screen. The shadow-mask grid is pierced with small round holes in a triangular pattern. The figure shows the delta-delta shadow-mask method commonly used in color CRT systems.

Working:

The three electron guns are arranged in a triad corresponding to red, green, and blue. The deflection system of the CRT operates on all 3 electron beams simultaneously; the 3 electron beams are deflected and focused as a group onto the shadow mask, which contains a series of holes aligned with the phosphor-dot patterns. When the three beams pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small color spot on the screen. The phosphor dots in the triangles are arranged so that each electron beam can activate only its corresponding color dot when it passes through the shadow mask.

Inline arrangement: Another configuration for the 3 electron guns is an inline arrangement, in which the 3 electron guns and the corresponding red-green-blue color dots on the screen are aligned along one scan line rather than in a triangular pattern. This inline arrangement of electron guns is easier to keep in alignment and is commonly used in high-resolution color CRTs.

Advantages and disadvantages of color CRT

Advantage:
1. Realistic image
2. Million different colors to be generated
3. Shadow scenes are possible

Disadvantage:
1. Relatively expensive compared with the monochrome CRT.
2. Relatively poor resolution
3. Convergence Problem

UNIT 3
Two- Dimensional Graphics

Short for two-dimensional, 2-D refers to any virtual object with no appearance of depth. For example, if a graphic or image is 2-D, it can only be viewed properly from a straight-on viewpoint. However, a 3-D graphic or image is viewable from any angle.

2-D computer graphics is often used in applications that were first developed around traditional
printing and drawing technologies. Typography, cartography, technical drawing, and advertising
are examples of applications and technologies that originally used 2-D computer graphics. In those
applications, the two-dimensional image is not only a representation of a real-world object, but also
an independent artifact with added semantic value. Two-dimensional models are often preferred
because they give more direct control of the image than 3-D computer graphics. With desktop publishing, engineering, and business, a 2-D computer graphic file can be much smaller than the corresponding digital image, often by a factor of 1/1000 or more. This representation is also more
flexible since it can be rendered in different resolutions for different output devices. For these
reasons, documents and illustrations are often stored or transmitted as 2-D graphic files.
2-D computer graphics started in the 1950s, based on vector graphics devices.

*Line Drawing Algorithms:-


In computer graphics, popular algorithms used to generate lines are-
1. Digital Differential Analyzer (DDA) Line Drawing Algorithm
2. Bresenham Line Drawing Algorithm
3. Mid Point Line Drawing Algorithm

*DDA Algorithm:-
 DDA Algorithm is the simplest line drawing algorithm.
 DDA Algorithm attempts to generate the points between the starting and ending coordinates.

Given the starting and ending coordinates of a line,


 Starting coordinates = (X0 , Y0)
 Ending coordinates = (Xn , Yn)

The points generation using DDA Algorithm involves the following steps-
Step-01:
Calculate ΔX, ΔY and M from the given input.
These parameters are calculated as-
 ΔX = Xn – X0
 ΔY =Yn – Y0
 M = ΔY / ΔX
Step-02:
Find the number of steps or points in between the starting and ending coordinates.
if (absolute (ΔX) > absolute (ΔY))
Steps = absolute (ΔX);
else
Steps = absolute (ΔY);

Step-03:
Suppose the current point is (Xp, Yp) and the next point is (Xp+1, Yp+1).
Find the next point using the following three cases-
Case-01: If M < 1, then Xp+1 = Xp + 1 and Yp+1 = Yp + M.
Case-02: If M = 1, then Xp+1 = Xp + 1 and Yp+1 = Yp + 1.
Case-03: If M > 1, then Xp+1 = Xp + 1/M and Yp+1 = Yp + 1.

Step-04:
Keep repeating Step-03 until the end point is reached or the number of generated new points
(including the starting and ending points) equals to the steps count.
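The four steps above map almost directly onto code. Below is a minimal C sketch of the DDA algorithm; the plot() function is a hypothetical stand-in for the pixel-drawing routine of whatever graphics library is in use (for example, putpixel() in Turbo-C's <graphics.h>).

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Hypothetical pixel routine; replace with putpixel() or similar. */
void plot(int x, int y) { printf("(%d, %d)\n", x, y); }

/* DDA line from (x0, y0) to (xn, yn). */
void dda_line(int x0, int y0, int xn, int yn)
{
    int dx = xn - x0, dy = yn - y0;                    /* Step-01 */
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy); /* Step-02 */
    float xinc = (float)dx / steps;  /* per-step increments (Step-03): */
    float yinc = (float)dy / steps;  /* one of them is always +1 or -1 */
    float x = x0, y = y0;

    for (int i = 0; i <= steps; i++) {                 /* Step-04 */
        plot((int)roundf(x), (int)roundf(y));          /* round off, then plot */
        x += xinc;
        y += yinc;
    }
}

int main(void)
{
    dda_line(5, 6, 8, 12);   /* Problem-01 below */
    return 0;
}
```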

PRACTICE PROBLEMS BASED ON DDA ALGORITHM-


Problem-01:
Calculate the points between the starting point (5, 6) and ending point (8, 12).
Solution-
Given-
Starting coordinates = (X0, Y0) = (5, 6)
 Ending coordinates = (Xn, Yn) = (8, 12)

Step-01:
Calculate ΔX, ΔY and M from the given input.
ΔX = Xn – X0 = 8 – 5 = 3
 ΔY =Yn – Y0 = 12 – 6 = 6

 M = ΔY / ΔX = 6 / 3 = 2

Step-02:
Calculate the number of steps.
As |ΔX| < |ΔY| = 3 < 6, so number of steps = ΔY = 6
Step-03:
As M > 1, so case-03 is satisfied.
Now, Step-03 is executed until Step-04 is satisfied.

Xp      Yp      Xp+1    Yp+1    Round off (Xp+1, Yp+1)

5       6       5.5     7       (6, 7)
                6       8       (6, 8)
                6.5     9       (7, 9)
                7       10      (7, 10)
                7.5     11      (8, 11)
                8       12      (8, 12)

Problem-02:
Calculate the points between the starting point (5, 6) and ending point (13, 10).
Solution-
Given-
Starting coordinates = (X0, Y0) = (5, 6)
 Ending coordinates = (Xn, Yn) = (13, 10)
Step-01:
Calculate ΔX, ΔY and M from the given input.
ΔX = Xn – X0 = 13 – 5 = 8
 ΔY =Yn – Y0 = 10 – 6 = 4

 M = ΔY / ΔX = 4 / 8 = 0.50

Step-02:
Calculate the number of steps.
As |ΔX| > |ΔY| = 8 > 4, so number of steps = ΔX = 8
Step-03:
As M < 1, so case-01 is satisfied.
Now, Step-03 is executed until Step-04 is satisfied.

Xp      Yp      Xp+1    Yp+1    Round off (Xp+1, Yp+1)

5       6       6       6.5     (6, 7)
                7       7       (7, 7)
                8       7.5     (8, 8)
                9       8       (9, 8)
                10      8.5     (10, 9)
                11      9       (11, 9)
                12      9.5     (12, 10)
                13      10      (13, 10)

Advantages of DDA Algorithm-


The advantages of DDA Algorithm are-
 It is a simple algorithm.
 It is easy to implement.
 It avoids using the multiplication operation which is costly in terms of time complexity.

Disadvantages of DDA Algorithm-


The disadvantages of DDA Algorithm are-
 There is an extra overhead of using the round-off() function.
 Using the round-off() function increases the time complexity of the algorithm.
 The resulting lines are not smooth because of the round-off() function.
 The points generated by this algorithm are not accurate.

*Bresenham Line Drawing Algorithm:-


 Bresenham Line Drawing Algorithm attempts to generate the points between the starting and
ending coordinates.
Given the starting and ending coordinates of a line,
 Starting coordinates = (X0, Y0)
 Ending coordinates = (Xn, Yn)
The points generation using Bresenham Line Drawing Algorithm involves the following steps-
Step-01:
Calculate ΔX and ΔY from the given input.
These parameters are calculated as-
 ΔX = Xn – X0
 ΔY =Yn – Y0

Step-02:
Calculate the decision parameter Pk.
It is calculated as-
Pk = 2ΔY – ΔX
Step-03:
Suppose the current point is (Xk, Yk) and the next point is (Xk+1, Yk+1).
Find the next point depending on the value of the decision parameter Pk, using the following two cases-
Case-01: If Pk < 0, then Xk+1 = Xk + 1, Yk+1 = Yk and Pk+1 = Pk + 2ΔY.
Case-02: If Pk >= 0, then Xk+1 = Xk + 1, Yk+1 = Yk + 1 and Pk+1 = Pk + 2ΔY – 2ΔX.

Step-04:
Keep repeating Step-03 until the end point is reached or number of iterations equals to (ΔX-1)
times.
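A minimal C sketch of the algorithm for the gentle-slope case (0 ≤ slope ≤ 1, which is the case both worked problems below use). The plot() function is again a hypothetical pixel routine; other slope ranges would need the roles of X and Y swapped or the step direction negated.

```c
#include <stdio.h>

void plot(int x, int y) { printf("(%d, %d)\n", x, y); }  /* hypothetical pixel routine */

/* Bresenham line for 0 <= slope <= 1, from (x0, y0) to (xn, yn). */
void bresenham_line(int x0, int y0, int xn, int yn)
{
    int dx = xn - x0, dy = yn - y0;   /* Step-01 */
    int p = 2 * dy - dx;              /* Step-02: initial decision parameter */
    int x = x0, y = y0;

    plot(x, y);
    while (x < xn) {                  /* Step-04: dx steps in total */
        x++;
        if (p < 0) {                  /* Case-01: keep the same Y */
            p += 2 * dy;
        } else {                      /* Case-02: step Y as well */
            y++;
            p += 2 * dy - 2 * dx;
        }
        plot(x, y);
    }
}

int main(void)
{
    bresenham_line(9, 18, 14, 22);    /* Problem-01 below */
    return 0;
}
```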
PRACTICE PROBLEMS BASED ON BRESENHAM LINE DRAWING ALGORITHM-
Problem-01:
Calculate the points between the starting coordinates (9, 18) and ending coordinates (14, 22).
Solution-
Given-
 Starting coordinates = (X0, Y0) = (9, 18)
 Ending coordinates = (Xn, Yn) = (14, 22)

Step-01:
Calculate ΔX and ΔY from the given input.
 ΔX = Xn – X0 = 14 – 9 = 5
 ΔY =Yn – Y0 = 22 – 18 = 4

Step-02:
Calculate the decision parameter.
Pk
= 2ΔY – ΔX
=2x4–5
=3
So, decision parameter Pk = 3

Step-03:
As Pk >= 0, so case-02 is satisfied.
Thus,
 Pk+1 = Pk + 2ΔY – 2ΔX = 3 + (2 x 4) – (2 x 5) = 1
 Xk+1 = Xk + 1 = 9 + 1 = 10
 Yk+1 = Yk + 1 = 18 + 1 = 19
Similarly, Step-03 is executed until the end point is reached or number of iterations equals to 4
times.
(Number of iterations = ΔX – 1 = 5 – 1 = 4)

Pk      Pk+1    Xk+1    Yk+1

                9       18
3       1       10      19
1       -1      11      20
-1      7       12      20
7       5       13      21
5       3       14      22

Problem-02:
Calculate the points between the starting coordinates (20, 10) and ending coordinates (30, 18).
Solution-
Given-
 Starting coordinates = (X0, Y0) = (20, 10)
 Ending coordinates = (Xn, Yn) = (30, 18)

Step-01:
Calculate ΔX and ΔY from the given input.
ΔX = Xn – X0 = 30 – 20 = 10
 ΔY =Yn – Y0 = 18 – 10 = 8

Step-02:
Calculate the decision parameter.
Pk
= 2ΔY – ΔX
= 2 x 8 – 10
=6
So, decision parameter Pk = 6
Step-03:
As Pk >= 0, so case-02 is satisfied.
Thus,
 Pk+1 = Pk + 2ΔY – 2ΔX = 6 + (2 x 8) – (2 x 10) = 2
 Xk+1 = Xk + 1 = 20 + 1 = 21
 Yk+1 = Yk + 1 = 10 + 1 = 11

Similarly, Step-03 is executed until the end point is reached or number of iterations equals to 9
times.
(Number of iterations = ΔX – 1 = 10 – 1 = 9)

Pk      Pk+1    Xk+1    Yk+1

                20      10
6       2       21      11
2       -2      22      12
-2      14      23      12
14      10      24      13
10      6       25      14
6       2       26      15
2       -2      27      16
-2      14      28      16
14      10      29      17
10      6       30      18

Advantages of Bresenham Line Drawing Algorithm-


The advantages of Bresenham Line Drawing Algorithm are-
 It is easy to implement.
 It is fast and incremental.
 It executes faster than the DDA Algorithm because it avoids floating-point arithmetic.
 The points generated by this algorithm are more accurate than those of the DDA Algorithm.
 It uses fixed-point (integer) arithmetic only.
Disadvantages of Bresenham Line Drawing Algorithm-
The disadvantages of Bresenham Line Drawing Algorithm are-
 Though it improves the accuracy of the generated points, the resulting line is still not smooth.
 This algorithm is only for basic line drawing.
 It cannot remove the jaggies (staircase effect) that appear on the line.

*Circle and ellipse drawing algorithm:-


In computer graphics, popular algorithms used to generate circle are-
1) Mid point circle drawing algorithm
2) Bresenham's circle drawing algorithm

*Mid Point Circle Drawing Algorithm:-


Mid Point Circle Drawing Algorithm attempts to generate the points of one octant.
The points for the other octants are generated using the eight-way symmetry property.
Given the centre point and radius of circle,

The points generation using Mid Point Circle Drawing Algorithm involves the following steps-
Step-01:
Assign the starting point coordinates (X0, Y0) as-
 X0 = 0
 Y0 = R

Step-02:
Calculate the value of initial decision parameter P0 as-
P0 = 1 – R
Step-03:
Suppose the current point is (Xk, Yk) and the next point is (Xk+1, Yk+1).
Find the next point of the first octant depending on the value of the decision parameter Pk, using the following two cases-
Case-01: If Pk < 0, then Xk+1 = Xk + 1, Yk+1 = Yk and Pk+1 = Pk + 2Xk+1 + 1.
Case-02: If Pk >= 0, then Xk+1 = Xk + 1, Yk+1 = Yk – 1 and Pk+1 = Pk + 2Xk+1 + 1 – 2Yk+1.

Step-04:
If the given centre point (X0, Y0) is not (0, 0), then do the following and plot the point-
 Xplot = Xc + X0
 Yplot = Yc + Y0

Here, (Xc, Yc) denotes the current value of X and Y coordinates.


Step-05:
Keep repeating Step-03 and Step-04 until Xplot >= Yplot.
Step-06:
Step-05 generates all the points for one octant.
To find the points for the other seven octants, follow the eight-way symmetry property of the circle: each first-octant point (X, Y) is mirrored into the other octants, as in the sketch below.
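The steps translate into the following minimal C sketch. The plot() function is a hypothetical pixel routine, and plot_octants() applies the eight-way symmetry so only first-octant points need to be computed; the loop stops once X reaches Y, matching Step-05 and the worked table below.

```c
#include <stdio.h>

void plot(int x, int y) { printf("(%d, %d)\n", x, y); }  /* hypothetical pixel routine */

/* Mirror a first-octant point (x, y) into all eight octants about (xc, yc). */
void plot_octants(int xc, int yc, int x, int y)
{
    plot(xc + x, yc + y); plot(xc - x, yc + y);
    plot(xc + x, yc - y); plot(xc - x, yc - y);
    plot(xc + y, yc + x); plot(xc - y, yc + x);
    plot(xc + y, yc - x); plot(xc - y, yc - x);
}

/* Mid Point circle of radius r centred at (xc, yc). */
void midpoint_circle(int xc, int yc, int r)
{
    int x = 0, y = r;                    /* Step-01 */
    int p = 1 - r;                       /* Step-02 */

    while (x < y) {                      /* Step-05: repeat until X >= Y */
        plot_octants(xc, yc, x, y);
        x++;
        if (p < 0) {
            p += 2 * x + 1;              /* Case-01: Y unchanged */
        } else {
            y--;
            p += 2 * x + 1 - 2 * y;      /* Case-02: Y decreases by 1 */
        }
    }
}

int main(void)
{
    midpoint_circle(0, 0, 10);           /* Problem-01 below */
    return 0;
}
```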

PRACTICE PROBLEMS BASED ON MID POINT CIRCLE DRAWING ALGORITHM-
Problem-01:
Given the centre point coordinates (0, 0) and radius as 10, generate all the points to form a circle.
Solution-
Given-
 Centre Coordinates of Circle (X0, Y0) = (0, 0)
 Radius of Circle = 10

Step-01:
Assign the starting point coordinates (X0, Y0) as-
 X0 = 0
 Y0 = R = 10

Step-02:
Calculate the value of initial decision parameter P0 as-
P0 = 1 – R
P0 = 1 – 10
P0 = -9
Step-03:
As Pinitial < 0, so case-01 is satisfied.
Thus,
 Xk+1 = Xk + 1 = 0 + 1 = 1
 Yk+1 = Yk = 10
 Pk+1 = Pk + 2 x Xk+1 + 1 = -9 + (2 x 1) + 1 = -6

Step-04:
This step is not applicable here as the given centre point coordinates is (0, 0).

Step-05:
Step-03 is executed similarly until Xk+1 >= Yk+1 as follows-

Pk      Pk+1    (Xk+1, Yk+1)

                (0, 10)
-9      -6      (1, 10)
-6      -1      (2, 10)
-1      6       (3, 10)
6       -3      (4, 9)
-3      8       (5, 9)
8       5       (6, 8)
Algorithm calculates all the points of octant-1 and terminates.
Now, the points of octant-2 are obtained using the mirror effect by swapping X and Y coordinates.

Octant-1 Points         Octant-2 Points

(0, 10)                 (8, 6)
(1, 10)                 (9, 5)
(2, 10)                 (9, 4)
(3, 10)                 (10, 3)
(4, 9)                  (10, 2)
(5, 9)                  (10, 1)
(6, 8)                  (10, 0)

These are all points for Quadrant-1.

Now, the points for rest of the part are generated by following the signs of other quadrants.
The other points can also be generated by calculating each octant separately.
Here, all the points have been generated with respect to quadrant-1-
Quadrant-1 (X, Y)    Quadrant-2 (-X, Y)    Quadrant-3 (-X, -Y)    Quadrant-4 (X, -Y)

(0, 10)              (0, 10)               (0, -10)               (0, -10)
(1, 10)              (-1, 10)              (-1, -10)              (1, -10)
(2, 10)              (-2, 10)              (-2, -10)              (2, -10)
(3, 10)              (-3, 10)              (-3, -10)              (3, -10)
(4, 9)               (-4, 9)               (-4, -9)               (4, -9)
(5, 9)               (-5, 9)               (-5, -9)               (5, -9)
(6, 8)               (-6, 8)               (-6, -8)               (6, -8)
(8, 6)               (-8, 6)               (-8, -6)               (8, -6)
(9, 5)               (-9, 5)               (-9, -5)               (9, -5)
(9, 4)               (-9, 4)               (-9, -4)               (9, -4)
(10, 3)              (-10, 3)              (-10, -3)              (10, -3)
(10, 2)              (-10, 2)              (-10, -2)              (10, -2)
(10, 1)              (-10, 1)              (-10, -1)              (10, -1)
(10, 0)              (-10, 0)              (-10, 0)               (10, 0)

These are all points of the Circle.

The advantages of Mid Point Circle Drawing Algorithm are-


 It is a powerful and efficient algorithm.
 The entire algorithm is based on the simple equation of a circle, X² + Y² = R².
 It is easy to implement from the programmer’s perspective.
 This algorithm is used to generate curves on raster displays.

Disadvantages of Mid Point Circle Drawing Algorithm-
The disadvantages of Mid Point Circle Drawing Algorithm are-
 Accuracy of the generating points is an issue in this algorithm.
 The circle generated by this algorithm is not smooth.
 This algorithm is time consuming.

*Bresenham Circle Drawing Algorithm-


Bresenham Circle Drawing Algorithm attempts to generate the points of one octant.
Given-
 Centre point of Circle = (X0, Y0)
 Radius of Circle = R
The points generation using Bresenham Circle Drawing Algorithm involves the following steps-
Step-01:
Assign the starting point coordinates (X0, Y0) as-
 X0 = 0
 Y0 = R
Step-02:
Calculate the value of initial decision parameter P0 as-
P0 = 3 – 2 x R
Step-03:
Suppose the current point is (Xk, Yk) and the next point is (Xk+1, Yk+1).
Find the next point of the first octant depending on the value of the decision parameter Pk, using the following two cases-
Case-01: If Pk < 0, then Xk+1 = Xk + 1, Yk+1 = Yk and Pk+1 = Pk + 4Xk+1 + 6.
Case-02: If Pk >= 0, then Xk+1 = Xk + 1, Yk+1 = Yk – 1 and Pk+1 = Pk + 4(Xk+1 – Yk+1) + 10.

Step-04:
If the given centre point (X0, Y0) is not (0, 0), then do the following and plot the point-
 Xplot = Xc + X0
 Yplot = Yc + Y0
Here, (Xc, Yc) denotes the current value of X and Y coordinates.
Step-05:
Keep repeating Step-03 and Step-04 until Xplot >= Yplot.
Step-06:
Step-05 generates all the points for one octant.
To find the points for the other seven octants, follow the eight-way symmetry property of the circle, as in the sketch below.
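A matching C sketch, reusing the same hypothetical plot()/plot_octants() helpers as in the mid-point sketch earlier; only the initial decision parameter and its update rules differ.

```c
#include <stdio.h>

void plot(int x, int y) { printf("(%d, %d)\n", x, y); }  /* hypothetical pixel routine */

void plot_octants(int xc, int yc, int x, int y)           /* eight-way symmetry */
{
    plot(xc + x, yc + y); plot(xc - x, yc + y);
    plot(xc + x, yc - y); plot(xc - x, yc - y);
    plot(xc + y, yc + x); plot(xc - y, yc + x);
    plot(xc + y, yc - x); plot(xc - y, yc - x);
}

/* Bresenham circle of radius r centred at (xc, yc). */
void bresenham_circle(int xc, int yc, int r)
{
    int x = 0, y = r;                    /* Step-01 */
    int p = 3 - 2 * r;                   /* Step-02 */

    while (x <= y) {                     /* first-octant points */
        plot_octants(xc, yc, x, y);
        x++;
        if (p < 0) {
            p += 4 * x + 6;              /* Case-01: Y unchanged */
        } else {
            y--;
            p += 4 * (x - y) + 10;       /* Case-02: Y decreases by 1 */
        }
    }
}

int main(void)
{
    bresenham_circle(0, 0, 8);           /* Problem-01 below */
    return 0;
}
```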

PRACTICE PROBLEMS BASED ON BRESENHAM CIRCLE DRAWING ALGORITHM-
Problem-01:
Given the centre point coordinates (0, 0) and radius as 8, generate all the points to form a circle.
Solution-
Given-
 Centre Coordinates of Circle (X0, Y0) = (0, 0)
 Radius of Circle = 8
Step-01:
Assign the starting point coordinates (X0, Y0) as-
 X0 = 0
 Y0 = R = 8
Step-02:
Calculate the value of initial decision parameter P0 as-
P0 = 3 – 2 x R
P0 = 3 – 2 x 8
P0 = -13
Step-03:
As Pinitial < 0, so case-01 is satisfied.
Thus,
 Xk+1 = Xk + 1 = 0 + 1 = 1
 Yk+1 = Yk = 8
 Pk+1 = Pk + 4 x Xk+1 + 6 = -13 + (4 x 1) + 6 = -3
Step-04:
This step is not applicable here as the given centre point coordinates is (0, 0).
Step-05:
Step-03 is executed similarly until Xk+1 >= Yk+1 as follows-

Pk      Pk+1    (Xk+1, Yk+1)

                (0, 8)
-13     -3      (1, 8)
-3      11      (2, 8)
11      5       (3, 7)
5       7       (4, 6)
7               (5, 5)

Algorithm Terminates

These are all points for Octant-1.

Algorithm calculates all the points of octant-1 and terminates.


Now, the points of octant-2 are obtained using the mirror effect by swapping X and Y coordinates.

Octant-1 Points         Octant-2 Points

(0, 8)                  (5, 5)
(1, 8)                  (6, 4)
(2, 8)                  (7, 3)
(3, 7)                  (8, 2)
(4, 6)                  (8, 1)
(5, 5)                  (8, 0)

These are all points for Quadrant-1.

Now, the points for rest of the part are generated by following the signs of other quadrants.
The other points can also be generated by calculating each octant separately.
Here, all the points have been generated with respect to quadrant-1-

Quadrant-1 (X, Y)    Quadrant-2 (-X, Y)    Quadrant-3 (-X, -Y)    Quadrant-4 (X, -Y)

(0, 8)               (0, 8)                (0, -8)                (0, -8)
(1, 8)               (-1, 8)               (-1, -8)               (1, -8)
(2, 8)               (-2, 8)               (-2, -8)               (2, -8)
(3, 7)               (-3, 7)               (-3, -7)               (3, -7)
(4, 6)               (-4, 6)               (-4, -6)               (4, -6)
(5, 5)               (-5, 5)               (-5, -5)               (5, -5)
(6, 4)               (-6, 4)               (-6, -4)               (6, -4)
(7, 3)               (-7, 3)               (-7, -3)               (7, -3)
(8, 2)               (-8, 2)               (-8, -2)               (8, -2)
(8, 1)               (-8, 1)               (-8, -1)               (8, -1)
(8, 0)               (-8, 0)               (-8, 0)                (8, 0)

These are all points of the Circle.

Advantages of Bresenham Circle Drawing Algorithm-


The advantages of Bresenham Circle Drawing Algorithm are-
 The entire algorithm is based on the simple equation of a circle, X² + Y² = R².
 It is easy to implement.
Disadvantages of Bresenham Circle Drawing Algorithm-
The disadvantages of Bresenham Circle Drawing Algorithm are-
 Like Mid Point Algorithm, accuracy of the generating points is an issue in this algorithm.
 This algorithm suffers when used to generate complex and high graphical images.
 There is no significant enhancement with respect to performance.
*Operations on Matrices
Addition, subtraction and multiplication are the basic operations on matrices. To add or subtract matrices, they must be of identical order, and for multiplication, the number of columns in the first matrix must equal the number of rows in the second matrix.

 Addition of Matrices
 Subtraction of Matrices
 Scalar Multiplication of Matrices
 Multiplication of Matrices
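As a small illustration, the sketch below implements matrix addition and multiplication in C for 3 x 3 matrices (the order used later for 2D homogeneous transformations); the function names are purely illustrative.

```c
#include <stdio.h>

#define N 3   /* order used for 2D homogeneous transformations */

/* C = A + B; both matrices must be of identical order. */
void mat_add(const double A[N][N], const double B[N][N], double C[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = A[i][j] + B[i][j];
}

/* C = A x B; the number of columns of A equals the number of rows of B. */
void mat_mul(const double A[N][N], const double B[N][N], double C[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}

int main(void)
{
    double I[N][N] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
    double T[N][N] = {{1, 0, 5}, {0, 1, 1}, {0, 0, 1}};  /* a translation matrix */
    double R[N][N];

    mat_mul(I, T, R);                       /* multiplying by I leaves T unchanged */
    printf("R[0][2] = %.1f\n", R[0][2]);    /* prints 5.0 */
    return 0;
}
```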

*2D Transformation
In computer graphics, transformation is a process of modifying and re-positioning the existing graphics.
 2D Transformations take place in a two dimensional plane.
 Transformations are helpful in changing the position, size, orientation, shape etc of the object.

Transformation Techniques-
In computer graphics, various transformation techniques are-
1. Translation
2. Rotation
3. Scaling
4. Reflection
5. Shear

*2D Translation:-
In Computer graphics, 2D Translation is a process of moving an object from one position to
another in a two dimensional plane.
Consider a point object O has to be moved from one position to another in a 2D plane.
Let-
 Initial coordinates of the object O = (Xold, Yold)
 New coordinates of the object O after translation = (Xnew, Ynew)
 Translation vector or Shift vector = (Tx, Ty)

Given a Translation vector (Tx, Ty)-


 Tx defines the distance the Xold coordinate has to be moved.
 Ty defines the distance the Yold coordinate has to be moved.

This translation is achieved by adding the translation coordinates to the old coordinates of the
object as-
 Xnew = Xold + Tx (This denotes translation towards X axis)
 Ynew = Yold + Ty (This denotes translation towards Y axis)

In Matrix form, the above translation equations may be represented as-

 The homogeneous coordinates representation of (X, Y) is (X, Y, 1).


 Through this representation, all the transformations can be performed using matrix / vector
multiplications.

The above translation matrix may be represented as a 3 x 3 matrix as-
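In standard notation, the column-vector form and the 3 x 3 homogeneous form of the translation are:

```latex
\begin{bmatrix} X_{new} \\ Y_{new} \end{bmatrix}
=
\begin{bmatrix} X_{old} \\ Y_{old} \end{bmatrix}
+
\begin{bmatrix} T_x \\ T_y \end{bmatrix},
\qquad
\begin{bmatrix} X_{new} \\ Y_{new} \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & T_x \\ 0 & 1 & T_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_{old} \\ Y_{old} \\ 1 \end{bmatrix}
```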

PRACTICE PROBLEMS BASED ON 2D TRANSLATION IN COMPUTER GRAPHICS-


Problem-01:
Given a circle C with radius 10 and center coordinates (1, 4). Apply the translation with distance 5
towards X axis and 1 towards Y axis. Obtain the new coordinates of C without changing its radius.
Solution-
Given-
 Old center coordinates of C = (Xold, Yold) = (1, 4)
 Translation vector = (Tx, Ty) = (5, 1)
Let the new center coordinates of C = (Xnew, Ynew).
Applying the translation equations, we have-
 Xnew = Xold + Tx = 1 + 5 = 6
 Ynew = Yold + Ty = 4 + 1 = 5
Thus, New center coordinates of C = (6, 5).
Alternatively,
In matrix form, the new center coordinates of C after translation may be obtained as-

Thus, New center coordinates of C = (6, 5).

*2D Rotation:-
In Computer graphics,
2D Rotation is a process of rotating an object with respect to an angle in a two dimensional plane.
Consider a point object O that has to be rotated from one angle to another in a 2D plane.
Let-
 Initial coordinates of the object O = (Xold, Yold)
 Initial angle of the object O with respect to origin = Φ
 Rotation angle = θ
 New coordinates of the object O after rotation = (Xnew, Ynew)

This rotation is achieved by using the following rotation equations-


 Xnew = Xold x cosθ – Yold x sinθ
 Ynew = Xold x sinθ + Yold x cosθ

In Matrix form, the above rotation equations may be represented as-


For homogeneous coordinates, the above rotation matrix may be represented as a 3 x 3 matrix as-

PRACTICE PROBLEMS BASED ON 2D ROTATION IN COMPUTER GRAPHICS-


Problem-01:
Given a line segment with starting point as (0, 0) and ending point as (4, 4). Apply 30 degree
rotation anticlockwise direction on the line segment and find out the new coordinates of the line.
Solution-
We rotate a straight line by rotating its end points through the same angle. Then, we re-draw the line between the
new end points.
Given-
 Old ending coordinates of the line = (Xold, Yold) = (4, 4)
 Rotation angle = θ = 30º
Let new ending coordinates of the line after rotation = (Xnew, Ynew).
Applying the rotation equations, we have-
Xnew
= Xold x cosθ – Yold x sinθ
= 4 x cos30º – 4 x sin30º
= 4 x (√3 / 2) – 4 x (1 / 2)
= 2√3 – 2
= 2(√3 – 1)
= 2(1.73 – 1)
= 1.46
Ynew
= Xold x sinθ + Yold x cosθ
= 4 x sin30º + 4 x cos30º
= 4 x (1 / 2) + 4 x (√3 / 2)
= 2 + 2√3
= 2(1 + √3)
= 2(1 + 1.73)
= 5.46
Thus, New ending coordinates of the line after rotation = (1.46, 5.46).
Alternatively,
In matrix form, the new ending coordinates of the line after rotation may be obtained as-

Thus, New ending coordinates of the line after rotation = (1.46, 5.46).
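As a minimal sketch (assuming NumPy and the standard 2D rotation matrix), the same result can be verified; the small difference from 1.46 and 5.46 comes only from rounding √3 to 1.73 in the hand calculation:

```python
import numpy as np

theta = np.radians(30)                         # anticlockwise rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

end_point = np.array([4, 4])                   # ending point of the line
print(R @ end_point)                           # -> approx. [1.464 5.464]
```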

*2D Scaling:-
In computer graphics, scaling is a process of modifying or altering the size of objects.
 Scaling may be used to increase or reduce the size of object.
 Scaling subjects the coordinate points of the original object to change.
 Scaling factor determines whether the object size is to be increased or reduced.
 If scaling factor > 1, then the object size is increased.
 If scaling factor < 1, then the object size is reduced.

Consider a point object O that has to be scaled in a 2D plane.


Let-
 Initial coordinates of the object O = (Xold, Yold)
 Scaling factor for X-axis = Sx
 Scaling factor for Y-axis = Sy
 New coordinates of the object O after scaling = (Xnew, Ynew)
This scaling is achieved by using the following scaling equations-
 Xnew = Xold x Sx
 Ynew = Yold x Sy
In Matrix form, the above scaling equations may be represented as-

For homogeneous coordinates, the above scaling matrix may be represented as a 3 x 3 matrix as-

PRACTICE PROBLEMS BASED ON 2D SCALING IN COMPUTER GRAPHICS-


Problem-01:
Given a square object with coordinate points A(0, 3), B(3, 3), C(3, 0), D(0, 0). Apply the scaling
parameter 2 towards X axis and 3 towards Y axis and obtain the new coordinates of the object.
Solution-
Given-
 Old corner coordinates of the square = A (0, 3), B(3, 3), C(3, 0), D(0, 0)
 Scaling factor along X axis = 2
 Scaling factor along Y axis = 3
For Coordinates A(0, 3)
Let the new coordinates of corner A after scaling = (Xnew, Ynew).
Applying the scaling equations, we have-
 Xnew = Xold x Sx = 0 x 2 = 0
 Ynew = Yold x Sy = 3 x 3 = 9
Thus, New coordinates of corner A after scaling = (0, 9).
For Coordinates B(3, 3)
Let the new coordinates of corner B after scaling = (Xnew, Ynew).
Applying the scaling equations, we have-
 Xnew = Xold x Sx = 3 x 2 = 6
 Ynew = Yold x Sy = 3 x 3 = 9
Thus, New coordinates of corner B after scaling = (6, 9).
For Coordinates C(3, 0)
Let the new coordinates of corner C after scaling = (Xnew, Ynew).
Applying the scaling equations, we have-
 Xnew = Xold x Sx = 3 x 2 = 6
 Ynew = Yold x Sy = 0 x 3 = 0
Thus, New coordinates of corner C after scaling = (6, 0).
For Coordinates D(0, 0)
Let the new coordinates of corner D after scaling = (Xnew, Ynew).
Applying the scaling equations, we have-
 Xnew = Xold x Sx = 0 x 2 = 0
 Ynew = Yold x Sy = 0 x 3 = 0
Thus, New coordinates of corner D after scaling = (0, 0).
Thus, New coordinates of the square after scaling = A (0, 9), B(6, 9), C(6, 0), D(0, 0).
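As a minimal sketch (assuming NumPy), all four corners can be scaled at once with the 2 x 2 scaling matrix:

```python
import numpy as np

S = np.array([[2, 0],
              [0, 3]])                    # Sx = 2, Sy = 3

corners = np.array([[0, 3],               # A
                    [3, 3],               # B
                    [3, 0],               # C
                    [0, 0]])              # D
print(corners @ S.T)                       # -> A(0, 9), B(6, 9), C(6, 0), D(0, 0)
```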

*2D Reflection:-
 Reflection is a kind of rotation where the angle of rotation is 180 degree.
 The reflected object is always formed on the other side of mirror.
 The size of reflected object is same as the size of original object.
Consider a point object O that has to be reflected in a 2D plane.
Let-
 Initial coordinates of the object O = (Xold, Yold)
 New coordinates of the reflected object O after reflection = (Xnew, Ynew)

Reflection On X-Axis:
This reflection is achieved by using the following reflection equations-
 Xnew = Xold
 Ynew = -Yold
In Matrix form, the above reflection equations may be represented as-

For homogeneous coordinates, the above reflection matrix may be represented as a 3 x 3 matrix as-

Reflection On Y-Axis:
This reflection is achieved by using the following reflection equations-
 Xnew = -Xold
 Ynew = Yold
In Matrix form, the above reflection equations may be represented as-

For homogeneous coordinates, the above reflection matrix may be represented as a 3 x 3 matrix as-

PRACTICE PROBLEMS BASED ON 2D REFLECTION IN COMPUTER GRAPHICS-


Problem-01:
Given a triangle with coordinate points A(3, 4), B(6, 4), C(5, 6). Apply the reflection on the X axis
and obtain the new coordinates of the object.
Solution-
Given-
 Old corner coordinates of the triangle = A (3, 4), B(6, 4), C(5, 6)
 Reflection has to be taken on the X axis
For Coordinates A(3, 4)
Let the new coordinates of corner A after reflection = (Xnew, Ynew).
Applying the reflection equations, we have-
 Xnew = Xold = 3
 Ynew = -Yold = -4
Thus, New coordinates of corner A after reflection = (3, -4).
For Coordinates B(6, 4)
Let the new coordinates of corner B after reflection = (Xnew, Ynew).
Applying the reflection equations, we have-
 Xnew = Xold = 6
 Ynew = -Yold = -4
Thus, New coordinates of corner B after reflection = (6, -4).
For Coordinates C(5, 6)
Let the new coordinates of corner C after reflection = (Xnew, Ynew).
Applying the reflection equations, we have-
 Xnew = Xold = 5
 Ynew = -Yold = -6
Thus, New coordinates of corner C after reflection = (5, -6).
Thus, New coordinates of the triangle after reflection = A (3, -4), B(6, -4), C(5, -6).
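As a minimal sketch (assuming NumPy), reflection on the X axis is simply a sign flip of the Y coordinates, which reproduces the result above:

```python
import numpy as np

Rx = np.array([[1,  0],
               [0, -1]])                  # reflection matrix about the X axis

triangle = np.array([[3, 4],              # A
                     [6, 4],              # B
                     [5, 6]])             # C
print(triangle @ Rx.T)                     # -> A(3, -4), B(6, -4), C(5, -6)
```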
UNIT 4
Three-Dimensional Graphics

The three-dimensional transformations are extensions of the two-dimensional transformations. In 2D, two
coordinates are used, i.e., x and y, whereas in 3D, three coordinates x, y, and z are used.

For three-dimensional images and objects, three-dimensional transformations are needed. These are
translation, scaling, and rotation. They are also called basic transformations and are represented
using matrices. More complex transformations in 3D are also handled using matrices.

2D can show two-dimensional objects such as bar charts, pie charts, and graphs, but more
natural objects can be represented using 3D. Using 3D, we can see different shapes of an object in
different sections.

In 3D, a translation needs three factors. A rotation is likewise composed of three component
rotations, each of which can be performed about one of the three Cartesian axes. In 3D we can also
represent a sequence of transformations as a single matrix.

Computer graphics uses CAD. CAD allows manipulation of machine components which are three
dimensional. It also supports the study of automobile bodies and aircraft parts. All these activities require
realism, and for realism 3D is required. Producing a realistic 3D scene on a 2D display is difficult; it
requires the third dimension, i.e., depth. A three-dimensional system has three axes: x, y, z. The orientation of
a 3D coordinate system is of two types: the right-handed system and the left-handed system.

In the right-handed system the thumb of the right hand points in the positive z-direction, and in the
left-handed system the thumb points in the negative z-direction. The following figure shows the
right-hand orientation of the cube.

Using the right-handed system, the coordinates of corners A, B, C, D of the cube are:


Point A x, y, z
Point B x, y, 0
Point C 0, y, 0
Point D 0, y, z

Producing realism in 3D: Three-dimensional objects are made using computer graphics. The
technique used for the two-dimensional display of three-dimensional objects is called projection.

Projection:-

It is the process of converting a 3D object into a 2D object. It is also defined as the mapping or
transformation of the object onto the projection plane, or view plane. The view plane is the display surface.
*Perspective Projection:-

In perspective projection, the farther an object is from the viewer, the smaller it appears. This property of
projection gives an idea of depth. Artists use perspective projection for drawing three-
dimensional scenes.

Two main characteristics of perspective are vanishing points and perspective foreshortening. Due to
foreshortening, objects and lengths appear smaller the farther they are from the center of projection. The
more we increase the distance from the center of projection, the smaller the object appears.

Vanishing Point

It is the point where all lines will appear to meet. There can be one point, two point, and three point
perspectives.

One Point: There is only one vanishing point, as shown in fig (a).

Two Points: There are two vanishing points. One is in the x-direction and the other in the y-direction, as
shown in fig (b).

Three Points: There are three vanishing points. One is in the x, the second in the y, and the third in the z direction.
In perspective projection, the lines of projection do not remain parallel. The lines converge at a single
point called the center of projection. The projected image on the screen is obtained from the points of
intersection of the converging lines with the plane of the screen. The image on the screen is seen as if the
viewer's eye were located at the center of projection; the lines of projection then correspond to the paths
travelled by light beams originating from the object.

Important terms related to perspective


1. View plane: It is the area of the world coordinate system which is projected onto the viewing plane.
2. Center of Projection: It is the location of the eye, on which the projected light rays converge.
3. Projectors: Also called projection vectors. These are rays that start from the object in the scene and
are used to create an image of the object on the viewing plane or view plane.

Anomalies in Perspective Projection

It introduces several anomalies due to which the shape and appearance of the object get affected.

1. Perspective foreshortening: The size of the object appears smaller as its distance from the center
of projection increases.
2. Vanishing Point: All lines appear to meet at some point in the view plane.
3. Distortion of Lines: Lines that extend from in front of the viewer to behind the viewer appear
distorted in the projection.

Foreshortening of the z-axis in fig (a) produces one vanishing point, P1. Foreshortening the x and z
axes results in two vanishing points in fig (b). Adding a y-axis foreshortening in fig (c) adds a third
vanishing point along the negative y-axis.

*Parallel Projection:-

Parallel projection is used to display a picture in its true shape and size. When the projectors are
perpendicular to the view plane, it is called orthographic projection. The parallel projection is formed
by extending parallel lines from each vertex of the object until they intersect the plane of the screen.
The point of intersection is the projection of the vertex.

Parallel projections are used by architects and engineers for creating working drawings of an object;
complete representations require two or more views of the object using different planes.

1. Isometric Projection: The direction of projection makes equal angles with all three principal axes;
the commonly used angle is 30°.
2. Dimetric: The direction of projection makes equal angles with two of the principal axes.
3. Trimetric: The direction of projection makes unequal angles with the three principal axes.
4. Cavalier: All lines perpendicular to the projection plane are projected with no change in
length.
5. Cabinet: All lines perpendicular to the projection plane are projected to one half of their
length. This gives a more realistic appearance of the object.

*3D Transformation:-
In Computer graphics, Transformation is a process of modifying and re-positioning the existing
graphics.
 3D Transformations take place in three dimensional space.
 3D Transformations are important and a bit more complex than 2D Transformations.
 Transformations are helpful in changing the position, size, orientation, shape etc of the object.

Transformation Techniques-
In computer graphics, various transformation techniques are-

1. Translation
2. Rotation
3. Scaling
4. Reflection
5. Shear

*3D Translation:-
In Computer graphics, 3D Translation is a process of moving an object from one position to another
in three dimensional space.
Consider a point object O that has to be moved from one position to another in 3D space.
Let-
 Initial coordinates of the object O = (Xold, Yold, Zold)
 New coordinates of the object O after translation = (Xnew, Ynew, Znew)
 Translation vector or Shift vector = (Tx, Ty, Tz)

Given a Translation vector (Tx, Ty, Tz)-


 Tx defines the distance the Xold coordinate has to be moved.
 Ty defines the distance the Yold coordinate has to be moved.
 Tz defines the distance the Zold coordinate has to be moved.

This translation is achieved by adding the translation coordinates to the old coordinates of the object
as-
 Xnew = Xold + Tx (This denotes translation towards X axis)
 Ynew = Yold + Ty (This denotes translation towards Y axis)
 Znew = Zold + Tz (This denotes translation towards Z axis)

In Matrix form, the above translation equations may be represented as-

PRACTICE PROBLEM BASED ON 3D TRANSLATION IN COMPUTER GRAPHICS-


Problem-
Given a 3D object with coordinate points A(0, 3, 1), B(3, 3, 2), C(3, 0, 0), D(0, 0, 0). Apply the
translation with the distance 1 towards X axis, 1 towards Y axis and 2 towards Z axis and obtain the
new coordinates of the object.
Solution-
Given-
 Old coordinates of the object = A (0, 3, 1), B(3, 3, 2), C(3, 0, 0), D(0, 0, 0)
 Translation vector = (Tx, Ty, Tz) = (1, 1, 2)
For Coordinates A(0, 3, 1)

Let the new coordinates of A = (Xnew, Ynew, Znew).

Applying the translation equations, we have-


 Xnew = Xold + Tx = 0 + 1 = 1
 Ynew = Yold + Ty = 3 + 1 = 4
 Znew = Zold + Tz = 1 + 2 = 3

Thus, New coordinates of A = (1, 4, 3).

For Coordinates B(3, 3, 2)

Let the new coordinates of B = (Xnew, Ynew, Znew).

Applying the translation equations, we have-


 Xnew = Xold + Tx = 3 + 1 = 4
 Ynew = Yold + Ty = 3 + 1 = 4
 Znew = Zold + Tz = 2 + 2 = 4

Thus, New coordinates of B = (4, 4, 4).

For Coordinates C(3, 0, 0)

Let the new coordinates of C = (Xnew, Ynew, Znew).

Applying the translation equations, we have-


 Xnew = Xold + Tx = 3 + 1 = 4
 Ynew = Yold + Ty = 0 + 1 = 1
 Znew = Zold + Tz = 0 + 2 = 2

Thus, New coordinates of C = (4, 1, 2).

For Coordinates D(0, 0, 0)

Let the new coordinates of D = (Xnew, Ynew, Znew).

Applying the translation equations, we have-


 Xnew = Xold + Tx = 0 + 1 = 1
 Ynew = Yold + Ty = 0 + 1 = 1
 Znew = Zold + Tz = 0 + 2 = 2

Thus, New coordinates of D = (1, 1, 2).


Thus, New coordinates of the object = A (1, 4, 3), B(4, 4, 4), C(4, 1, 2), D(1, 1, 2).
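As a minimal sketch (assuming NumPy), the same result can be checked by adding the translation vector to every coordinate point at once:

```python
import numpy as np

points = np.array([[0, 3, 1],             # A
                   [3, 3, 2],             # B
                   [3, 0, 0],             # C
                   [0, 0, 0]])            # D
T = np.array([1, 1, 2])                    # translation vector (Tx, Ty, Tz)

print(points + T)                          # -> A(1, 4, 3), B(4, 4, 4), C(4, 1, 2), D(1, 1, 2)
```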
*3D Rotation:-
In Computer graphics,
3D Rotation is a process of rotating an object with respect to an angle in three dimensional space.
Consider a point object O that has to be rotated from one angle to another in 3D space.
Let-
 Initial coordinates of the object O = (Xold, Yold, Zold)
 Initial angle of the object O with respect to origin = Φ
 Rotation angle = θ
 New coordinates of the object O after rotation = (Xnew, Ynew, Znew)

In 3 dimensions, there are 3 possible types of rotation-


 X-axis Rotation
 Y-axis Rotation
 Z-axis Rotation

For X-Axis Rotation-

This rotation is achieved by using the following rotation equations-


 Xnew = Xold
 Ynew = Yold x cosθ – Zold x sinθ
 Znew = Yold x sinθ + Zold x cosθ

In Matrix form, the above rotation equations may be represented as-

For Y-Axis Rotation-

This rotation is achieved by using the following rotation equations-


 Xnew = Zold x sinθ + Xold x cosθ
 Ynew = Yold
 Znew = Zold x cosθ – Xold x sinθ

In Matrix form, the above rotation equations may be represented as-

For Z-Axis Rotation-

This rotation is achieved by using the following rotation equations-


 Xnew = Xold x cosθ – Yold x sinθ
 Ynew = Xold x sinθ + Yold x cosθ
 Znew = Zold

In Matrix form, the above rotation equations may be represented as-

PRACTICE PROBLEMS BASED ON 3D ROTATION IN COMPUTER GRAPHICS-


Problem-01:
Given a homogeneous point (1, 2, 3). Apply rotation 90 degree towards X, Y and Z axis and find out
the new coordinate points.

Solution-
Given-
 Old coordinates = (Xold, Yold, Zold) = (1, 2, 3)
 Rotation angle = θ = 90º

For X-Axis Rotation-

Let the new coordinates after rotation = (Xnew, Ynew, Znew).

Applying the rotation equations, we have-


 Xnew = Xold = 1
 Ynew = Yold x cosθ – Zold x sinθ = 2 x cos90° – 3 x sin90° = 2 x 0 – 3 x 1 = -3
 Znew = Yold x sinθ + Zold x cosθ = 2 x sin90° + 3 x cos90° = 2 x 1 + 3 x 0 = 2

Thus, New coordinates after rotation = (1, -3, 2).

For Y-Axis Rotation-

Let the new coordinates after rotation = (Xnew, Ynew, Znew).

Applying the rotation equations, we have-


 Xnew = Zold x sinθ + Xold x cosθ = 3 x sin90° + 1 x cos90° = 3 x 1 + 1 x 0 = 3
 Ynew = Yold = 2
 Znew = Zold x cosθ – Xold x sinθ = 3 x cos90° – 1 x sin90° = 3 x 0 – 1 x 1 = -1

Thus, New coordinates after rotation = (3, 2, -1).

For Z-Axis Rotation-

Let the new coordinates after rotation = (Xnew, Ynew, Znew).

Applying the rotation equations, we have-


 Xnew = Xold x cosθ – Yold x sinθ = 1 x cos90° – 2 x sin90° = 1 x 0 – 2 x 1 = -2
 Ynew = Xold x sinθ + Yold x cosθ = 1 x sin90° + 2 x cos90° = 1 x 1 + 2 x 0 = 1
 Znew = Zold = 3

Thus, New coordinates after rotation = (-2, 1, 3).
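As a minimal sketch (assuming NumPy and the standard axis-rotation matrices that correspond to the equations above), the three results can be verified as follows:

```python
import numpy as np

theta = np.radians(90)
c, s = np.cos(theta), np.sin(theta)

Rx = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # rotation about the X axis
Ry = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # rotation about the Y axis
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about the Z axis

p = np.array([1, 2, 3])
print(np.rint(Rx @ p))    # -> [ 1. -3.  2.]
print(np.rint(Ry @ p))    # -> [ 3.  2. -1.]
print(np.rint(Rz @ p))    # -> [-2.  1.  3.]
```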

*3D Scaling:-
In computer graphics, scaling is a process of modifying or altering the size of objects.
 Scaling may be used to increase or reduce the size of object.
 Scaling subjects the coordinate points of the original object to change.
 Scaling factor determines whether the object size is to be increased or reduced.
 If scaling factor > 1, then the object size is increased.
 If scaling factor < 1, then the object size is reduced.

Consider a point object O that has to be scaled in 3D space.


Let-
 Initial coordinates of the object O = (Xold, Yold, Zold)
 Scaling factor for X-axis = Sx
 Scaling factor for Y-axis = Sy
 Scaling factor for Z-axis = Sz
 New coordinates of the object O after scaling = (Xnew, Ynew, Znew)

This scaling is achieved by using the following scaling equations-


 Xnew = Xold x Sx
 Ynew = Yold x Sy
 Znew = Zold x Sz

In Matrix form, the above scaling equations may be represented as-

PRACTICE PROBLEMS BASED ON 3D SCALING IN COMPUTER GRAPHICS-


Problem-01:
Given a 3D object with coordinate points A(0, 3, 3), B(3, 3, 6), C(3, 0, 1), D(0, 0, 0). Apply the
scaling parameter 2 towards X axis, 3 towards Y axis and 3 towards Z axis and obtain the new
coordinates of the object.

Solution-
Given-
 Old coordinates of the object = A (0, 3, 3), B(3, 3, 6), C(3, 0, 1), D(0, 0, 0)
 Scaling factor along X axis = 2
 Scaling factor along Y axis = 3
 Scaling factor along Z axis = 3

For Coordinates A(0, 3, 3)

Let the new coordinates of A after scaling = (Xnew, Ynew, Znew).

Applying the scaling equations, we have-


 Xnew = Xold x Sx = 0 x 2 = 0
 Ynew = Yold x Sy = 3 x 3 = 9
 Znew = Zold x Sz = 3 x 3 = 9

Thus, New coordinates of corner A after scaling = (0, 9, 9).

For Coordinates B(3, 3, 6)

Let the new coordinates of B after scaling = (Xnew, Ynew, Znew).

Applying the scaling equations, we have-


 Xnew = Xold x Sx = 3 x 2 = 6
 Ynew = Yold x Sy = 3 x 3 = 9
 Znew = Zold x Sz = 6 x 3 = 18

Thus, New coordinates of corner B after scaling = (6, 9, 18).

For Coordinates C(3, 0, 1)

Let the new coordinates of C after scaling = (Xnew, Ynew, Znew).

Applying the scaling equations, we have-


 Xnew = Xold x Sx = 3 x 2 = 6
 Ynew = Yold x Sy = 0 x 3 = 0
 Znew = Zold x Sz = 1 x 3 = 3

Thus, New coordinates of corner C after scaling = (6, 0, 3).

For Coordinates D(0, 0, 0)

Let the new coordinates of D after scaling = (Xnew, Ynew, Znew).

Applying the scaling equations, we have-


 Xnew = Xold x Sx = 0 x 2 = 0
 Ynew = Yold x Sy = 0 x 3 = 0
 Znew = Zold x Sz = 0 x 3 = 0

Thus, New coordinates of corner D after scaling = (0, 0, 0).
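As a minimal sketch (assuming NumPy), the scaling of all four points reduces to one element-wise multiplication:

```python
import numpy as np

points = np.array([[0, 3, 3],             # A
                   [3, 3, 6],             # B
                   [3, 0, 1],             # C
                   [0, 0, 0]])            # D
S = np.array([2, 3, 3])                    # (Sx, Sy, Sz)

print(points * S)                          # -> A(0, 9, 9), B(6, 9, 18), C(6, 0, 3), D(0, 0, 0)
```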

*3D Reflection:-
 Reflection is a kind of rotation where the angle of rotation is 180 degree.
 The reflected object is always formed on the other side of mirror.
 The size of reflected object is same as the size of original object.
Consider a point object O that has to be reflected in 3D space.
Let-
 Initial coordinates of the object O = (Xold, Yold, Zold)
 New coordinates of the reflected object O after reflection = (Xnew, Ynew, Znew)

In 3 dimensions, there are 3 possible types of reflection-

 Reflection relative to XY plane


 Reflection relative to YZ plane
 Reflection relative to XZ plane

Reflection Relative to XY Plane:


This reflection is achieved by using the following reflection equations-
 Xnew = Xold
 Ynew = Yold
 Znew = -Zold

In Matrix form, the above reflection equations may be represented as-

Reflection Relative to YZ Plane:


This reflection is achieved by using the following reflection equations-
 Xnew = -Xold
 Ynew = Yold
 Znew = Zold
In Matrix form, the above reflection equations may be represented as-

Reflection Relative to XZ Plane:


This reflection is achieved by using the following reflection equations-
 Xnew = Xold
 Ynew = -Yold
 Znew = Zold

In Matrix form, the above reflection equations may be represented as-

PRACTICE PROBLEMS BASED ON 3D REFLECTION IN COMPUTER GRAPHICS-


Problem-01:
Given a 3D triangle with coordinate points A(3, 4, 1), B(6, 4, 2), C(5, 6, 3). Apply the reflection on
the XY plane and find out the new coordinates of the object.

Solution-
Given-
 Old corner coordinates of the triangle = A (3, 4, 1), B(6, 4, 2), C(5, 6, 3)
 Reflection has to be taken on the XY plane

For Coordinates A(3, 4, 1)

Let the new coordinates of corner A after reflection = (Xnew, Ynew, Znew).

Applying the reflection equations, we have-


 Xnew = Xold = 3
 Ynew = Yold = 4
 Znew = -Zold = -1

Thus, New coordinates of corner A after reflection = (3, 4, -1).

For Coordinates B(6, 4, 2)

Let the new coordinates of corner B after reflection = (Xnew, Ynew, Znew).

Applying the reflection equations, we have-


 Xnew = Xold = 6
 Ynew = Yold = 4
 Znew = -Zold = -2

Thus, New coordinates of corner B after reflection = (6, 4, -2).

For Coordinates C(5, 6, 3)

Let the new coordinates of corner C after reflection = (Xnew, Ynew, Znew).

Applying the reflection equations, we have-


 Xnew = Xold = 5
 Ynew = Yold = 6
 Znew = -Zold = -3

Thus, New coordinates of corner C after reflection = (5, 6, -3).


Thus, New coordinates of the triangle after reflection = A (3, 4, -1), B(6, 4, -2), C(5, 6, -3).
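As a minimal sketch (assuming NumPy), reflection relative to the XY plane is a sign flip of the Z coordinates, which reproduces the result above:

```python
import numpy as np

triangle = np.array([[3, 4, 1],           # A
                     [6, 4, 2],           # B
                     [5, 6, 3]])          # C
Rxy = np.diag([1, 1, -1])                  # reflection matrix relative to the XY plane

print(triangle @ Rxy)                      # -> A(3, 4, -1), B(6, 4, -2), C(5, 6, -3)
```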

*3D object representation:-

*Polygon Surface

*Polygon Tables:-

Hidden Surface Removal

1. One of the most challenging problems in computer graphics is the removal of hidden parts
from images of solid objects.
2. In real life, the opaque material of these objects obstructs the light rays from hidden parts and
prevents us from seeing them.
3. In computer-generated images, no such automatic elimination takes place when objects are
projected onto the screen coordinate system.
4. Instead, all parts of every object, including many parts that should be invisible are displayed.
5. To remove these parts and create a more realistic image, we must apply a hidden line or hidden
surface algorithm to the set of objects.
6. The algorithms operate on different kinds of scene models, generate various forms of output,
or cater to images of different complexities.
7. All use some form of geometric sorting to distinguish visible parts of objects from those that
are hidden.
8. Just as alphabetical sorting is used to differentiate words near the beginning of the alphabet
from those near the end,
9. geometric sorting locates objects that lie near the observer and are therefore visible.
10. Hidden line and hidden surface algorithms capitalize on various forms of coherence to reduce
the computing required to generate an image.
11. Different types of coherence are related to different forms of order or regularity in the image.
12. Scan line coherence arises because the display of a scan line in a raster image is usually very
similar to the display of the preceding scan line.
13. Frame coherence in a sequence of images designed to show motion recognizes that successive
frames are very similar.
14. Object coherence results from relationships between different objects or between separate
parts of the same objects.
15. A hidden surface algorithm is generally designed to exploit one or more of these coherence
properties to increase efficiency.
16. Hidden surface algorithms bear a strong resemblance to two-dimensional scan conversion.

Types of hidden surface detection algorithms


1. Object space methods
2. Image space methods

Object space methods: In this method, various parts of objects are compared. After comparison, the
visible, invisible or hardly visible surfaces are determined. These methods generally decide the visible
surface. In the wireframe model, they are used to determine a visible line, so these algorithms are
line based instead of surface based. The method proceeds by determining the parts of an object whose
view is obstructed by other objects and draws these parts in the same color.

Image space methods: Here the positions of various pixels are determined. It is used to locate the
visible surface instead of a visible line. Each point is tested for its visibility. If a point is visible,
then the pixel is on, otherwise off. So the object closest to the viewer that is pierced by a projector
through a pixel is determined, and that pixel is drawn in the appropriate color.

These methods are also called Visible Surface Determination methods. The implementation of these
methods on a computer requires a lot of processing time and processing power of the computer.

The image space method requires more computations. Each object is defined clearly. Visibility of
each object surface is also determined.

Differentiate between Object space and Image space methods

1. Object Space: It is an object-based method. It concentrates on the geometrical relations among the
   objects in the scene.
   Image Space: It is a pixel-based method. It is concerned with the final image, i.e., what is visible
   within each raster pixel.

2. Object Space: Here surface visibility is determined.
   Image Space: Here line visibility or point visibility is determined.

3. Object Space: It is performed at the precision with which each object is defined; no resolution is
   considered.
   Image Space: It is performed using the resolution of the display device.

4. Object Space: Calculations are not based on the resolution of the display, so a change of object can
   be easily adjusted.
   Image Space: Calculations are based on the resolution of the display, so changes are difficult to adjust.

5. Object Space: These were developed for vector graphics systems.
   Image Space: These are developed for raster devices.

6. Object Space: Object-based algorithms operate on continuous object data.
   Image Space: These operate on discrete image (pixel) data.

7. Object Space: Vector displays used for the object method have a large address space.
   Image Space: Raster systems used for image space methods have a limited address space.

8. Object Space: Object precision is used for applications where speed is required.
   Image Space: These are suitable for applications where accuracy is required.

9. Object Space: It requires a lot of calculations if the image is to be enlarged.
   Image Space: The image can be enlarged without losing accuracy.

10. Object Space: If the number of objects in the scene increases, the computation time also increases.
    Image Space: In this method, complexity increases with the complexity of the visible parts.

Similarity of object and Image space method

In both methods, sorting is used for the depth comparison of individual lines, surfaces, and objects
according to their distances from the view plane.
Considerations for selecting or designing hidden surface algorithms: Following three considerations
are taken:

1. Sorting
2. Coherence
3. Machine

Sorting: All surfaces are sorted into two classes, i.e., visible and invisible. Pixels are colored
accordingly. Several sorting algorithms are available, e.g.:

1. Bubble sort
2. Shell sort
3. Quick sort
4. Tree sort
5. Radix sort

Different sorting algorithms are applied to different hidden surface algorithms. Sorting of objects is
done using their x, y, and z coordinates; mostly the z coordinate is used for sorting. The efficiency of the
sorting algorithm affects the hidden surface removal algorithm. For sorting complex scenes with hundreds of
polygons, more complex sorts are used, i.e., quick sort, tree sort, radix sort.

For simple scenes, selection, insertion, or bubble sort is used.

Coherence:-

It is used to take advantage of the constant value of surfaces in the scene. It is based on how much
regularity exists in the scene. When we move from one polygon of an object to another polygon of the
same object, the color and shading remain unchanged.

Types of Coherence
1. Edge coherence
2. Object coherence
3. Face coherence
4. Area coherence
5. Depth coherence
6. Scan line coherence
7. Frame coherence
8. Implied edge coherence

1. Edge coherence: The visibility of an edge changes only when it crosses another edge or penetrates a
visible edge.

2. Object coherence: Each object is considered separate from the others. In object coherence,
comparison is done using objects instead of edges or vertices. If object A is farther from object B,
then there is no need to compare their edges and faces.

3. Face coherence: Faces or polygons are generally small compared with the size of the image, so
surface properties computed for one part of a face can be applied to the rest of the face.

4. Area coherence: It is used to group pixels that are covered by the same visible face.

5. Depth coherence: The locations of various polygons are separated on the basis of depth. Once the depth of
a surface at one point is calculated, the depth of points on the rest of the surface can often be determined
by a simple difference equation.

6. Scan line coherence: The object is scanned using one scan line and then using the next scan line;
the intercepts of successive scan lines are usually very close to each other.

7. Frame coherence: It is used for animated objects. It is used when there is little change in the image
from one frame to another.

8. Implied edge coherence: If one face penetrates another, the line of intersection can be determined
from two points of intersection.

Algorithms used for hidden line and hidden surface detection


1. Back Face Removal Algorithm
2. Z-Buffer Algorithm
3. Painter Algorithm
4. Scan Line Algorithm
5. Subdivision Algorithm
6. Floating horizon Algorithm

Back Face Removal Algorithm:-

It is used to plot only the surfaces which face the camera. The objects on the back side are not
visible. This method removes about 50% of the polygons from the scene if parallel projection is used.
If perspective projection is used, then more than 50% of the invisible area is removed. The nearer the
object is to the center of projection, the more back-facing polygons are removed.

It applies to individual objects. It does not consider the interaction between various objects. Many
back-facing polygons are obscured by front faces that are closer to the viewer, so the back face removal
algorithm is used to remove such faces.

When the projection is taken, any projector ray from the center of projection through the viewing screen
pierces the object at two points: one on the visible front surface and another on the invisible back
surface.

Advantage
1. It is a simple and straightforward method.
2. It reduces the size of the database, because there is no need to store all surfaces; only the
visible surfaces are stored.
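As a minimal sketch of the back-face test itself (a hedged illustration, not the text's own listing; it assumes each polygon has an outward surface normal N and that V is a vector pointing from the surface toward the viewer):

```python
import numpy as np

def is_back_face(normal, view_direction):
    """A polygon is a back face when its outward normal points away from the
    viewer, i.e. when N . V <= 0 under this sign convention."""
    return np.dot(normal, view_direction) <= 0

view = np.array([0.0, 0.0, 1.0])                       # V: from the surface toward the viewer
print(is_back_face(np.array([0.0, 0.0, -1.0]), view))  # True  -> faces away, can be culled
print(is_back_face(np.array([0.0, 0.0, 1.0]), view))   # False -> faces the viewer, keep it
```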

Z-Buffer Algorithm:-

It is also called the Depth Buffer Algorithm. The depth buffer algorithm is the simplest image space
algorithm. For each pixel on the display screen, we keep a record of the depth of the object within the
pixel that lies closest to the observer. In addition to depth, we also record the intensity that should be
displayed to show the object. The depth buffer is an extension of the frame buffer. The depth buffer
algorithm requires two arrays, intensity and depth, each of which is indexed by the pixel coordinates (x, y).
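A minimal sketch of the depth buffer bookkeeping follows (a hedged illustration; the helpers covered_pixels, depth_at and intensity_at are hypothetical names used only to show where a real rasterizer would plug in):

```python
import math

def z_buffer(polygons, width, height, background=0):
    # the two arrays required by the algorithm, indexed by pixel coordinates (x, y)
    depth = [[math.inf] * width for _ in range(height)]        # closest depth seen so far
    intensity = [[background] * width for _ in range(height)]  # intensity to display

    for poly in polygons:
        for (x, y) in poly.covered_pixels(width, height):      # hypothetical helper
            z = poly.depth_at(x, y)                            # hypothetical helper
            if z < depth[y][x]:                                # nearer than what is stored
                depth[y][x] = z
                intensity[y][x] = poly.intensity_at(x, y)      # hypothetical helper
    return intensity
```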

Painter Algorithm:-

It comes under the category of list priority algorithms. It is also called the depth-sort algorithm. In this
algorithm, an ordering of the objects by visibility is done. If the objects are rendered in that particular
order, the correct picture results.

Objects are sorted by their z coordinate (depth) and rendered from the farthest to the nearest. Nearer
objects then obscure farther ones, because the pixels of the nearer object overwrite the pixels of the
farther objects. If the z values of two objects do not overlap, we can determine the correct order from the
z value alone, as shown in fig (a).

If objects overlap in z as in fig (b), the correct order can be maintained by splitting the
objects.

Scan Line Algorithm:-

It is an image space algorithm. It processes one scan line at a time rather than one pixel at a time. It uses
the concept of area coherence. This algorithm maintains an edge list and an active edge list, so accurate
bookkeeping is necessary. The edge list or edge table contains the coordinates of the two endpoints of each edge.
The Active Edge List (AEL) contains the edges a given scan line intersects during its sweep. The active edge
list (AEL) should be sorted in increasing order of x. The AEL is dynamic, growing and shrinking.
The following figures show the edges and the active edge list. The active edge list for scan line AC1 contains
edges e1, e2, e5, e6. The active edge list for scan line AC2 contains e5, e6, e1.

A scan line can deal with multiple surfaces. As each scan line is processed, it will intersect
many surfaces. Depth calculations are done for each intersected surface to determine which surface is
visible; the surface nearest to the view plane is the visible one. When the visibility of a surface is
determined, its intensity value is entered into the refresh buffer.

Area Subdivision Algorithm:-

It was invented by John Warnock and is also called the Warnock Algorithm. It is based on a divide-and-
conquer method and uses the fundamental principle of area coherence. It is used to resolve the visibility of
polygons. It classifies polygons into two cases, i.e., trivial and non-trivial.

Trivial cases are easily handled. Non-trivial cases are divided into four equal sub-windows. The
windows are further subdivided recursively until every remaining window reduces to a trivial case.

*lighting models:-

The illumination model, also known as the shading model or lighting model, is used to calculate the
intensity of light that is reflected at a given point on a surface. There are three factors on which the
lighting effect depends:
 Light Source :
Light source is the light emitting source. There are three types of light sources:
o Point Sources – The source emits rays in all directions (e.g., a bulb in a room).
o Parallel Sources – Can be considered as a point source which is far from the surface
(e.g., the sun).
o Distributed Sources – Rays originate from a finite area (e.g., a tubelight).
Their position, electromagnetic spectrum and shape determine the lighting effect.
 Surface :
When light falls on a surface, part of it is reflected and part of it is absorbed. The surface
structure decides the amount of reflection and absorption of light. The position of the surface
and the positions of all the nearby surfaces also determine the lighting effect.
 Observer :
The observer’s position and sensor spectrum sensitivities also affect the lighting effect.

*Introduction of Shading:-

Shading is referred to as the implementation of the illumination model at the pixel points or polygon
surfaces of the graphics objects.

Shading model is used to compute the intensities and colors to display the surface. The shading model has
two primary ingredients: properties of the surface and properties of the illumination falling on it. The
principal surface property is its reflectance, which determines how much of the incident light is reflected. If
a surface has different reflectance for the light of different wavelengths, it will appear to be colored.

An object's illumination is also significant in computing intensity. The scene may have illumination
that is uniform from all directions, called diffuse illumination.

Shading models determine the shade of a point on the surface of an object in terms of a number of attributes.
The shading model can be decomposed into three parts: a contribution from diffuse illumination, the
contribution from one or more specific light sources, and a transparency effect. Each of these effects
contributes to a shading term E, which is summed to find the total energy coming from a point on an object.
This is the energy a display should generate to present a realistic image of the object. The energy comes not
from a point on the surface but from a small area around the point.

*Constant Intensity Shading

A fast and straightforward method for rendering an object with polygon surfaces is constant intensity
shading, also called Flat Shading. In this method, a single intensity is calculated for each polygon.
All points over the surface of the polygon are then displayed with the same intensity value. Constant
Shading can be useful for quickly displaying the general appearance of a curved surface, as shown
in fig:

In general, flat shading of polygon facets provides an accurate rendering for an object if all of the
following assumptions are valid:-

 The object is a polyhedron and is not an approximation of an object with a curved surface.
 All light sources illuminating the object are sufficiently far from the surface so that N·L and the
attenuation function are constant over the surface (where N is the unit normal to the surface and L
is the unit direction vector to the point light source from a position on the surface).
 The viewing position is sufficiently far from the surface so that V·R is constant over the surface
(where V is the unit vector pointing to the viewer from the surface position and R represents a unit
vector in the direction of ideal specular reflection).

*Gouraud shading:-

This intensity-interpolation scheme, developed by Gouraud and usually referred to as Gouraud
Shading, renders a polygon surface by linearly interpolating intensity values across the surface.
Intensity values for each polygon are matched with the values of adjacent polygons along the
common edges, thus eliminating the intensity discontinuities that can occur in flat shading.

Each polygon surface is rendered with Gouraud Shading by performing the following calculations:

1. Determine the average unit normal vector at each polygon vertex.


2. Apply an illumination model to each vertex to determine the vertex intensity.
3. Linearly interpolate the vertex intensities over the surface of the polygon.

At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons
sharing that vertex, as shown in fig:

When surfaces are to be rendered in color, the intensities of each color component are calculated at
the vertices. Gouraud Shading can be combined with a hidden-surface algorithm to fill in the visible
polygons along each scan line. An example of an object shaded with the Gouraud method appears in
the following figure:

Gouraud Shading removes the intensity discontinuities associated with the constant-shading model,
but it has some other deficiencies. Highlights on the surface are sometimes displayed with
anomalous shapes, and the linear intensity interpolation can cause bright or dark intensity streaks,
called Mach bands, to appear on the surface. These effects can be reduced by dividing the surface
into a greater number of polygon faces or by using other methods, such as Phong shading, which
requires more calculations.
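As a minimal sketch of the linear intensity interpolation at the heart of Gouraud shading (a hedged illustration; the edge intensities are assumed to have already been interpolated from vertex intensities computed by an illumination model):

```python
def lerp(a, b, t):
    """Linear interpolation between two intensity values, t in [0, 1]."""
    return a + t * (b - a)

def gouraud_scanline(i_left, i_right, x_left, x_right):
    """Interpolate the intensity at every pixel of one scan-line span,
    given the intensities at its left and right edges."""
    span = max(x_right - x_left, 1)
    return [lerp(i_left, i_right, (x - x_left) / span)
            for x in range(x_left, x_right + 1)]

# edge intensities 0.2 and 0.8 interpolated across a span from x = 10 to x = 14
print(gouraud_scanline(0.2, 0.8, 10, 14))   # -> approximately [0.2, 0.35, 0.5, 0.65, 0.8]
```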

*Phong Shading

A more accurate method for rendering a polygon surface is to interpolate the normal vector and then
apply the illumination model to each surface point. This method, developed by Phong Bui Tuong, is
called Phong Shading or normal-vector interpolation shading. It displays more realistic highlights on
a surface and greatly reduces the Mach-band effect.

A polygon surface is rendered using Phong shading by carrying out the following steps:

1. Determine the average unit normal vector at each polygon vertex.


2. Linearly interpolate the vertex normals over the surface of the polygon.
3. Apply an illumination model along each scan line to calculate projected pixel intensities for
the surface points.

Interpolation of the surface normal along a polygon edge between two vertices is shown in fig:
Incremental methods are used to evaluate normals between scan lines and along each scan line. At
each pixel position along a scan line, the illumination model is applied to determine the surface
intensity at that point.

Intensity calculations using an approximated normal vector at each point along the scan line produce
more accurate results than the direct interpolation of intensities, as in Gouraud Shading. The trade-
off, however, is that Phong shading requires considerably more calculations.

UNIT 7
Web Graphics Design

*Graphics file Format:-


Graphic images are stored digitally using a small number of standardized graphic file formats, including bit map, TIFF, JPEG,
GIF, PNG; they can also be stored as raw, unprocessed data.
There are likely billions of graphic images available on the World Wide Web, and with few exceptions, almost any user can
view any of them with no difficulty. This is because all those images are stored in what amounts to a handful of file formats.

*BMP:-

The simplest way to define a raster graphic image is by using color-coded information for each pixel on each row. This is the
basic bit-map format used by Microsoft Windows.
The disadvantage of this type of image is that it can waste large amounts of storage. Where there’s an area with a solid color, for
example, we don’t need to repeat that color information for every new contiguous pixel. Instead, we can instruct the computer to
repeat the current color until we change it. This type of space-saving trick is the basis of compression, which allows us to store
the graphic using fewer bytes. Most Web graphics today are compressed so that they can be transmitted more quickly. Some
compression techniques will save space yet preserve all the information that’s in the image. That’s called “lossless”
compression. Other types of compression can save a lot more space, but the price you pay is degraded image quality. This is
known as “lossy” compression.

*TIFF:-

Most graphics file formats were created with a particular use in mind, although most can be used for a wide variety of image
types. Another common bit-mapped image type is Tagged Image File Format, which is used in faxing, desktop publishing and
medical imaging. TIFF is actually a “container” that can hold bit maps and JPEGs and allows (but doesn’t require) various types
of compression.
*JPEG:-

The Joint Photographic Experts Group created the JPEG standard in 1990 for the efficient compression of photographic images.
JPEG allows varying levels of lossy compression, letting you trade off quality against file size. Progressive JPEG is a way to
rearrange the graphic data to permit a rough view of the entire image even when only a small portion of the file has been
downloaded. If an image has flat areas of single color that transition sharply to contiguous areas, JPEG doesn’t work as well as
GIF.

JPEG 2000 is a wavelet-based standard designed to supersede the original. It offers improved compression, including lossless
compression, and supports multiple resolutions in a single file, but it has only limited support in current Web browsers.

*GIF:-

The Graphic Interchange Format takes an image and re-creates it using a palette of no more than 256 colors. These palettes can
be totally different for different images. GIF is a very efficient format that achieves very good compression for non
photographic images. GIF also permits the creation of animated images by allowing a file to contain several different frames
(each with its own palette) and to switch between them with a specified delay. In addition, GIF images are one of the few types
that can have a transparent background, meaning that there’s no need to always display a rectangular area.

*PNG:-

Portable Network Graphics is a standard developed in 1996 as an alternative to and improvement on GIF, but without the patent
issues and palette restrictions. PNG can compress an image more than GIF and supports improved background
transparency/opacity but allows only single images, without animation.

Common Graphics File Formats Compared

 Graphics Interchange Format: file extension .gif; compression: Lempel-Ziv-Welch (LZW) algorithm;
principal application/usage: flat-color graphics, animation; patented: expired; originated by CompuServe.
 Joint Photographic Experts Group: file extension .jpg; compression: lossy (loses some data);
principal application/usage: photographic images; patented: disputed; originated by the Joint Photographic
Experts Group.
 Portable Network Graphics: file extension .png; compression: lossless; principal application/usage:
replacement for GIF; patented: no; originated by the World Wide Web Consortium.
 Raw negative: file extension varies; compression: none; principal application/usage: high-end digital
cameras; patented: no; originated by individual equipment makers.
 Tagged Image File Format: file extension .tif; compression: various or none; principal
application/usage: document imaging, scanning; patented: no; originated by Adobe Systems Inc.
 Windows bit map: file extension .bmp; compression: none; principal application/usage: on-screen
display; patented: no; originated by Microsoft Corp.

*Principles of web graphics design:-

1) Browser safe color:-

Colors in GIF files are normally represented by combining values for red, green, and blue, each of which ranges from 0 to 255,
or in hexadecimal notation, from 00 to FF. With maximum values for all three (which can be coded in HTML as #FFFFFF), the
resulting color is white. With minimum values (#000000), the color is black. #FF0000 is red, #00FF00 is green, and #0000FF is
blue. #FFFF00 is yellow. #FF00FF is magenta. Many other colors can be made using intermediate values.

If web pages could use all possible combinations of these, there would be about 16 million possible colors. Old web browsers and
computers will not handle this full range, and there are differences between different browsers and different computers. There is
a limited set of 216 colors that all browsers and most computers can handle cleanly. These are the colors made up of the
hexadecimal values 00, 33, 66, 99, CC, and FF, or the equivalent decimal values 0, 51, 102, 153, 204, and 255. These six values
produce 216 possible combinations of red, green, and blue, and they will look the same on nearly all computers.
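As a minimal sketch (an illustration, not part of the original text), an arbitrary color can be snapped to the nearest browser-safe value by rounding each channel to the nearest multiple of 51:

```python
def to_web_safe(r, g, b):
    """Round each 0-255 channel to the nearest of 0, 51, 102, 153, 204, 255."""
    snap = lambda v: round(v / 51) * 51
    return tuple(snap(v) for v in (r, g, b))

print(to_web_safe(120, 200, 10))                     # -> (102, 204, 0)
print('#%02X%02X%02X' % to_web_safe(120, 200, 10))   # -> #66CC00
```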

2) Anti- aliasing:-

Pictures on screen are made up of square pixels. When a diagonal or curved line is represented, what may look like a smooth
line is actually made up of jagged lines, as shown in the magnified picture below:

This stair-step appearance is called aliasing. Aliasing results from the attempt to represent a curved or diagonal line with
edges that can only be horizontal or vertical.

In high-resolution images, the aliasing is hard to see, so the eye is fooled into perceiving a smooth line despite the jagged
edges. With lower resolution, the jagged edges can be softened by a technique called anti-aliasing. Anti-aliasing inserts pixels
of an intermediate color between the main image and its background. Anti-aliasing is illustrated in the magnified picture below:

The intermediate color pixels fool the eye even further and give the appearance of smoother edges. Back away from the screen a
bit and compare the aliased and anti-aliased images.

Many graphics programs like PhotoShop automatically anti-alias GIF graphical images by default. There are some potential
problems with this, however. First, by putting in intermediate colors, it does increase the size of a GIF file. This may or may not
be acceptable. But a worse problem occurs when an anti-aliased transparent GIF picture is created against one background, but
then displayed against another color background. See the results below:

The above image is exactly the same as the anti-aliased picture above it. Only the background color has been changed. You will
sometimes see this halo effect around a picture on a web page. It happens when a transparent GIF is created against the wrong
background color. Watch out for it on your own web pages.

3) Browser safe size :-

More people are using their phones or other devices to browse the web. It is important to consider building your website with a
responsive layout, so that your website can adjust to different screens.

4) Resolution:-

Resolution measures the number of pixels in a digital image or display. It is defined as width by height, or W x H, where W is
the number of horizontal pixels and H is the number of vertical pixels. For example, the resolution of an HDTV is 1920 x 1080.

Image Resolution:-
A digital photo that is 3,088 pixels wide by 2,320 pixels tall has a resolution of 3088 × 2320. Multiplying these numbers
together produces 7,164,160 total pixels. Since the photo contains just over seven million pixels, it is considered a
"7 megapixel" image. Digital camera resolution is often measured in megapixels, which is simply another way to express the
image resolution.

Display Resolution:-
Every monitor and screen has a specific resolution. As mentioned above, an HD display has a resolution of 1920 x 1080 pixels.
A 4K display has twice the resolution of HD, or 3840 x 2160 pixels. It is called "4K" since the screen is nearly 4,000 pixels
across horizontally. The total number of pixels in a 4K display is 8,294,400, or just over eight megapixels.
Monitor resolution defines how many pixels a screen can display, but it does not describe how fine the image is. For example, a
27" iMac 5K display has a resolution of 5120 x 2880 while an older 27" Apple Thunderbolt Display has exactly half the
resolution of 2560 x 1440. Since the 27" iMac is the same physical size as the Thunderbolt Display but has twice the resolution,
it has twice the pixel density, measured in pixels per inch, or PPI.
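As a minimal sketch of these calculations (an illustration; the PPI figure is derived from the pixel dimensions and the 27-inch diagonal, with results rounded):

```python
import math

def megapixels(width, height):
    return width * height / 1_000_000

def ppi(width, height, diagonal_inches):
    """Pixels per inch from the pixel dimensions and the physical diagonal size."""
    diagonal_pixels = math.hypot(width, height)
    return diagonal_pixels / diagonal_inches

print(round(megapixels(3088, 2320), 1))    # -> 7.2 (a "7 megapixel" photo)
print(round(megapixels(3840, 2160), 1))    # -> 8.3 (4K display)
print(round(ppi(5120, 2880, 27)))          # -> approx. 218 (27" iMac 5K)
print(round(ppi(2560, 1440, 27)))          # -> approx. 109 (27" Thunderbolt Display)
```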

5) Background :-

 F-SHAPED PATTERN READING:-

The F-shaped pattern is the most common way visitors scan text on a website. Eye-tracking studies have found that most
of what people see is in the top and left areas of the screen. The F-shaped layout mimics our natural pattern of reading in
the West (left to right and top to bottom). An effectively designed website will work with a reader's natural pattern of
scanning the page.

 VISUAL HIERARCHY :-

Visual hierarchy is the arrangement of elements in order of importance. This is done by size, colour, imagery,
contrast, typography, whitespace, texture and style. One of the most important functions of visual hierarchy is to
establish a focal point; this shows visitors where the most important information is.

 GRID BASED LAYOUT:-

Grids help to structure your design and keep your content organised. The grid helps to align elements on the page and
keep it clean. A grid-based layout arranges content into a clean, rigid grid structure with columns and sections that line
up, feel balanced, impose order, and result in an aesthetically pleasing website.

 LOAD TIME:-

Making visitors wait for a website to load will lose them. Nearly half of web visitors expect a site to load in 2 seconds
or less, and they will potentially leave a site that isn't loaded within 3 seconds. Optimising image sizes will help your
site load faster.
