Lec1 CG


Computer Graphics

Computer graphics is the art of drawing pictures, lines, and charts on a
computer screen using a programming language.
It is also a branch of computer science that deals with the theory and
technology for computerized image synthesis.
A computer graphics object is a collection of discrete picture elements (pixels).
A pixel is the smallest screen element.
The area of computer graphics that is responsible for converting a continuous
figure, such as a line segment, into its discrete approximation is called scan
conversion.
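
As a small illustration, the following Python sketch samples a continuous line
segment and rounds each sample to the nearest pixel; the function name is only
illustrative, and practical scan-conversion algorithms (e.g., Bresenham's line
algorithm) are more efficient.

def scan_convert_line(x0, y0, x1, y1):
    """Return a list of pixel coordinates approximating the segment."""
    steps = max(abs(x1 - x0), abs(y1 - y0))      # number of samples along the segment
    pixels = []
    for i in range(int(steps) + 1):
        t = i / steps if steps else 0.0          # parameter along the segment
        x = round(x0 + t * (x1 - x0))            # nearest pixel column
        y = round(y0 + t * (y1 - y0))            # nearest pixel row
        if (x, y) not in pixels:
            pixels.append((x, y))
    return pixels

print(scan_convert_line(0, 0, 5, 2))   # [(0,0), (1,0), (2,1), (3,1), (4,2), (5,2)]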

Tasks of Computer Graphics:

1. Imaging: Representing 2D images.
2. Modeling: Representing 3D objects.
3. Rendering: Generating an image from a 2D or 3D model by means of a
computer program.
4. Animation: Simulating changes of objects over time.

Computer Graphics Applications:

1. Display of information: displaying scientific, engineering, and medical
data.
2. Design: engineering and architectural systems, e.g. the design of
buildings.
3. Simulation: computer-generated models of physical, financial, and
economic systems, used as educational aids.
4. Computer art.
5. Entertainment: movies and games.

Graphics Models:
• Raster Graphics: In raster graphics, an image is drawn as a grid of pixels.
It is also known as a bitmap image, in which the picture is divided into
small pixels; a bitmap is essentially a large number of pixels taken together.
• Vector Graphics: In vector graphics, mathematical formulae are used to
draw different types of shapes, lines, objects, and so on.

                    Raster                          Vector
How it is produced  Drawn by dots (pixels)          Drawn by a sequence of commands
Resolution          Resolution dependent            Resolution independent
Scan display        Raster scan display             Random scan display
Software            Photoshop                       Illustrator
Size                Large (depends on resolution)   Small
File extensions     BMP, TIFF, GIF, JPG, PNG        SVG, PDF, AI, EPS, DXF
Details             More detailed image             Less detailed image

Image Representation
A digital image, or image for short, is composed of discrete pixels or picture
elements. These pixels are arranged in a row-and-column fashion to form a
rectangular picture area, sometimes referred to as a raster. Clearly the total
number of pixels in an image is a function of the size of the image and the
number of pixels per unit length (e.g. inch) in the horizontal as well as the
vertical direction. This number of pixels per unit length is referred to as the
resolution of the image. Thus a 3 x 2 inch image at a resolution of 300 pixels per
inch would have a total of 900 x 600 = 540,000 pixels.
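
This arithmetic can be checked with a small Python sketch (the function name is
illustrative):

def total_pixels(width_in, height_in, ppi):
    # pixels across = width_in * ppi, pixels down = height_in * ppi
    return (width_in * ppi) * (height_in * ppi)

print(total_pixels(3, 2, 300))   # 540000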

Frequently image size is given as the total number of pixels in the horizontal
direction times the total number of pixels in the vertical direction (e.g., 512 x
512, 640 x 480, or 1024 x 768). Although this convention makes it relatively
straightforward to gauge the total number of pixels in an image, it does not
specify the size of the image or its resolution, as defined in the paragraph above.
A 640 x 480 image would measure 6.67 inches by 5 inches when presented (e.g.,
displayed or printed) at 96 pixels per inch. On the other hand, it would measure
1.6 inches by 1.2 inches at 400 pixels per inch.
The ratio of an image's width to its height, measured in unit length or number of
pixels, is referred to as its aspect ratio. Both a 2 x 2 inch image and a 512 x 512
image have an aspect ratio of 1/1, whereas both a 6 x 4.5 inch image and a
1024 x 768 image have an aspect ratio of 4/3.
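
These relationships can be summarized in a short Python sketch (function names
are illustrative):

from math import gcd

def physical_size(px_w, px_h, ppi):
    # physical width and height in inches at a given resolution
    return px_w / ppi, px_h / ppi

def aspect_ratio(px_w, px_h):
    # reduce a width:height pixel ratio to lowest terms
    g = gcd(px_w, px_h)
    return px_w // g, px_h // g

print(physical_size(640, 480, 96))    # (6.666..., 5.0)
print(physical_size(640, 480, 400))   # (1.6, 1.2)
print(aspect_ratio(1024, 768))        # (4, 3)
print(aspect_ratio(512, 512))         # (1, 1)
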
Individual pixels in an image can be referenced by their coordinates. Typically,
the pixel at the lower left corner of an image is considered to be at the origin
(0,0) of a pixel coordinate system. Thus the pixel at the lower right corner of a
640 x 480 image would have coordinates (639,0), whereas the pixel at the upper
right corner would have coordinates (639,479).
The task of composing an image on a computer is essentially a matter of setting
pixel values. The collective effects of the pixels taking on different color
attributes give us what we see as a picture.
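
A sketch of this coordinate convention in Python, assuming the image is stored
as a list of rows with row 0 at the bottom (an assumption made here for
illustration; file formats often store rows top to bottom, as discussed later):

WIDTH, HEIGHT = 640, 480

# image[y][x] holds the value of the pixel at coordinates (x, y),
# with (0, 0) at the lower left corner
image = [[0] * WIDTH for _ in range(HEIGHT)]

def set_pixel(img, x, y, value):
    img[y][x] = value                        # row y, column x

set_pixel(image, 0, 0, 1)                    # lower left corner: (0, 0)
set_pixel(image, WIDTH - 1, 0, 1)            # lower right corner: (639, 0)
set_pixel(image, WIDTH - 1, HEIGHT - 1, 1)   # upper right corner: (639, 479)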

The RGB Color Model
Color is a complex, interdisciplinary subject spanning from physics to
psychology. In this section we only introduce the basics of the most widely used
color representation method in computer graphics. Figure 1 shows a color
coordinate system with three primary colors: R (red), G (green), and B (blue).
Each primary color can take on an intensity value ranging from 0 (off—lowest)
to 1 (on—highest). Mixing these three primary colors at different intensity
levels produces a variety of colors. The collection of all the colors obtainable by
such a linear combination of red, green, and blue forms the cube-shaped RGB
color space. The corner of the RGB color cube that is at the origin of the
coordinate system corresponds to black, whereas the corner of the cube that is
diagonally opposite to the origin represents white. The diagonal line connecting
black and white corresponds to all the gray colors between black and white. It is
called the gray axis.

Fig. 1 The RGB color space.

Given this RGB color model, an arbitrary color within the cubic color space can
be specified by its color coordinates: (r, g, b). For example, we have (0,0,0) for
black, (1,1,1) for white, (1,1,0) for yellow, etc. A gray color at (0.7,0.7,0.7) has
an intensity halfway between one at (0.9,0.9,0.9) and one at (0.5,0.5,0.5).
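
A trivial Python check of these color coordinates (names are illustrative):

# a few corners of the RGB unit cube and a test for the gray axis
BLACK, WHITE, YELLOW = (0, 0, 0), (1, 1, 1), (1, 1, 0)

def is_gray(r, g, b):
    # points on the gray axis have equal components
    return r == g == b

print(is_gray(0.7, 0.7, 0.7), is_gray(*YELLOW))   # True False
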
Color specification using the RGB model is an additive process. We begin with
black and add on the appropriate primary components to yield a desired color.
This closely matches the working principles of the display monitor. On the
other hand, there is a complementary color model, called the CMY color model,
that defines colors using a subtractive process, which closely matches the
working principles of the printer.
In the CMY model we begin with white and take away the appropriate primary
components to yield a desired color. For example, if we subtract red from white,
what remains consists of green and blue, which is cyan. Looking at this from
another perspective, we can use the amount of cyan, the complementary color of
red, to control the amount of red, which is equal to one minus the amount of
cyan. Figure 2 shows a coordinate system using the three primaries'
complementary colors: C (cyan), M (magenta), and Y (yellow). The corner of
the CMY color cube that is at (0,0,0) corresponds to white, whereas the corner
of the cube that is at (1,1,1) represents black (no red, no green, no blue). The
following formulas summarize the conversion between the two color models:

(C, M, Y) = (1, 1, 1) - (R, G, B)

(R, G, B) = (1, 1, 1) - (C, M, Y)
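
A minimal Python sketch of this conversion (function names are illustrative):

def rgb_to_cmy(r, g, b):
    # subtractive complement: each CMY component is one minus the RGB component
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    return 1.0 - c, 1.0 - m, 1.0 - y

print(rgb_to_cmy(1.0, 1.0, 0.0))   # yellow -> (0.0, 0.0, 1.0)
print(cmy_to_rgb(1.0, 0.0, 0.0))   # cyan   -> (0.0, 1.0, 1.0): green plus blue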

Fig. 2 The CMY color space.

1. Direct Coding
Image representation is essentially the representation of pixel colors. Using
direct coding we allocate a certain amount of storage space for each pixel to
code its color. For example, we may allocate 3 bits for each pixel, with one bit
for each primary color (see Table 1). This 3-bit representation allows each
primary to vary independently between two intensity levels: 0 (off) or 1 (on).
Hence each pixel can take on one of the eight colors that correspond to the
corners of the RGB color cube.

bit 1: r   bit 2: g   bit 3: b   Color Name
   0          0          0       Black
   0          0          1       Blue
   0          1          0       Green
   0          1          1       Cyan
   1          0          0       Red
   1          0          1       Magenta
   1          1          0       Yellow
   1          1          1       White

Table 1. Direct coding of colors using 3 bits.
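
A Python sketch of this 3-bit direct coding (function names are illustrative):

# one bit per primary: bit 2 = r, bit 1 = g, bit 0 = b
NAMES = ["Black", "Blue", "Green", "Cyan", "Red", "Magenta", "Yellow", "White"]

def encode_rgb1(r, g, b):
    # r, g, b are 0 or 1
    return (r << 2) | (g << 1) | b

def decode_rgb1(code):
    return (code >> 2) & 1, (code >> 1) & 1, code & 1

print(encode_rgb1(1, 1, 0), NAMES[encode_rgb1(1, 1, 0)])   # 6 Yellow
print(decode_rgb1(5))                                      # (1, 0, 1), i.e. magenta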

A widely accepted industry standard uses 3 bytes, or 24 bits, per pixel, with one
byte for each primary color. This way we allow each primary color to have 256
different intensity levels, corresponding to binary values from 00000000 to
11111111. Thus a pixel can take on a color from 256 x 256 x 256 or 16.7 million
possible choices. This 24-bit format is commonly referred to as the true color
representation, for the difference between two colors that differ by one intensity
level in one or more of the primaries is virtually undetectable under normal
viewing conditions. Hence a more precise representation involving more bits is
of little use in terms of perceived color accuracy.
A notable special case of direct coding is the representation of black-and-white
(bi-level) and gray-scale images, where the three primaries always have the
same value and hence need not be coded separately. A black-and-white image
requires only one bit per pixel, with bit value 0 representing black and
1 representing white. A gray-scale image is typically coded with 8 bits per pixel
to allow a total of 256 intensity or gray levels.
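
A Python sketch of the 24-bit packing, with one byte per primary (function
names are illustrative):

def pack24(r, g, b):
    # r, g, b are integers in 0..255, one byte per primary
    return (r << 16) | (g << 8) | b

def unpack24(value):
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

def gray8(level):
    # 8-bit gray scale: all three primaries share the same level (0..255)
    return pack24(level, level, level)

print(hex(pack24(255, 255, 0)))   # 0xffff00 (yellow)
print(unpack24(0x4080C0))         # (64, 128, 192)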

Although this direct coding method features simplicity and has supported a
variety of applications, we can see a relatively high demand for storage space
when it comes to the 24-bit standard. For example, a 1000 x 1000 true color
image would take up three million bytes. Furthermore, even if every pixel in
that image had a different color, there would only be one million colors in the
image. In many applications the number of colors that appear in any one
particular image is much less. Therefore the 24-bit representation's ability to
have 16.7 million different colors appear simultaneously in a single image
seems to be somewhat overkill.

2. Lookup Table
Image representation using a lookup table can be viewed as a compromise
between our desire to have a lower storage requirement and our need to support
a reasonably sufficient number of simultaneous colors. In this approach pixel
values do not code colors directly. Instead, they are addresses or indices into a
table of color values. The color of a particular pixel is determined by the color
value in the table entry that the value of the pixel references.
Figure 3 shows a lookup table with 256 entries. The entries have addresses 0
through 255. Each entry contains a 24-bit RGB color value. Pixel values are
now 1-byte, or 8-bit, quantities. The color of a pixel whose value is i, where
0 ≤ i ≤ 255, is determined by the color value in the table entry whose address is
i. This 24-bit 256-entry lookup table representation is often referred to as the 8-
bit format. It reduces the storage requirement of a 1000 x 1000 image to one
million bytes plus 768 bytes for the color values in the lookup table. It allows
256 simultaneous colors that are chosen from 16.7 million possible colors.
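
A Python sketch comparing the storage of the two representations for a
1000 x 1000 image and showing how a pixel value is resolved through the color
map (names are illustrative):

WIDTH, HEIGHT = 1000, 1000

direct_24bit_bytes = WIDTH * HEIGHT * 3               # 3,000,000 bytes
lookup_8bit_bytes  = WIDTH * HEIGHT * 1 + 256 * 3     # 1,000,000 + 768 bytes
print(direct_24bit_bytes, lookup_8bit_bytes)

# the color map: 256 entries, each a 24-bit (r, g, b) value
color_map = [(0, 0, 0)] * 256
color_map[1] = (255, 255, 0)        # entry 1 holds yellow

# pixel values are indices into the color map
pixel_value = 1
print(color_map[pixel_value])       # (255, 255, 0)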

Fig. 3 A 24-bit 256-entry lookup table.

It is important to remember that, using the lookup table representation, an image
is defined not only by its pixel values (data matrix) but also by the color values
in the corresponding lookup table. Those color values form a color map for the
image.

Display Monitor
Among the numerous types of image presentation or output devices that convert
digitally represented images into visually perceivable pictures is the display or
video monitor.
We first take a look at the working principle of a monochromatic display
monitor, which consists mainly of a cathode ray tube (CRT) along with related
control circuits. The CRT is a vacuum glass tube with the display screen at one
end and connectors to the control circuits at the other (see Fig. 4). Coated on the
inside of the display screen is a special material, called phosphor, which emits
light for a period of time when hit by a beam of electrons. The color of the light
and the time period vary from one type of phosphor to another.

Fig. 4 Anatomy of a monochromatic CRT

Opposite to the phosphor-coated screen is an electron gun that is heated to send
out electrons. The electrons are regulated by the control electrode and forced by
the focusing electrode into a narrow beam striking the phosphor coating at small
spots. When this electron beam passes through the horizontal and vertical
deflection plates, it is bent or deflected by the electric fields between the plates.
The horizontal plates control the beam to scan from left to right and retrace
from right to left. The vertical plates control the beam to go from the first scan
line at the top to the last scan line at the bottom and retrace from the bottom
back to the top. These actions are synchronized by the control circuits so that
the electron beam strikes each and every pixel position in a scan-line-by-scan-line
fashion. As an alternative to this electrostatic deflection method, some
CRTs use magnetic deflection coils mounted on the outside of the glass
envelope to bend the electron beam with magnetic fields.
The intensity of the light emitted by the phosphor coating is a function of the
intensity of the electron beam. The control circuits shut off the electron beam
during horizontal and vertical retraces. The intensity of the beam at a particular
pixel position is determined by the intensity value of the corresponding pixel in
the image being displayed.

Color Display
Moving on to color displays there are now three electron guns instead of one
inside the CRT (see Fig. 5), with one electron gun for each primary color. The
phosphor coating on the inside of the display screen consists of dot patterns of
three different types of phosphors. These phosphors are capable of emitting red,
green, and blue light, respectively. The distance between the centers of the dot
patterns is called the pitch of the color CRT.
A thin metal screen called a shadow mask is placed between the phosphor
coating and the electron guns. The tiny holes on the shadow mask constrain
each electron beam to hit its corresponding phosphor dots. When viewed at a
certain distance, light emitted by the three types of phosphors blends together to
give us a broad range of colors.

Fig. 5 Color CRT using a shadow mask.

Printer
Another typical image presentation device is the printer. A printer deposits color
pigments onto a print medium, changing the light reflected from its surface and
making it possible for us to see the print result.
Given the fact that the most commonly used print medium is a piece of white
paper, we can in principle utilize three types of pigments (cyan, magenta, and
yellow) to regulate the amount of red, green, and blue light reflected to yield all
RGB colors. However, in practice, an additional black pigment is often used
due to the relatively high cost of color pigments and the technical difficulty
associated with producing high-quality black from several color pigments.
While some printing methods allow color pigments to blend together, in many
cases the various color pigments remain separate in the form of tiny dots on the
print media. Furthermore, the pigments are often deposited with a limited
number of intensity levels. There are various techniques to achieve the effect of
multiple intensity levels beyond what the pigment deposits can offer.

Image Files
A digital image is often encoded in the form of a binary file for the purpose of
storage and transmission. Among the numerous encoding formats are BMP
(Windows Bitmap), JPEG (Joint Photographic Experts Group File Interchange
Format), and TIFF (Tagged Image File Format). Although these formats differ
in technical details, they share structural similarities.
Figure 6 shows the typical organization of information encoded in an image file.
The file consists largely of two parts: header and image data.

At the beginning of the file header, a binary code or ASCII string identifies
format being used, possibly along with the version number. The width and
height of the image are given in numbers of pixels.
Common image types include black and white (1 bit per pixel), 8-bit gray scale
(256 levels along the gray axis), 8-bit color (lookup table), and 24-bit color.
Image data format specifies the order in which pixel values are stored in the
image data section. A commonly used order is left to right and top to bottom.
Another possible order is left to right and bottom to top.
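
For illustration only, the Python sketch below packs and unpacks a simplified
header of the kind described here, using the standard struct module; the field
layout is hypothetical and does not correspond to any real format such as BMP
or TIFF.

import struct

# hypothetical layout: 4-byte ASCII identifier, 1-byte version,
# 4-byte width, 4-byte height, 1-byte image type, 1-byte data-order flag
HEADER_FORMAT = "<4sBIIBB"

def write_header(identifier, version, width, height, image_type, top_down):
    return struct.pack(HEADER_FORMAT, identifier, version, width, height,
                       image_type, 1 if top_down else 0)

def read_header(data):
    return struct.unpack_from(HEADER_FORMAT, data)

header = write_header(b"IMG1", 1, 640, 480, 24, True)
print(read_header(header))   # (b'IMG1', 1, 640, 480, 24, 1)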

Fig. 6 Typical image file format.

The values in the image data section may be compressed, using such
compression algorithms as run-length encoding (RLE). The basic idea behind
RLE can be illustrated with a character string "xxxxxxyyzzzz", which takes 12
bytes of storage. Now if we scan the string from left to right for segments of
repeating characters and replace each segment by a 1-byte repeat count
followed by the character being repeated, we convert or compress the given
string to "6x2y4z", which takes only 6 bytes.
This compressed version can be expanded or decompressed by repeating the
character following each repeat count to recover the original string.
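
A minimal Python sketch of this run-length scheme, kept character-based for
readability (an actual image codec would store binary repeat counts and pixel
values; the function names are illustrative):

def rle_encode(s):
    out = []
    i = 0
    while i < len(s):
        run = 1
        while i + run < len(s) and s[i + run] == s[i]:
            run += 1
        out.append(f"{run}{s[i]}")       # repeat count followed by the character
        i += run
    return "".join(out)

def rle_decode(s):
    out = []
    count = ""
    for ch in s:
        if ch.isdigit():
            count += ch                  # accumulate the repeat count
        else:
            out.append(ch * int(count))  # expand the run
            count = ""
    return "".join(out)

print(rle_encode("xxxxxxyyzzzz"))        # 6x2y4z
print(rle_decode("6x2y4z"))              # xxxxxxyyzzzz
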
The length of the file header is often fixed, for otherwise it would be necessary
to include length information in the header to indicate where image data starts
(some formats include header length anyway).

Setting the Color Attributes of Pixels
Setting the color attributes of individual pixels is arguably the most primitive
graphics operation. It is typically done by making system library calls to write
the respective values into the frame buffer. An aggregate data structure, such as
a three-element array, is often used to represent the three primary color
components. Regardless of image type (direct coding versus lookup table), there
are two possible protocols for the specification of pixel coordinates and color
values.
• In one protocol the application provides both coordinate information and
color information simultaneously. Thus a call to set the pixel at location
(x, y) in a 24-bit image to color (r, g, b) would look like
setPixel(x, y, rgb)
where rgb is a three-element array with rgb[0] = r, rgb[1] = g, and
rgb[2]=b. On the other hand, if the image uses a lookup table then,
assuming that the color is defined in the table, the call would look like
setPixel(x, y, i)
where i is the address of the entry containing (r, g, b).

• Another protocol is based on the existence of a current color, which is
maintained by the system and can be set by calls that look like
setColor(rgb) for direct coding, or setColor(i) for the lookup table
representation. Calls to set pixels now need only to provide coordinate
information and would look like setPixel(x, y) for both image types. The
graphics system will automatically use the most recently specified current
color to carry out the operation.
Lookup table entries can be set from the application by a call that looks
like setEntry(i, rgb) which puts color (r, g, b) in the entry whose address
is i. Conversely, values in the lookup table can be read back to the
application with a call that looks like getEntry(i, rgb) which returns the
color value in entry i through array parameter rgb (both protocols are sketched below).
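
A minimal Python sketch of both protocols over an in-memory frame buffer; the
class and method names mirror the hypothetical calls above and do not belong to
any real graphics API.

class FrameBuffer:
    """A toy frame buffer supporting direct coding or a lookup table."""

    def __init__(self, width, height, use_lookup_table=False):
        self.width, self.height = width, height
        self.use_lookup_table = use_lookup_table
        self.pixels = [[0] * width for _ in range(height)]  # pixel values (colors or indices)
        self.table = [(0, 0, 0)] * 256                      # color map, used in lookup-table mode
        self.current = 0 if use_lookup_table else (0, 0, 0) # current color (or index)

    # protocol 1: coordinates and color supplied together, like setPixel(x, y, rgb) or setPixel(x, y, i)
    def set_pixel_color(self, x, y, color_or_index):
        self.pixels[y][x] = color_or_index

    # protocol 2: a current color maintained by the system, like setColor(...) then setPixel(x, y)
    def set_color(self, color_or_index):
        self.current = color_or_index

    def set_pixel(self, x, y):
        self.pixels[y][x] = self.current

    # lookup table access, like setEntry(i, rgb) and getEntry(i, rgb)
    def set_entry(self, i, rgb):
        self.table[i] = tuple(rgb)

    def get_entry(self, i):
        return self.table[i]

fb = FrameBuffer(640, 480)                  # direct coding
fb.set_pixel_color(10, 20, (255, 0, 0))     # protocol 1
fb.set_color((0, 255, 0))                   # protocol 2: set the current color...
fb.set_pixel(11, 20)                        # ...then set pixels by coordinates only

lut = FrameBuffer(640, 480, use_lookup_table=True)
lut.set_entry(5, (255, 255, 0))             # put yellow in entry 5
lut.set_pixel_color(0, 0, 5)                # the pixel stores index 5
print(lut.get_entry(lut.pixels[0][0]))      # (255, 255, 0)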

There are sometimes two versions of the calls that specify RGB values. One
takes RGB values as floating point numbers in the range of [0.0,1.0],
whereas the other takes them as integers in the range of [0,255]. Although
the floating point version is handy when the color values come from some
continuous formula, the floating point values are mapped by the graphics
system into integer values before being written into the frame buffer.
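
A sketch of that mapping in Python (the exact rounding rule is an assumption;
systems may differ):

def float_to_byte(value):
    # map [0.0, 1.0] to [0, 255]; clamp first so out-of-range input is safe
    value = min(max(value, 0.0), 1.0)
    return int(round(value * 255))

print(float_to_byte(0.0), float_to_byte(0.5), float_to_byte(1.0))   # 0 128 255
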
In order to provide basic support for pixel-based image-processing
operations there are calls that look like getPixel(x, y, rgb) for direct coding
or getPixel(x, y, i) for the lookup table representation to return the color or
index value of the pixel at (x, y) back to the application.
There are also calls that read and write rectangular blocks of pixels. A useful
example would be a call to set all pixels to a certain background color.
Assuming that the system uses a current color we would first set the current
color to be the desired background color, and then make a call that looks
like clear() to achieve the goal.
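
A standalone Python sketch of getPixel and a current-color clear (names are
illustrative):

def get_pixel(pixels, x, y):
    # return the color or index value stored for the pixel at (x, y)
    return pixels[y][x]

def clear(pixels, current_color):
    # set every pixel to the current (background) color
    for row in pixels:
        for x in range(len(row)):
            row[x] = current_color

pixels = [[(0, 0, 0)] * 4 for _ in range(3)]
clear(pixels, (30, 30, 30))            # fill with a dark gray background
print(get_pixel(pixels, 2, 1))         # (30, 30, 30)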
