CG Module 3 Notes
INPUT DEVICES
✓ Input devices are the devices which provide input to the computer
graphics application program. Input devices can be described in two
ways: as physical devices (such as a keyboard or a mouse) and as logical devices.
➔ Logical devices are defined by how they handle input or output character
strings from the perspective of the application (C) program.
➔ From the logical-device perspective, inputs come from inside the application
program. The two major characteristics that describe the logical behavior of input
devices are as follows:
• The measurements that the device returns to the user program
• The time when the device returns those measurements
The API defines six classes of logical input devices, which are given below:
1. STRING: (A device for specifying text input) A string device is a logical device
that provides the ASCII values of input characters to the user program. This logical
device is usually implemented by means of a physical keyboard.
2. LOCATOR: (A device for specifying one coordinate position) A locator device
provides a position in world coordinates to the user program. It is usually
implemented by means of a pointing device such as a mouse or trackball.
3. PICK: (A device for selecting a component of a picture) A pick device returns the
identifier of an object on the display to the user program. It is usually
implemented with the same physical device as the locator but has a separate
software interface to the user program. In OpenGL, we can use a process called
selection to implement picking.
4. CHOICE: (A device for selecting a menu option) A choice device allows the user
to select one of a discrete number of options. In OpenGL, we can use the various
widgets provided by the window system. A widget is a graphical interactive
component provided by the window system or a toolkit; widgets include
menus, scrollbars, and graphical buttons. For example, a menu with n selections
acts as a choice device, allowing the user to select one of n alternatives.
5. VALUATORS: (A device for specifying a scalar value) Valuators provide analog input
to the user program; on some graphics systems, there are boxes or dials to
provide these values.
6. STROKE: (A device for specifying a set of coordinate positions) A stroke device
returns an array of locations. For example, pushing down a mouse button starts the
transfer of data into a specified array, and releasing the button ends this transfer.
INPUT MODES
Input devices can provide input to an application program in terms of two
entities:
1. The measure of a device is what the device returns to the application
program.
2. The trigger of a device is a physical input on the device with which the user
can send a signal to the computer.
Example 1: The measure of a keyboard is a single character or an array of
characters, whereas the trigger is the Enter key.
Example 2: The measure of a mouse is the position of the cursor, whereas the
trigger is a press of a mouse button.
The application program can obtain the measure and trigger in three distinct
modes:
1. REQUEST MODE: In this mode, measure of the device is not returned to
the program until the device is triggered.
• For example, consider a typical C program which reads character input
using scanf(). When the program needs the input, it halts when it
encounters the scanf() statement and waits while the user types characters at
the terminal. The data is placed in a keyboard buffer (the measure), whose
contents are returned to the program only after the Enter key (the trigger) is
pressed (a sketch of this appears after these examples).
• As another example, consider a logical device such as a locator: we can move
our pointing device to the desired location and then trigger the device with
its button; the trigger causes the location to be returned to the
application program.
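A minimal sketch of request-mode input in plain C (only standard I/O is used):

#include <stdio.h>

int main(void)
{
    char name[80];
    printf("Enter a name: ");
    /* The program halts here: characters accumulate in the keyboard
       buffer (the measure) and are returned only after the Enter key
       (the trigger) is pressed. */
    scanf("%79s", name);
    printf("You typed: %s\n", name);
    return 0;
}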
2. SAMPLE MODE: In this mode, input is immediate: as soon as the function call
is encountered in the user program, the measure is returned, and no trigger is
needed.
3. EVENT MODE: In this mode, each trigger generates an event whose measure is
placed in an event queue, which the program can examine later.
Both request and sample modes are useful only when there is a single input
device from which the input is to be taken. However, in flight simulators or
computer games a variety of input devices are used, and these two modes cannot
handle them; thus, event mode is used.
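In a GLUT-based OpenGL program, event-mode input is obtained by registering callback functions; a minimal sketch (the callback names here are our own) is given below:

#include <stdio.h>
#include <stdlib.h>
#include <GL/glut.h>

/* Each trigger (a key press or a mouse click) generates an event; GLUT
   queues it and later calls the matching callback with the measure
   (the character, or the button, state, and cursor position). */

void keyboard(unsigned char key, int x, int y)
{
    if (key == 'q' || key == 27)      /* 'q' or Esc terminates */
        exit(0);
}

void mouse(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        printf("left click at (%d, %d)\n", x, y);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("event mode");
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);       /* register the event callbacks */
    glutMouseFunc(mouse);
    glutMainLoop();                   /* enter the event loop */
    return 0;
}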
Basic Positioning Methods: We can interactively choose a coordinate position with a pointing device that
records a screen location. How the position is used depends on the selected processing option. The coordinate location
could be an endpoint position for a new line segment, or it could be used to position some object; for instance, the
selected screen location could reference a new position for the center of a sphere, or the location could be used to
specify the position for a text string, which could start at that location or be centered on it.
Dragging : Another interactive positioning technique is to select an object and drag it to a new location. Using a
mouse, for instance, we position the cursor at the object position, press a mouse button, move the cursor to a new
position, and release the button. The object is then displayed at the new cursor location. Usually, the object is
displayed at intermediate positions as the screen cursor moves.
Constraints:
Any procedure for altering input coordinate values to obtain a particular orientation or alignment of an object is
called a constraint. For example, an input line segment can be constrained to be horizontal or vertical. To implement
this type of constraint, we compare the input coordinate values at the two endpoints. If the difference in the y values
of the two endpoints is smaller than the difference in the x values, a horizontal line is displayed; otherwise, a vertical
line is drawn. The horizontal-vertical constraint is useful, for instance, in forming network layouts, and it eliminates
the need for precise positioning of endpoint coordinates.
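A sketch of this horizontal-vertical constraint in C (the function name and the Point structure are our own):

#include <stdlib.h>                    /* abs() */

typedef struct { int x, y; } Point;

/* Snap the second endpoint so the segment (p1, p2) becomes exactly
   horizontal or vertical, following the comparison described above. */
Point constrainEndpoint(Point p1, Point p2)
{
    Point q = p2;
    if (abs(p2.y - p1.y) < abs(p2.x - p1.x))
        q.y = p1.y;                    /* smaller y-difference: horizontal line */
    else
        q.x = p1.x;                    /* otherwise: vertical line */
    return q;
}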
Parameter vpArray designates an integer array containing the coordinate position and size parameters for the current
viewport.
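The surrounding code is not included in these notes; the call this parameter presumably belongs to is the standard viewport query:

GLint vpArray[4];
glGetIntegerv(GL_VIEWPORT, vpArray);   /* vpArray now holds {x, y, width, height} */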
DISPLAY LISTS
The original architecture of a graphical system was based on a general-purpose
computer connected to a display. The disadvantage of this architecture was that the
system was slow and expensive. Therefore, a special-purpose computer was built,
known as a “display processor”.
The user program is processed by the host computer, which produces a compiled
list of instructions that is then sent to the display processor, where the
instructions are stored in a display memory called a “display file” or “display
list”. The display processor executes its display-list contents repeatedly at a
sufficiently high rate to produce a flicker-free image.
➔ Display lists are defined similarly to geometric primitives: glNewList()
at the beginning and glEndList() at the end delimit the definition of a display list.
➔ Each display list must have a unique identifier, an integer that is usually
given a name in the C program by means of a #define directive, with a name
appropriate for the object in the list. For example, the following code
defines a red box:
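The listing itself is not reproduced in these notes; a minimal sketch of such a display list, assuming a 2D red square, might look like this:

#define Box 1                           /* unique identifier for the display list */

glNewList(Box, GL_COMPILE);             /* compile the list; do not execute it yet */
    glColor3f(1.0, 0.0, 0.0);           /* red */
    glBegin(GL_POLYGON);
        glVertex2f(-1.0, -1.0);
        glVertex2f( 1.0, -1.0);
        glVertex2f( 1.0,  1.0);
        glVertex2f(-1.0,  1.0);
    glEnd();
glEndList();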
➔ The flag GL_COMPILE tells the system to send the list to the server but
not to display its contents. If we want an immediate display of the contents
while the list is being constructed, the GL_COMPILE_AND_EXECUTE flag
is used instead.
➔ Each time the client wishes to redraw the box on the display, it need
not resend the entire description. Rather, it can call the following function:
glCallList(Box)
➔ The box can be made to appear at different places on the monitor by changing
the projection matrix, as shown below:
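The corresponding listing is omitted in these notes; one way to do this, sketched with a simple translation before each call (the notes mention the projection matrix; a model-view translation is used in this sketch), is:

int i;
glMatrixMode(GL_MODELVIEW);
for (i = 0; i < 3; i++) {
    glLoadIdentity();
    glTranslatef(-2.0 + 2.0 * i, 0.0, 0.0);   /* a different position each time */
    glCallList(Box);                          /* re-executes the stored commands */
}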
➢ If the display list changes state (the color, for example), the existing values can
be pushed onto the attribute and matrix stacks at the start of the list; we can then
retrieve these values by popping them from the stack. Usually the following
function calls are placed at the end of the display list:
glPopAttrib();
glPopMatrix();
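A sketch of the complete pattern, assuming the list changes the current color: the existing matrix and attributes are pushed at the start of the list and popped at the end, so the client program's state is left unchanged.

glNewList(Box, GL_COMPILE);
    glPushMatrix();                     /* save the current model-view matrix */
    glPushAttrib(GL_ALL_ATTRIB_BITS);   /* save current attributes, e.g. the color */
    glColor3f(1.0, 0.0, 0.0);           /* the list changes the drawing color */
    glRectf(-1.0, -1.0, 1.0, 1.0);      /* the box itself */
    glPopAttrib();                      /* restore the attributes ... */
    glPopMatrix();                      /* ... and the matrix at the end of the list */
glEndList();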
➢ We can create multiple display lists with consecutive identifiers more easily
using:
glGenLists(number)
➢ We can display multiple display lists with a single function call:
glCallLists()
➢ For a stroke font, drawing the letter ‘I’ is fairly simple,
whereas drawing ‘O’ requires many line segments to appear sufficiently smooth.
➢ So, on average, we need more than 13 bytes per character to represent a
stroke font. The performance of the graphics system will be degraded for
applications that require large quantities of text.
➢ A more efficient strategy is to define the font once, using a display list for
each character, and then store it on the server. We assume a function OurFont(c)
that draws the ASCII character stored in the variable ‘c’.
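A sketch of this strategy (OurFont is the drawing function mentioned above; the helper names buildFont and drawString are our own):

#include <string.h>                     /* strlen() */
#include <GL/glut.h>

extern void OurFont(char c);            /* draws one ASCII character (defined elsewhere) */

static GLuint base;

void buildFont(void)
{
    int i;
    base = glGenLists(256);             /* reserve 256 consecutive list identifiers */
    for (i = 0; i < 256; i++) {
        glNewList(base + i, GL_COMPILE);
        OurFont((char) i);              /* one display list per character */
        glEndList();
    }
}

void drawString(const char *s)
{
    glListBase(base);                   /* offset added to every index passed below */
    glCallLists((GLsizei) strlen(s), GL_UNSIGNED_BYTE, (const GLubyte *) s);
}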
➢ Each face has two identical eyes, two identical ears, one nose, one mouth,
and an outline. We can specify these parts through display lists, as
given below:
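The listing is not included in these notes; a sketch of the idea, with one display list per part and a face list that calls them (the identifiers and translations are our own), is:

#define EYE     1
#define EAR     2
#define NOSE    3
#define MOUTH   4
#define OUTLINE 5
#define FACE    6

glNewList(EYE, GL_COMPILE);
    /* ... line segments that draw one eye ... */
glEndList();
/* EAR, NOSE, MOUTH, and OUTLINE are defined in the same way. */

/* The face list reuses the stored parts: two instances of the eye and
   ear lists, and one each of the nose, mouth, and outline. */
glNewList(FACE, GL_COMPILE);
    glCallList(OUTLINE);
    glPushMatrix();
        glTranslatef(-0.3, 0.4, 0.0);  glCallList(EYE);   /* left eye */
        glTranslatef( 0.6, 0.0, 0.0);  glCallList(EYE);   /* right eye */
    glPopMatrix();
    glCallList(NOSE);
    glCallList(MOUTH);
    /* the two ears are placed in the same way */
glEndList();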
Animation
Raster methods of computer animation
Design of animation sequences
Traditional animation techniques
General computer animation function
OpenGL animation procedures
Introduction:
• To 'animate' is literally 'to give life to'.
• 'Animating' is moving something which can't move itself.
• Animation adds to graphics the dimension of time which vastly increases the amount of
information which can be transmitted.
• Computer animation generally refers to any time sequence of visual changes in a picture.
• In addition to changing object positions using translations or rotations, a computer-
generated animation could display time variations in object size, color, transparency, or
surface texture.
• Two basic methods for constructing a motion sequence are
1. real-time animation and
➢ In a real-time computer-animation, each stage of the sequence is viewed as it
is created.
➢ Thus the animation must be generated at a rate that is compatible with the
constraints of the refresh rate.
2. frame-by-frame animation
➢ For a frame-by-frame animation, each frame of the motion is separately
generated and stored.
➢ Later, the frames can be recorded on film, or they can be displayed
consecutively on a video monitor in “real-time playback” mode.
Double Buffering
✓ One method for producing a real-time animation with a raster system is to employ two
refresh buffers.
✓ We create a frame for the animation in one of the buffers.
✓ Then, while the screen is being refreshed from that buffer, we construct the next frame in
the other buffer.
✓ When that frame is complete, we switch the roles of the two buffers so that the refresh
routines use the second buffer during the process of creating the next frame in the first
buffer.
✓ When a call is made to switch two refresh buffers, the interchange could be performed at
various times.
✓ The most straightforward implementation is to switch the two buffers at the end of the
current refresh cycle, during the vertical retrace of the electron beam.
✓ If a program can complete the construction of a frame within the time of a refresh cycle,
say 1/60 of a second, each motion sequence is displayed in synchronization with the screen
refresh rate.
✓ If the time to construct a frame is longer than the refresh time, the current frame is displayed
for two or more refresh cycles while the next animation frame is being generated.
✓ Similarly, if the frame construction time is 1/25 of a second, the animation frame rate is
reduced to 20 frames per second because each frame is displayed three times.
✓ Irregular animation frame rates can occur with double buffering when the frame
construction time is very nearly equal to an integer multiple of the screen refresh time;
in that case, the animation frame rate can change abruptly and erratically.
✓ One way to compensate for this effect is to add a small time delay to the program.
✓ Another possibility is to alter the motion or scene description to shorten the frame
construction time.
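A minimal double-buffered animation in OpenGL/GLUT (a sketch; the rotating square and the callback names are our own):

#include <GL/glut.h>

static GLfloat angle = 0.0;

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glRotatef(angle, 0.0, 0.0, 1.0);
    glRectf(-0.5, -0.5, 0.5, 0.5);      /* the frame is drawn into the back buffer */
    glutSwapBuffers();                  /* exchange front and back buffers */
}

void idle(void)
{
    angle += 0.5;                       /* prepare the next frame */
    if (angle > 360.0) angle -= 360.0;
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);   /* request two refresh buffers */
    glutCreateWindow("double buffering");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}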
Design of Animation Sequences
An animation sequence is typically designed with the following steps:
1. Storyboard layout
2. Object definitions
3. Key-frame specifications
4. Generation of in-between frames
✓ This approach to carrying out animations applies to other applications as well,
although some applications are exceptions and do not follow this sequence.
✓ For frame-by-frame animation, every frame of the display or scene is generated separately
and stored. Later, the frames can be recorded on film and displayed
consecutively as a movie.
✓ The storyboard is an outline of the action; it explains the motion sequence. The storyboard
consists of a set of rough sketches, or it could be a list of the basic ideas for the motion.
✓ For each participant in the action, an object definition is given. Objects are described in
terms of basic shapes, such as splines or polygons. The movements
associated with the objects are specified along with the shapes.
✓ A key frame in an animation is a detailed drawing of the scene at a certain
time in the animation sequence. Within each key frame, each object is positioned
according to the time for that frame.
✓ Some key frames are selected at extreme positions in the action; the others are spaced so
that the time interval between two consecutive key frames is not too large. More key
frames are specified for intricate motions than for simple, slowly varying motions.
✓ The intermediate frames between the key frames are the in-betweens. The medium that
we use determines the number of in-betweens required to display the animation:
film needs 24 frames per second, and graphics terminals are refreshed at a rate of 30
to 60 frames per second.
✓ Depending on the speed specified for the motion, some key frames are duplicated. A
one-minute film sequence with no duplication requires 1440 frames; with five
in-betweens for each pair of key frames, 288 key frames would be needed. We
place the key frames farther apart if the motion is not too complicated.
✓ A number of other tasks may be carried out depending upon the application requirements,
for example the synchronization of a sound track.
✓ Object movements can also be emphasized by creating preliminary actions that indicate
an anticipation of a coming motion.
General Computer-Animation Functions
✓ Some animation packages, such as Wavefront, provide special functions for
both the overall animation design and the processing of individual objects.
✓ Others are special-purpose packages for particular features of an animation, such as a
system for generating in-between frames or a system for figure animation.
✓ A set of routines is often provided in a general animation package for storing and managing
the object database. Object shapes and associated parameters are stored and updated in the
database. Other object functions include those for generating the object motion and those
for rendering the object surfaces.
✓ Another typical function set simulates camera movements. Standard camera motions are
zooming, panning, and tilting. Finally, given the specification for the key frames, the in-
betweens can be generated automatically.
Computer-Animation Languages
Key-Frame Systems
A set of in-betweens can be generated from the specification of two (or more) key frames using a
key-frame system. Motion paths can be given with a kinematic description as a set of spline curves,
or the motions can be physically based by specifying the forces acting on the objects to be animated.
For complex scenes, we can separate the frames into individual components or objects called cels
(celluloid transparencies). This term developed from cartoon animation
techniques where the background and each character in a scene were placed on a separate
transparency. Then, with the transparencies stacked in the order from background to foreground,
they were photographed to obtain the completed frame. The specified animation paths are then
used to obtain the next cel for each character, where the positions are interpolated from the key-
frame times.
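A sketch of linear in-betweening between two key frames (the Vec2 structure and the function are our own; a production system would typically interpolate along spline paths rather than straight lines):

typedef struct { float x, y; } Vec2;

/* Linearly interpolate the vertices of an object between two key frames.
   t = 0 gives key frame A, t = 1 gives key frame B, and intermediate
   values of t give the in-between frames. */
void inBetween(const Vec2 *keyA, const Vec2 *keyB, Vec2 *out, int n, float t)
{
    int i;
    for (i = 0; i < n; i++) {
        out[i].x = (1.0f - t) * keyA[i].x + t * keyB[i].x;
        out[i].y = (1.0f - t) * keyA[i].y + t * keyB[i].y;
    }
}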
Motion Capture
An alternative to determining the motion of a character computationally is to digitally record the movement
of a live actor and to base the movement of an animated character on that information. This technique, known
as motion capture or mo-cap, can be used when the movement of the character is predetermined (as in a
scripted scene). The animated character will perform the same series of movements as the live actor.
The classic motion capture technique involves placing a set of markers at strategic positions on the actor’s
body, such as the arms, legs, hands, feet, and joints. It is possible to place the markers directly on the actor,
but more commonly
they are affixed to a special skintight body suit worn by the actor. The actor is then filmed performing the
scene. Image processing techniques are then used to identify the positions of the markers in each frame of the
film, and their positions are translated to coordinates. These coordinates are used to determine the positioning
of the body of the animated character. The movement of each marker from frame to frame in the film is tracked
and used to control the corresponding movement of the animated character. Some motion capture systems
record more than just the gross movements of the parts of the actor’s body. It is possible to record even the
actor’s facial movements. Often called performance capture systems, these typically use a camera trained on
the actor’s face and small light-emitting diode (LED) lights that illuminate the face. Small photoreflective
markers attached to the face reflect the light from the LEDs and allow the camera to capture the small
movements of the muscles of the face, which can then be used to create realistic facial animation on a
computer-generated character.