CG Module 3 Notes

Module 3 Input and Interaction and Computer Animation

MODULE 3: Interactive Input Methods and Graphical User Interfaces


INTERACTION
• In the field of computer graphics, interaction refers to the manner in which
the application program communicates with the input and output devices of
the system.
• For example, an image varying in response to input from the user.
• OpenGL does not directly support interaction, in order to maintain
portability. However, OpenGL provides the GLUT library, which supports
interaction with the keyboard, mouse, etc. and hence enables interaction.
The GLUT library is compatible with many window systems, such as the
X Window System, Microsoft Windows and Mac OS, and hence indirectly
preserves the portability of OpenGL.

INPUT DEVICES
✓ Input devices are the devices which provide input to the computer
graphics application program. Input devices can be categorized into two
types:

1. Physical input devices


2. Logical input devices

PHYSICAL INPUT DEVICES


✓ Physical input devices are input devices that have a particular
hardware architecture.
✓ The two major categories of physical input devices are:
• Keyboard devices, such as the standard keyboard, flexible keyboard, handheld
keyboard, etc. These are used to provide character input such as letters,
numbers and symbols.
• Pointing devices, such as the mouse, trackball and light pen. These are used
to specify a position on the computer screen.
1. KEYBOARD: A general keyboard has a set of character keys. The ASCII value
is used to represent each character, i.e. the keyboard interacts with the program
by passing the ASCII value of the key pressed by the user. Input can be given
to the program either as a single character or as an array of characters.


2. MOUSE AND TRACKBALL: These are pointing devices used to specify a
position. The mouse and trackball interact with the application program by passing
the position at which a button was clicked. Both devices are similar in use and
construction. In these devices, the motion of the ball is converted to signals sent
back to the computer by a pair of encoders inside the device. These encoders
measure motion in two orthogonal directions. The values passed by the pointing
device can be treated as positions and converted to a 2-D location in either screen
or world coordinates. These devices are relative-positioning devices because
changes in the position of the ball yield a position in the user program.
3. DATA TABLET: It provides absolute positioning. It has rows and columns
of wires embedded under its surface. The position of the stylus is determined
through electromagnetic interactions between signals travelling through the wires
and sensors in the stylus.
4. LIGHT PEN: It consists of a light-sensing device such as a photocell. The
light pen is held against the front of the CRT. When the electron beam strikes the
phosphor, light is emitted from the CRT. If the emitted light exceeds a threshold,
the light-sensing device of the light pen sends a signal to the computer specifying
the position. The major disadvantage is the difficulty of obtaining a position that
corresponds to a dark area of the screen.
5. JOYSTICK: The motion of the stick in two orthogonal directions is encoded,
interpreted as two velocities, and integrated to identify a screen location. The
integration implies that if the stick is left in its resting position, there is no change
in the cursor position, and the faster the stick is moved from the resting position,
the faster the screen location changes. Thus, the joystick is a variable-sensitivity
device. Its advantage is that it is designed using mechanical elements such as
springs and dampers, which offer resistance as the user pushes it. This
mechanical feel makes it suitable for applications such as flight simulators and
game controllers.

6. SPACE BALL: It is a three-dimensional input device which looks like a
joystick with a ball on the end of the stick.


LOGICAL INPUT DEVICES


➔ These are characterized by their high-level interface with the application
program rather than by their physical characteristics.
➔ Consider the following fragment of C code:
int x;
scanf("%d", &x);
printf("%d", x);
➔ The above code reads and then writes an integer. Although we run this
program on a workstation, providing input from the keyboard and seeing output
on the display screen, the use of scanf() and printf() requires no knowledge
of the properties of physical devices such as keyboard codes or the resolution
of the display.

➔ These are logical functions that are defined by how they handle input or
output character strings from the perspective of the C program.
➔ From the logical-device perspective, input comes from inside the application
program. The two major characteristics that describe the logical behavior of input
devices are:
• the measurements that the device returns to the user program, and
• the time at which the device returns those measurements.

The API defines six classes of logical input devices, which are described below:

1. STRING (a device for specifying text input): A string device provides the ASCII
values of input characters to the user program. This logical device is usually
implemented by means of a physical keyboard.
2. LOCATOR (a device for specifying one coordinate position): A locator device
provides a position in world coordinates to the user program. It is usually
implemented by means of a pointing device such as a mouse or trackball.
3. PICK (a device for selecting a component of a picture): A pick device returns the
identifier of an object on the display to the user program. It is usually
implemented with the same physical device as the locator but has a separate
software interface to the user program. In OpenGL, we can use a process of
selection to accomplish picking.
4. CHOICE (a device for selecting a menu option): A choice device allows the user
to select one of a discrete number of options. In OpenGL, we can use the various
widgets provided by the window system. A widget is a graphical interactive
component provided by the window system or a toolkit; widgets include
menus, scrollbars and graphical buttons. For example, a menu with n selections
acts as a choice device, allowing the user to select one of n alternatives.
5. VALUATOR (a device for specifying a scalar value): A valuator provides analog
input to the user program; on some graphical systems there are boxes or dials to
provide the value.
6. STROKE (a device for specifying a set of coordinate positions): A stroke device
returns an array of locations. For example, pressing a mouse button starts the
transfer of data into a specified array, and releasing the button ends the transfer.

INPUT MODES
Input devices can provide input to an application program in terms of two
entities:
1. The measure of a device is what the device returns to the application
program.
2. The trigger of a device is a physical input on the device with which the user
can send a signal to the computer.
Example 1: The measure of a keyboard is a single character or an array of
characters, whereas the trigger is the Enter key.
Example 2: The measure of a mouse is the position of the cursor, whereas the
trigger is a press of a mouse button.


The application program can obtain the measure and trigger in three distinct
modes:
1. REQUEST MODE: In this mode, the measure of the device is not returned to
the program until the device is triggered.
• For example, consider a typical C program which reads character input
using scanf(). When the program needs the input, it halts at the scanf()
statement and waits while the user types characters at the terminal. The
data are placed in a keyboard buffer (the measure), whose contents are
returned to the program only after the Enter key (the trigger) is pressed.
• As another example, consider a logical device such as a locator: we can move
our pointing device to the desired location and then trigger the device with
its button; the trigger causes the location to be returned to the application
program.

2. SAMPLE MODE: In this mode, input is immediate. As soon as the function
call in the user program is executed, the measure is returned; hence no trigger
is needed.

Both request and sample modes are useful only when there is a single input
device from which input is to be taken. However, in applications such as
flight simulators or computer games, a variety of input devices are used and
these modes cannot handle them. Thus, event mode is used.


3. EVENT MODE: This mode can handle multiple interactions.

• Suppose that we are in an environment with multiple input devices, each with
its own trigger and each running a measure process.
• Whenever a device is triggered, an event is generated. The device measure,
including an identifier for the device, is placed in an event queue.
• If the queue is empty, the application program waits until an event
occurs. If there is an event in the queue, the program can look at the first
event's type and then decide what to do.

Another approach is to associate a function with a specific type of event; such a
function is called a callback.
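
As a hedged illustration (not part of the original notes), the following minimal GLUT program sketches event mode in C: callback functions are registered for the mouse and keyboard triggers, and glutMainLoop() repeatedly takes events from the queue and dispatches them. The window title and the callback bodies are placeholders.

#include <GL/glut.h>
#include <stdlib.h>

/* Invoked when a mouse event is taken from the event queue. */
void mouse(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        glutPostRedisplay();              /* request a redraw of the window */
    if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
        exit(0);                          /* terminate the program */
}

/* Invoked for ASCII key events; (x, y) is the mouse position at the key press. */
void keyboard(unsigned char key, int x, int y)
{
    if (key == 'q' || key == 'Q')
        exit(0);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("event mode demo");  /* placeholder window title */
    glutDisplayFunc(display);
    glutMouseFunc(mouse);                 /* register mouse callback */
    glutKeyboardFunc(keyboard);           /* register keyboard callback */
    glutMainLoop();                       /* enter the event loop */
    return 0;
}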

Basic Positioning Methods: We can interactively choose a coordinate position with a pointing device that
records a screen location. How the position is used depends on the selected processing option. The coordinate location
could be an endpoint position for a new line segment, or it could be used to position some object; for instance, the
selected screen location could specify a new position for the center of a sphere, or it could specify the position for a
text string, which could start at that location or be centered on it.

Dragging : Another interactive positioning technique is to select an object and drag it to a new location. Using a
mouse, for instance, we position the cursor at the object position, press a mouse button, move the cursor to a new
position, and release the button. The object is then displayed at the new cursor location. Usually, the object is
displayed at intermediate positions as the screen cursor moves.

Constraints:
Any procedure for altering input coordinate values to obtain a particular orientation or alignment of an object is
called a constraint. For example, an input line segment can be constrained to be horizontal or vertical. To implement
this type of constraint, we compare the input coordinate values at the two endpoints. If the difference in the y values
of the two endpoints is smaller than the difference in the x values, a horizontal line is displayed; otherwise, a vertical
line is drawn, as in the sketch below. The horizontal-vertical constraint is useful, for instance, in forming
network layouts, and it eliminates the need for precise positioning of endpoint coordinates.
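
A minimal sketch of the horizontal-vertical constraint described above, assuming a hypothetical drawLine(x1, y1, x2, y2) helper that renders a line segment in screen coordinates:

#include <stdlib.h>   /* for abs() */

/* Apply a horizontal/vertical constraint to an input line segment. */
void constrainedLine(int x1, int y1, int x2, int y2)
{
    if (abs(y2 - y1) < abs(x2 - x1))
        drawLine(x1, y1, x2, y1);   /* y-difference is smaller: draw a horizontal line */
    else
        drawLine(x1, y1, x1, y2);   /* otherwise: draw a vertical line */
}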


OpenGL Interactive Input-Device Functions


1. GLUT Mouse Functions
2. GLUT Keyboard Functions
3. GLUT Tablet Functions
4. GLUT Spaceball Functions
5. GLUT Button-Box Function
6. GLUT Dials Function

1.GLUT Mouse Functions
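
The detailed descriptions of the GLUT mouse functions appear as figures in the original notes. As a hedged sketch only, the fragment below shows the usual glutMouseFunc()/glutMotionFunc() interface and uses it to implement the dragging technique described earlier; the global variables and the y-coordinate flip are illustrative assumptions.

int dragging = 0;                /* nonzero while a drag is in progress */
int objX = 100, objY = 100;      /* illustrative object position in window coordinates */

/* Button callback: a left-button press starts a drag, a release ends it. */
void mouse(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON)
        dragging = (state == GLUT_DOWN);
}

/* Motion callback: called while a button is held down and the cursor moves. */
void motion(int x, int y)
{
    if (dragging) {
        objX = x;
        objY = glutGet(GLUT_WINDOW_HEIGHT) - y;   /* GLUT reports y from the top of the window */
        glutPostRedisplay();                      /* show the object at intermediate positions */
    }
}

/* registration, typically in main():
       glutMouseFunc(mouse);
       glutMotionFunc(motion);                    */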


2.GLUT KEYBOARD Functions
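
As with the mouse functions, the keyboard material in the original notes is in figure form. A hedged fragment showing the standard GLUT keyboard callbacks (ASCII keys via glutKeyboardFunc(), function/arrow keys via glutSpecialFunc()) might look like this; it assumes <GL/glut.h> and <stdlib.h> are included.

/* ASCII keys: the key's character code plus the mouse position are passed in. */
void keyboard(unsigned char key, int x, int y)
{
    if (key == 27)                 /* Escape key */
        exit(0);
}

/* Non-ASCII keys (arrows, function keys) arrive through the special callback. */
void specialKeys(int key, int x, int y)
{
    if (key == GLUT_KEY_LEFT)  { /* e.g. move an object left  */ }
    if (key == GLUT_KEY_RIGHT) { /* e.g. move an object right */ }
    glutPostRedisplay();
}

/* registration: glutKeyboardFunc(keyboard); glutSpecialFunc(specialKeys); */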


3.GLUT TABLET Functions
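
The tablet material is also given as figures. For completeness, a hedged sketch of the GLUT tablet callbacks is shown below; note that GLUT delivers these events only when a supported tablet is attached.

/* Absolute stylus/puck position on the tablet surface. */
void tabletMotion(int x, int y) { /* use (x, y) as an absolute position */ }

/* Tablet button press/release events. */
void tabletButton(int button, int state, int x, int y) { /* handle the button event */ }

/* registration: glutTabletMotionFunc(tabletMotion); glutTabletButtonFunc(tabletButton); */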


OpenGL Picking Operations


Parameter vpArray designates an integer array containing the coordinate position and size parameters for the current
viewport.
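
The picking discussion in the original notes is given as figures; the sentence above refers to the viewport array that is passed to gluPickMatrix(). A hedged sketch of OpenGL selection-mode picking, in which drawObjects() and processHits() are hypothetical application routines, is:

#define BUFSIZE 64

void pick(int x, int y)           /* x, y: mouse position in window coordinates */
{
    GLuint selectBuf[BUFSIZE];
    GLint hits, vpArray[4];

    glGetIntegerv(GL_VIEWPORT, vpArray);      /* current viewport position and size */
    glSelectBuffer(BUFSIZE, selectBuf);       /* buffer that will receive the hit records */
    glRenderMode(GL_SELECT);                  /* switch to selection mode */

    glInitNames();                            /* initialize the name stack */
    glPushName(0);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    /* 5 x 5 pixel pick window centred on the cursor; y is flipped to OpenGL coordinates */
    gluPickMatrix((GLdouble) x, (GLdouble) (vpArray[3] - y), 5.0, 5.0, vpArray);
    gluOrtho2D(0.0, 1.0, 0.0, 1.0);           /* illustrative 2-D viewing volume */

    drawObjects();                            /* hypothetical: redraw the scene, labelling
                                                 each object with glLoadName()            */

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glFlush();

    hits = glRenderMode(GL_RENDER);           /* back to normal rendering; returns hit count */
    processHits(hits, selectBuf);             /* hypothetical: examine the hit records */
}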


OpenGL Menu Functions
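
The OpenGL menu material is likewise given as figures. As a hedged sketch, the GLUT pop-up menu below acts as a choice device: each entry returns an integer value to the registered callback. The labels, values and window title are placeholders.

#include <GL/glut.h>
#include <stdlib.h>

void display(void) { glClear(GL_COLOR_BUFFER_BIT); glFlush(); }

/* Called with the value of the menu entry selected by the user. */
void menuHandler(int id)
{
    if (id == 1) glutPostRedisplay();
    if (id == 2) exit(0);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("menu demo");            /* placeholder window title */
    glutDisplayFunc(display);
    glutCreateMenu(menuHandler);              /* create a menu and register its callback */
    glutAddMenuEntry("redraw", 1);            /* entry label and the value it returns */
    glutAddMenuEntry("quit", 2);
    glutAttachMenu(GLUT_RIGHT_BUTTON);        /* pop the menu up on the right mouse button */
    glutMainLoop();
    return 0;
}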


Designing a Graphical User Interface


• The User Dialogue
• Windows and Icons
• Accommodating Multiple Skill Levels
• Consistency
• Minimizing Memorization
• Feedback


DISPLAY LISTS
The original architecture of a graphical system was based on a general-purpose
computer connected to a display.

At that time, the disadvantage was that the system was slow and expensive. Therefore,
a special-purpose computer was built, known as the "display processor".

The user program is processed by the host computer, which produces a compiled
list of instructions that is then sent to the display processor, where the
instructions are stored in a display memory called a "display file" or "display
list". The display processor executes its display-list contents repeatedly at a
sufficiently high rate to produce a flicker-free image.

DEFINITION AND EXECUTION OF DISPLAY LISTS

➔ Display lists are defined similarly to geometric primitives: glNewList()
at the beginning and glEndList() at the end delimit a display list.
➔ Each display list must have a unique identifier, an integer that is usually
a macro defined in the C program by means of a #define directive, giving an
appropriate name to the object in the list. For example, the following code
defines a red box:
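
(The original listing appears as a figure; the following is a hedged reconstruction, with the identifier BOX and the square's coordinates chosen for illustration.)

#define BOX 1                          /* unique integer identifier for the list */

glNewList(BOX, GL_COMPILE);            /* send the list to the server, do not execute it yet */
    glBegin(GL_POLYGON);
        glColor3f(1.0, 0.0, 0.0);      /* red */
        glVertex2f(-1.0, -1.0);
        glVertex2f( 1.0, -1.0);
        glVertex2f( 1.0,  1.0);
        glVertex2f(-1.0,  1.0);
    glEnd();
glEndList();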


➔ The flag GL_COMPILE instructs the system to send the list to the server but
not to display its contents. If we want an immediate display of the contents
while the list is being constructed, the GL_COMPILE_AND_EXECUTE flag
is used instead.

➔ Each time the client wishes to redraw the box on the display, it need
not resend the entire description. Rather, it can call the following function:
glCallList(BOX);

➔ The box can be made to appear at different places on the monitor by changing
the projection matrix, as sketched below:
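
(The original code is a figure; this is a hedged reconstruction with illustrative window limits. Shifting the 2-D clipping window each pass makes the same list appear at different screen positions.)

int i;
for (i = 0; i < 5; i++) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-2.0 + i, 2.0 + i, -2.0, 2.0);   /* shift the clipping window each pass */
    glCallList(BOX);                             /* the same list appears at a new position */
}
glFlush();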

➔ OpenGL provides an API to retain such state information by using a stack: a data
structure in which the item placed most recently is removed first (LIFO).
➔ We can save the present values of the attributes and the matrices by pushing
them onto the stack; usually the following function calls are placed at the
beginning of the display list:
glPushAttrib(GL_ALL_ATTRIB_BITS);
glPushMatrix();

➢ We can retrieve these values by popping them from the stack; usually the
following function calls are placed at the end of the display list:
glPopAttrib();
glPopMatrix();
➢ We can create multiple lists with consecutive identifiers more easily
using:
glGenLists(number)
➢ We can execute multiple display lists with a single function call:
glCallLists()
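
A short hedged sketch of these two calls: glGenLists() reserves a block of consecutive list identifiers, and glCallLists() executes several lists whose offsets from the current list base are taken from an array.

GLuint base;
GLubyte offsets[3] = { 0, 1, 2 };          /* offsets of the lists to execute */

base = glGenLists(3);                      /* reserve 3 consecutive, unused list identifiers */
/* ... define lists base, base + 1 and base + 2 with glNewList()/glEndList() ... */

glListBase(base);                          /* offsets passed to glCallLists() are added to this base */
glCallLists(3, GL_UNSIGNED_BYTE, offsets); /* execute the three lists with one call */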

TEXT AND DISPLAY LISTS


➢ There are two types of text that can be generated: raster text and stroke text.
➢ For example, consider a raster text character drawn as an 8 x 13 pattern of
bits; it takes 13 bytes to store each character.
➢ If we define a stroke font using only line segments, each character
requires a different number of lines.
➢ Drawing the letter 'I' is fairly simple, whereas drawing 'O' requires many
line segments to make it sufficiently smooth.
➢ So, on average, we need more than 13 bytes per character to represent a
stroke font, and the performance of the graphics system will be degraded for
applications that require large quantities of text.
➢ A more efficient strategy is to define the font once, using a display list for
each character, and then store the font on the server. Suppose we have a function
OurFont() that draws the ASCII character stored in the variable c, as in the
sketch below.
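
(A hedged sketch of this font strategy, assuming the application-supplied OurFont() routine described above and the usual <GL/glut.h> and <string.h> headers.)

GLuint base;                                   /* identifier of the first list in the font */

void setupFont(void)
{
    int c;
    base = glGenLists(256);                    /* one display list per ASCII code */
    for (c = 0; c < 256; c++) {
        glNewList(base + c, GL_COMPILE);
        OurFont((char) c);                     /* application routine that draws character c */
        glEndList();
    }
    glListBase(base);                          /* offsets in glCallLists() are added to this base */
}

void drawText(const char *s)
{
    /* one call executes the list for every character in the string */
    glCallLists((GLsizei) strlen(s), GL_UNSIGNED_BYTE, (const GLubyte *) s);
}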


DISPLAY LIST AND MODELING


➢ A display list can call other display lists. Therefore, display lists are powerful
tools for building hierarchical models that can incorporate relationships among
the parts of a model.
➢ Consider a simple face-modeling system that can produce simple face
images.
➢ Each face has two identical eyes, two identical ears, one nose, one mouth
and an outline. We can specify these parts through display lists, as sketched
below.
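
(The original listing is a figure; the reconstruction below is a hedged sketch with illustrative identifiers and transformations, and with the drawing of each part omitted.)

#define EYE     1
#define NOSE    2
#define MOUTH   3
#define OUTLINE 4
#define FACE    5

/* Part lists: each draws one feature centred at the origin (bodies omitted here). */
glNewList(EYE, GL_COMPILE);
    /* ... draw one eye ... */
glEndList();
/* ... similar lists for NOSE, MOUTH and OUTLINE ... */

/* The face list calls the part lists, reusing the EYE list twice. */
glNewList(FACE, GL_COMPILE);
    glCallList(OUTLINE);
    glPushMatrix();
        glTranslatef(-0.3f, 0.4f, 0.0f);   glCallList(EYE);    /* left eye  */
        glTranslatef( 0.6f, 0.0f, 0.0f);   glCallList(EYE);    /* right eye */
    glPopMatrix();
    glCallList(NOSE);
    glCallList(MOUTH);
glEndList();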


Animation
• Raster methods of computer animation
• Design of animation sequences
• Traditional animation techniques
• General computer animation functions
• OpenGL animation procedures

Introduction:
• To 'animate' is literally 'to give life to'.
• 'Animating' is moving something which can't move itself.
• Animation adds to graphics the dimension of time which vastly increases the amount of
information which can be transmitted.
• Computer animation generally refers to any time sequence of visual changes in a picture.
• In addition to changing object positions using translations or rotations, a
computer-generated animation could display time variations in object size, color,
transparency, or surface texture.
• Two basic methods for constructing a motion sequence are
1. real-time animation and
➢ In a real-time computer-animation, each stage of the sequence is viewed as it
is created.
➢ Thus the animation must be generated at a rate that is compatible with the
constraints of the refresh rate.
2. frame-by-frame animation
➢ For a frame-by-frame animation, each frame of the motion is separately
generated and stored.
➢ Later, the frames can be recorded on film, or they can be displayed
consecutively on a video monitor in “real-time playback” mode.


Raster Methods for Computer Animation


➢ We can create simple animation sequences in our programs using real-time methods.
➢ We can produce an animation sequence on a raster-scan system one frame at a time, so that
each completed frame could be saved in a file for later viewing.
➢ The animation can then be viewed by cycling through the completed frame sequence, or
the frames could be transferred to film.
➢ If we want to generate an animation in real time, however, we need to produce the motion
frames quickly enough so that a continuous motion sequence is displayed.
➢ Because the screen display is generated from successively modified pixel values in the
refresh buffer, we can take advantage of some of the characteristics of the raster screen-
refresh process to produce motion sequences quickly.

Double Buffering
✓ One method for producing a real-time animation with a raster system is to employ two
refresh buffers.
✓ We create a frame for the animation in one of the buffers.
✓ Then, while the screen is being refreshed from that buffer, we construct the next frame in
the other buffer.
✓ When that frame is complete, we switch the roles of the two buffers so that the refresh
routines use the second buffer during the process of creating the next frame in the first
buffer.
✓ When a call is made to switch two refresh buffers, the interchange could be performed at
various times.
✓ The most straightforward implementation is to switch the two buffers at the end of the
current refresh cycle, during the vertical retrace of the electron beam.
✓ If a program can complete the construction of a frame within the time of a refresh cycle,
say 1/60 of a second, each motion sequence is displayed in synchronization with the screen
refresh rate.
✓ If the time to construct a frame is longer than the refresh time, the current frame is displayed
for two or more refresh cycles while the next animation frame is being generated.


✓ Similarly, if the frame construction time is 1/25 of a second, the animation frame rate is
reduced to 20 frames per second because each frame is displayed three times.
✓ Irregular animation frame rates can occur with double buffering when the frame
construction time is very nearly equal to an integer multiple of the screen refresh time;
in that case the animation frame rate can change abruptly and erratically.
✓ One way to compensate for this effect is to add a small time delay to the program.
✓ Another possibility is to alter the motion or scene description to shorten the frame
construction time.
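
A hedged sketch of double buffering with GLUT (one way to realize the OpenGL animation procedures listed at the start of this part): the display mode requests two refresh buffers, each frame is built in the back buffer, and glutSwapBuffers() interchanges the buffers. The rotating square and the window title are illustrative.

#include <GL/glut.h>

static GLfloat angle = 0.0f;

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glPushMatrix();
    glRotatef(angle, 0.0f, 0.0f, 1.0f);        /* simple animated transformation */
    glBegin(GL_POLYGON);                       /* illustrative square */
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.5f,  0.5f);
        glVertex2f(-0.5f,  0.5f);
    glEnd();
    glPopMatrix();
    glutSwapBuffers();                         /* interchange front and back buffers */
}

void idle(void)
{
    angle += 1.0f;                             /* advance the motion for the next frame */
    glutPostRedisplay();                       /* build the next frame in the back buffer */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);   /* request two refresh buffers */
    glutCreateWindow("double buffering");          /* placeholder window title */
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}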

Generating Animations Using Raster Operations


➢ We can also generate real-time raster animations for limited applications using block
transfers of a rectangular array of pixel values.
➢ A simple method for translating an object from one location to another in the xy plane is to
transfer the group of pixel values that define the shape of the object to the new location,
as sketched below.
➢ Sequences of raster operations can be executed to produce real-time animation for either
two-dimensional or three-dimensional objects, so long as we restrict the animation to
motions in the projection plane.
➢ Then no viewing or visible-surface algorithms need be invoked.
➢ We can also animate objects along two-dimensional motion paths using color-table
transformations.
➢ Here we predefine the object at successive positions along the motion path and assign the
successive blocks of pixel values to color-table entries.
➢ The pixels at the first position of the object are set to a foreground color, and the pixels at
the other object positions are set to the background color.
➢ The animation is then accomplished by changing the color-table values so that the
object color at successive positions along the animation path becomes the foreground color
as the preceding position is set to the background color.
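
One hedged way to realize the block transfer mentioned above in OpenGL (not the only possibility) is to read the rectangular block of pixel values and write it back at the new raster position; the coordinates are assumed to be window coordinates under a matching 2-D projection.

#include <stdlib.h>

/* Copy a w x h block of pixels from (xs, ys) to (xd, yd) in the frame buffer. */
void moveBlock(int xs, int ys, int xd, int yd, int w, int h)
{
    GLubyte *block = (GLubyte *) malloc((size_t) w * h * 3);      /* RGB, one byte per component */
    if (block == NULL) return;

    glReadPixels(xs, ys, w, h, GL_RGB, GL_UNSIGNED_BYTE, block);  /* read the block out */
    glRasterPos2i(xd, yd);                                        /* destination lower-left corner */
    glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, block);          /* write the block back */

    free(block);
}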

Design of Animation Sequences


➢ An animation sequence is, in general, designed in the following steps:

1. Storyboard layout
2. Object definitions.
3. Key-frame specifications
4. Generation of in-between frames.
✓ This approach to carrying out animation applies to most applications, although some
applications are exceptional cases and do not follow this sequence.
✓ For frame-by-frame animation, every frame of the scene is generated separately and
stored. Later, the frames can be recorded on film or displayed consecutively as a movie.
✓ The storyboard is an outline of the action. It defines the motion sequence as a set of rough
sketches, or it could be a list of the basic ideas for the motion.
✓ An object definition is given for each participant in the action. Objects are described in
terms of basic shapes, such as splines or polygons, and the movements associated with
each object are specified along with its shape.
✓ A key frame is a detailed drawing of the scene at a certain time in the animation sequence.
Within each key frame, each object is positioned according to the time for that frame.
✓ Some key frames are chosen at extreme positions in the action; the others are spaced so
that the time interval between consecutive key frames is not too large. More key frames
are specified for intricate motions than for simple, slowly varying motions.
✓ The intermediate frames between the key frames are the in-betweens. The medium we use
determines the number of in-betweens required to display the animation: film requires
24 frames per second, and graphics terminals are refreshed at the rate of 30 to 60 frames
per second.
✓ Depending on the speed specified for the motion, some key frames are duplicated. For a
one-minute film sequence with no duplication, 1440 frames are needed; with five
in-betweens for each pair of key frames, 288 key frames are required. We can space the
key frames a bit farther apart if the motion is not too complicated.
✓ A number of other tasks may also be carried out, depending upon the application, for
example synchronization with a sound track.


Traditional Animation Techniques


✓ Film animators use a variety of methods for depicting and emphasizing motion
sequences. These include object deformations, spacing between animation frames,
motion anticipation and follow-through, and action focusing.

✓ One of the most important techniques for simulating acceleration effects, particularly
for non-rigid objects, is squash and stretch. This technique is used to emphasize the
acceleration and deceleration of a bouncing ball: as the ball accelerates, it begins to
stretch; when the ball hits the floor and stops, it is first compressed (squashed) and then
stretched again as it accelerates and bounces upwards.

✓ Another technique used by film animators is timing, which refers to the spacing between
motion frames. A slower-moving object is represented with more closely spaced frames,
and a faster-moving object is displayed with fewer frames over the path of the motion.

✓ Object movements can also be emphasized by creating preliminary actions that indicate
an anticipation of a coming motion.

General Computer-Animation Functions


✓ Typical animation functions include managing object motions, generating views of
objects, producing camera motions, and generating the in-between frames.
✓ Some animation packages, such as Wavefront, provide special functions for both the
overall animation design and the processing of individual objects.
✓ Others are special-purpose packages for particular features of an animation, such as a
system for generating in-between frames or a system for figure animation.
✓ A general animation package often provides a set of routines for storing and managing
the object database. Object shapes and associated parameters are stored and updated in the
database. Other object functions include those for generating the object motion and those
for rendering the object surfaces.
✓ Another typical function set simulates camera movements. Standard camera motions are
zooming, panning, and tilting. Finally, given the specification for the key frames, the
in-betweens can be generated automatically.

Computer-Animation Languages


Key-Frame Systems
A set of in-betweens can be generated from the specification of two (or more) key frames using a
key-frame system. Motion paths can be given with a kinematic description as a set of spline curves,
or the motions can be physically based by specifying the forces acting on the objects to be animated.
For complex scenes, we can separate the frames into individual components or objects called cels
(celluloid transparencies). This term developed from cartoon animation
techniques where the background and each character in a scene were placed on a separate
transparency. Then, with the transparencies stacked in the order from background to foreground,
they were photographed to obtain the completed frame. The specified animation paths are then
used to obtain the next cel for each character, where the positions are interpolated from the key-
frame times.
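
As a hedged illustration of the simplest in-betweening rule (the notes also mention spline and physically based motion paths), the fragment below linearly interpolates a single vertex position between two key frames.

typedef struct { float x, y; } Point2;

/* Return the in-between position at time t, where p1 is the position at key-frame
   time t1 and p2 the position at key-frame time t2 (t1 <= t <= t2, t1 != t2). */
Point2 inBetween(Point2 p1, float t1, Point2 p2, float t2, float t)
{
    float u = (t - t1) / (t2 - t1);        /* normalized parameter: 0 at key 1, 1 at key 2 */
    Point2 p;
    p.x = (1.0f - u) * p1.x + u * p2.x;
    p.y = (1.0f - u) * p1.y + u * p2.y;
    return p;
}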


Motion Capture
An alternative to determining the motion of a character computationally is to digitally record the movement
of a live actor and to base the movement of an animated character on that information. This technique, known
as motion capture or mo-cap, can be used when the movement of the character is predetermined (as in a
scripted scene). The animated character will perform the same series of movements as the live actor.

The classic motion-capture technique involves placing a set of markers at strategic positions on the actor's
body, such as the arms, legs, hands, feet, and joints. It is possible to place the markers directly on the actor,
but more commonly they are affixed to a special skintight body suit worn by the actor. The actor is then filmed
performing the scene. Image-processing techniques are used to identify the positions of the markers in each
frame of the film, and their positions are translated to coordinates. These coordinates are used to determine
the positioning of the body of the animated character. The movement of each marker from frame to frame in
the film is tracked and used to control the corresponding movement of the animated character.

Some motion-capture systems record more than just the gross movements of the parts of the actor's body; it
is possible to record even the actor's facial movements. Often called performance-capture systems, these
typically use a camera trained on the actor's face and small light-emitting diode (LED) lights that illuminate
it. Small photoreflective markers attached to the face reflect the light from the LEDs and allow the camera
to capture the small movements of the facial muscles, which can then be used to create realistic facial
animation on a computer-generated character.
