
Rashtreeya Sikshana Samithi Trust

RV Institute of Technology and Management®


(Affiliated to VTU, Belagavi)

JP Nagar, Bengaluru - 560076

Department of Computer Science and Engineering

Course Name: Computer Graphics & Fundamentals of Image Processing
Course Code: 21CS63
VI Semester
2022 Scheme

Prepared By:
Dr. Deepak N A
Associate Professor,
Department of Computer Science and Engineering
RVITM, Bengaluru – 560076

Email: [email protected]
Module - 3
3.1.1 Logical Classification of Input Devices

1. Pointing Devices:
 Mouse: A hand-held device used to control the movement of a cursor on a screen.
 Trackpad/Touchpad: Commonly found on laptops, it allows users to move the cursor by
sliding their fingers across its surface.
 Trackball: Operated by rotating a ball with the hand or fingers to move the cursor.
2. Keyboard Devices:
 Standard Keyboard: A typewriter-style device with keys for entering letters, numbers, and
other characters.
 Ergonomic Keyboard: Designed to reduce muscle strain and improve comfort during
extended typing sessions.
 Virtual Keyboard: Software-based keyboards displayed on a touchscreen or projected onto
a surface.
3. Voice Input Devices:
 Microphone: Captures audio input which is then processed by speech recognition software
to convert it into text or command inputs.
4. Touch Input Devices:
 Touchscreen: Displays visual output and allows users to interact by touching the screen
directly.
 Stylus/Pen Input: A pen-like device used to write or draw directly onto a touchscreen or
graphics tablet.
5. Gesture Input Devices:
 Motion Controllers: Devices that capture hand or body movements to control on-screen
actions, often used in gaming or virtual reality applications.

3.1.2 Input Functions for Graphical Data:


Input functions for graphical data refer to the methods or mechanisms used to acquire data in a format
suitable for visualization or graphical representation. These functions can vary depending on the type
of data being collected, the source of the data, and the intended visualization techniques. Here are
some common input functions for graphical data:
 Manual Input: Users directly input data into a graphical tool or software interface. This can
involve typing data into a form, spreadsheet, or data entry software. Manual input is suitable
for small datasets or when data is generated infrequently.

 File Import: Data is imported into a graphical tool from external files. These files can be in
various formats such as CSV (Comma Separated Values), Excel spreadsheets, JSON
(JavaScript Object Notation), XML (eXtensible Markup Language), or database files. File
import allows for quick and easy visualization of large datasets stored in structured formats (a
short parsing sketch appears after this list).

 Database Query: Data is retrieved from a database using SQL (Structured Query Language)
queries. This method is commonly used when dealing with large datasets stored in relational
databases. Database queries allow for efficient retrieval of specific data subsets based on
predefined criteria.

 API Integration: Data is fetched from external sources or services using Application
Programming Interfaces (APIs). This could include accessing data from web services, IoT
(Internet of Things) devices, social media platforms, or other online data sources. API
integration enables real-time or near-real-time data visualization and analysis.

 Sensor Data Acquisition: In scenarios involving IoT or sensor networks, data is collected
from physical sensors deployed in the environment. This data could include temperature,
humidity, pressure, motion, or any other measurable quantity. Sensor data acquisition systems
typically involve hardware components for data collection and software interfaces for data
processing and visualization.

 Web Scraping: Data is extracted from websites or online sources using web scraping
techniques. This method is useful for collecting data from websites that do not provide APIs or
structured data formats. Web scraping tools parse HTML or other markup languages to extract
relevant information for visualization.

 Streaming Data Sources: In applications where data is continuously generated in real-time,


such as financial markets, social media streams, or IoT devices, streaming data sources are
used. These sources provide a constant stream of data that can be processed and visualized in
real-time or stored for later analysis.
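
As a concrete illustration of the File Import function above, the following C++ sketch reads a two-column CSV file of x,y values into memory so the points can then be handed to a plotting or rendering routine. The file name data.csv and the Point structure are assumptions made for this example; it is a minimal sketch, not a production importer.

// Minimal CSV import sketch: reads "x,y" pairs, one record per line.
// The file name and the Point struct are illustrative assumptions.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Point { double x, y; };

std::vector<Point> loadCSV(const std::string& path) {
    std::vector<Point> points;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::stringstream ss(line);
        std::string xs, ys;
        if (std::getline(ss, xs, ',') && std::getline(ss, ys, ',')) {
            points.push_back({ std::stod(xs), std::stod(ys) });  // numeric rows only
        }
    }
    return points;
}

int main() {
    std::vector<Point> data = loadCSV("data.csv");
    std::cout << "Loaded " << data.size() << " points\n";
    return 0;
}

Once loaded, such a point list can be passed to whatever visualization or rendering routine the application uses.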

3.1.3 Interactive Picture-Construction Techniques


Interactive picture-construction techniques involve creating images or graphics in a dynamic and user-
engaging manner. Here are several techniques commonly used in interactive picture construction:


1. Vector Graphics Editors:


 Bezier Curves: Users can interactively create and manipulate curves by defining control
points and handles (see the evaluation sketch after this list).
2. Raster Graphics Editors:
 Brush Tools: Users can interactively paint or draw on a canvas using different brush sizes,
shapes, and textures.
 Eraser Tool: Enables users to interactively erase parts of the image or artwork.

3. 3D Modeling Software:
 Polygon Modeling: Allows users to create and manipulate 3D models by interactively
editing individual polygons, vertices, and edges.

4. Parametric Design Tools:


 Parametric Modeling: Users can interactively adjust parameters such as dimensions,
angles, and proportions to dynamically modify the design of objects or structures.

5. Animation Software:
 Keyframe Animation: Users can interactively set keyframes to define the position, rotation,
and scale of objects over time.
 Timeline Editor: Allows users to interactively adjust the timing and duration of animations
by manipulating keyframes on a timeline.
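
To make the Bezier-curve item above concrete, the sketch below evaluates a cubic Bezier curve from four control points using the standard Bernstein form; an interactive editor would simply re-evaluate the curve whenever the user drags a control point or handle. The Point2 type and the sample control points are assumptions of this example.

// Cubic Bezier evaluation:
// P(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
#include <cstdio>

struct Point2 { float x, y; };

Point2 cubicBezier(Point2 p0, Point2 p1, Point2 p2, Point2 p3, float t) {
    float u  = 1.0f - t;
    float b0 = u * u * u;
    float b1 = 3.0f * u * u * t;
    float b2 = 3.0f * u * t * t;
    float b3 = t * t * t;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
}

int main() {
    Point2 p0{0, 0}, p1{1, 2}, p2{3, 2}, p3{4, 0};   // illustrative control points
    for (int i = 0; i <= 10; ++i) {
        float t = i / 10.0f;
        Point2 p = cubicBezier(p0, p1, p2, p3, t);
        std::printf("t = %.1f -> (%.3f, %.3f)\n", t, p.x, p.y);
    }
    return 0;
}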
These techniques empower users to actively participate in the creation and exploration of digital
images, artworks, and visualizations, fostering creativity and engagement in various domains such as
graphic design, digital art, animation, and data visualization.

3.1.4 Virtual-Reality Environments


Virtual reality (VR) environments are immersive digital spaces that simulate real-world or imagined
environments, allowing users to interact with and explore virtual worlds.

1. Head-Mounted Displays (HMDs): HMDs are worn on the head and typically consist of a
screen or screens that cover the user's field of view, creating a stereoscopic 3D effect. These
displays are often accompanied by headphones or audio systems to provide spatial audio.

2. Motion Tracking Systems: VR environments often utilize motion tracking technology to


monitor the user's movements and adjust the virtual perspective accordingly. This can include
head tracking for rotational movement and positional tracking for spatial movement.

3. Controllers: Many VR systems incorporate hand controllers or motion controllers that


allow users to interact with objects and navigate within the virtual environment. These
controllers often feature buttons, triggers, and motion sensors for intuitive interaction.

4. Spatial Audio: VR environments often incorporate spatial audio technology to create


realistic soundscapes that change based on the user's position and orientation within the virtual
space. This enhances immersion and presence in the virtual world.

5. Immersive Content: VR environments can feature a wide range of immersive content,
including interactive experiences, games, simulations, educational applications, virtual tours,
artistic creations, and more. The content is designed to engage users and provide a compelling
virtual experience.

Overall, VR environments offer a unique and immersive way for users to explore virtual worlds,
interact with digital content, and experience new forms of entertainment, education, training, and
communication.

3.1.5 OpenGL Interactive Input-Device Functions

OpenGL, a widely used graphics library, doesn't directly handle input devices. Instead, it relies on
other libraries or frameworks for input handling. However, OpenGL does provide the necessary
functionalities for rendering graphics based on input data received from input devices. Here are some
common input-related functions and concepts used in conjunction with OpenGL:

1. Windowing Systems: OpenGL applications typically run within a windowing system (e.g.,
GLFW, SDL, or GLUT). These libraries provide functions for creating windows, handling
input events, and managing the OpenGL context.

2. Event Handling: Input events, such as keyboard presses, mouse movements, and button
clicks, are handled by the windowing system and passed to the OpenGL application.


3. Mouse Input: OpenGL applications can capture mouse input events, such as mouse
movement, button presses, and scrolling.

4. Keyboard Input: Keyboard input events, such as key presses and releases, can be captured
by OpenGL applications through the windowing library's callbacks (see the sketch after this list).

5. Gamepad/Input Device Support: OpenGL applications can support input devices such as
gamepads, joysticks, and other controllers.
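
Since GLUT is listed among the windowing libraries above, the following minimal sketch shows how such a library hands keyboard and mouse events to an OpenGL program through registered callbacks. The window title and callback names are this example's own; it is a sketch of the general pattern rather than the only way to wire up input.

// Minimal GLUT sketch: keyboard and mouse callbacks feeding an OpenGL window.
#include <GL/glut.h>
#include <cstdio>
#include <cstdlib>

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

void keyboard(unsigned char key, int x, int y) {
    if (key == 27) std::exit(0);             // Esc quits
    std::printf("key '%c' pressed at (%d, %d)\n", key, x, y);
}

void mouse(int button, int state, int x, int y) {
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        std::printf("left click at (%d, %d)\n", x, y);
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(400, 300);
    glutCreateWindow("Input demo");
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);              // key-press events
    glutMouseFunc(mouse);                    // mouse-button events
    glutMainLoop();
    return 0;
}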

3.1.6 OpenGL Menu Functions

OpenGL itself does not provide functions specifically for creating menus, as it is primarily a graphics
rendering library focused on rendering 2D and 3D graphics. However, menus can be implemented in
OpenGL applications using techniques such as rendering textured polygons as buttons, detecting
mouse clicks within specific regions of the window, and responding to those clicks appropriately.

1. Rendering Menu Items: Use OpenGL functions to draw textured polygons or simple
geometric shapes (such as rectangles) to represent menu items.

2. Positioning Menu Items: Determine the positions of menu items within the application
window. Use OpenGL transformations (translations, rotations, and scaling) to position and size
the menu items appropriately.


3. Detecting Mouse Clicks: Use a windowing library such as GLFW, SDL, or GLUT to
capture mouse click events. Convert mouse coordinates to OpenGL viewport coordinates and
check if they intersect with any of the menu items.

4. Responding to Clicks: When a mouse click is detected within a menu item's region, trigger
the corresponding action. This could involve opening a submenu, executing a command,
changing a setting, or any other functionality associated with the menu item (a minimal sketch
follows this list).

5. Handling Keyboard Input: Optionally, implement keyboard shortcuts to navigate through


menus or trigger actions without using the mouse. Use keyboard input events provided by the
windowing library to detect key presses and respond accordingly.
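
Although core OpenGL has no menu calls, the GLUT library mentioned above does supply simple pop-up menu routines (glutCreateMenu, glutAddMenuEntry, glutAttachMenu) that take care of the click detection in steps 3 and 4. The sketch below attaches a right-click menu that changes the background colour; the menu labels and colour values are assumptions of this example.

// Pop-up menu sketch using GLUT's built-in menu routines.
#include <GL/glut.h>
#include <cstdlib>

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

void menuHandler(int choice) {               // receives the selected entry's value
    switch (choice) {
        case 1: glClearColor(1.0f, 0.0f, 0.0f, 1.0f); break;   // red background
        case 2: glClearColor(0.0f, 0.0f, 1.0f, 1.0f); break;   // blue background
        case 3: std::exit(0);
    }
    glutPostRedisplay();                     // redraw with the new setting
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(400, 300);
    glutCreateWindow("Menu demo");
    glutDisplayFunc(display);

    glutCreateMenu(menuHandler);             // menu tied to a callback
    glutAddMenuEntry("Red background", 1);
    glutAddMenuEntry("Blue background", 2);
    glutAddMenuEntry("Quit", 3);
    glutAttachMenu(GLUT_RIGHT_BUTTON);       // open on right click

    glutMainLoop();
    return 0;
}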

3.1.7 Designing a Graphical User Interface

Designing a graphical user interface (GUI) involves a blend of creativity, functionality, and usability.
Here's a structured approach to guide you through the process:

1. Define Purpose and Audience: Understand the purpose of the application and the needs of
its users. Define the primary goals the GUI should achieve and the target audience it will serve.

2. Research and Gather Requirements: Conduct user research to understand user


preferences, behavior, and pain points. Gather functional requirements from stakeholders to
ensure the GUI meets the application's objectives.

3. Sketch Wireframes: Start by sketching rough wireframes to visualize the layout and
structure of the GUI. Focus on key elements such as navigation menus, buttons, forms, and
content areas.

4. Create Mockups and Prototypes: Use design tools like Figma, Adobe XD, or Sketch to
create high-fidelity mockups of the GUI. Design multiple iterations, exploring different visual
styles, color schemes, and typography choices.

3.2.1 Design of Animation Sequences


Designing animation sequences involves several key steps to ensure a smooth and engaging visual
experience.

1. Storyboarding: Start by sketching out your ideas in a storyboard format. This helps you
visualize the sequence and plan the flow of action.

2. Scripting: Write a script detailing the actions, dialogues, and scene descriptions. This will
serve as a guide for animators and voice actors (if applicable).

3. Character Design: Develop the appearance and personality of your characters. Consider
their movements, expressions, and mannerisms.

4. Keyframes: Identify the key moments in the sequence and create keyframes to represent
them. These frames will serve as the foundation for the animation.

5. Timing and Pacing: Determine the timing and pacing of each action and scene transition.
This will impact the overall rhythm and feel of the animation.

6. Animating: Bring the keyframes to life by filling in the in-between frames. Pay attention to
details such as easing in and out of movements for a more natural look (a simple easing sketch
follows this list).

7. Adding Effects: Incorporate visual effects, sound effects, and music to enhance the
atmosphere and convey emotions.

8. Review and Revision: Review the animation sequence and make any necessary revisions to
improve clarity, coherence, and visual appeal.

9. Rendering: Render the final animation in the desired format and resolution.

10. Testing: Test the animation on different devices and screen sizes to ensure compatibility
and optimal viewing experience.

11. Feedback: Gather feedback from colleagues or target audience members and make further
adjustments if needed.

12. Finalization: Once satisfied with the animation, finalize it for distribution or publication.
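
As a small illustration of the easing mentioned in step 6, the sketch below blends between two keyframe values with the common smoothstep curve rather than a straight line, so the motion starts and ends gently. The keyframe values and variable names are assumptions of this example.

// Ease-in-out between two keyframe values using the smoothstep curve
// s(t) = 3t^2 - 2t^3, which begins and ends with zero velocity.
#include <cstdio>

float easeInOut(float a, float b, float t) {     // t runs from 0 to 1
    float s = t * t * (3.0f - 2.0f * t);
    return a + (b - a) * s;
}

int main() {
    float startX = 0.0f, endX = 100.0f;          // illustrative keyframe values
    for (int frame = 0; frame <= 10; ++frame) {
        float t = frame / 10.0f;
        std::printf("frame %2d: x = %6.2f\n", frame, easeInOut(startX, endX, t));
    }
    return 0;
}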

3.2.2 Traditional Animation Techniques

Traditional animation techniques refer to the methods used to create animation before the advent of
digital technology. Here are some of the key techniques:

1. Hand-Drawn Animation: This is perhaps the most classic form of animation. Each frame is
drawn by hand, typically on paper, and then traced or photographed onto transparent
celluloid sheets called "cels." The cels are then layered on top of background artwork to
create the final animation sequence.

2. Cel Animation: Cel animation involves drawing characters and objects on transparent
celluloid sheets (cels) and placing them over a static background. This technique allows for
easy manipulation of characters while keeping the background consistent.

3. Rotoscoping: Rotoscoping involves tracing over live-action footage frame by frame to


create realistic animations. It's often used to achieve lifelike movements or special effects.

4. Stop Motion Animation: In stop motion animation, physical objects or characters are
manipulated frame by frame and photographed to create the illusion of movement. Common
forms include claymation (using clay figures), puppet animation (using articulated puppets),
and object animation (using everyday objects).

5. Pencil Tests: Pencil tests are rough animations created to test the movement and timing of
characters or scenes before finalizing the artwork. They're typically done using simple line
drawings to quickly iterate and refine the animation.

3.2.3 General Computer-Animation Functions

Computer animation encompasses a wide range of techniques and functions used to create animated
content using digital tools. Here are some general functions commonly used in computer animation:

1. Modeling: Modeling involves creating 3D models of characters, objects, and environments


using specialized software. There are various techniques for modeling, including polygonal
modeling, spline modeling, and sculpting.

2. Texturing: Texturing is the process of applying surface textures and colors to 3D models to
give them a realistic appearance. This can involve creating textures from scratch or using pre-
made texture libraries.

3. Rigging: Rigging involves adding a digital skeleton (rig) to a 3D model to enable movement
and animation. This skeleton consists of bones and joints that can be manipulated to pose the
model in different ways.

4. Animation: Animation involves creating movement and motion within a digital


environment. This can include keyframe animation, where animators set key poses at specific
frames, as well as procedural animation, where movement is generated automatically based on
predefined rules.

5. Rendering: Rendering is the process of generating the final images or frames from a 3D
scene. This involves applying lighting, shading, and camera effects to create the desired look.
Rendering can be CPU-based or GPU-based, and there are various rendering engines available
with different features and capabilities.

6. Simulation: Simulation involves creating realistic physical effects such as cloth simulation,
fluid simulation, and particle effects. These effects can add depth and realism to animated
scenes and are often used in visual effects and gaming (a small particle sketch follows this list).
7. Compositing: Compositing is the process of combining multiple layers of visual elements,
such as 3D renders, live-action footage, and special effects, into a single image or sequence.
This is typically done using compositing software and involves adjusting colors, adding visual
effects, and fine-tuning the final output.
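
As a small illustration of the simulation function in item 6, the sketch below advances a handful of particles under gravity with simple Euler integration, the kind of update loop a particle effect builds on. The Particle fields, gravity constant, and time step are assumptions of this example.

// Minimal particle simulation: Euler integration under gravity.
#include <cstdio>
#include <vector>

struct Particle { float x, y, vx, vy; };

void step(std::vector<Particle>& particles, float dt) {
    const float gravity = -9.8f;                 // assumed units: metres per second squared
    for (Particle& p : particles) {
        p.vy += gravity * dt;                    // update velocity first
        p.x  += p.vx * dt;                       // then advance position
        p.y  += p.vy * dt;
    }
}

int main() {
    std::vector<Particle> particles = { {0, 0, 1.0f, 5.0f}, {0, 0, -1.0f, 4.0f} };
    for (int frame = 0; frame < 5; ++frame) {
        step(particles, 1.0f / 30.0f);           // assume 30 frames per second
        std::printf("frame %d: first particle at (%.3f, %.3f)\n",
                    frame, particles[0].x, particles[0].y);
    }
    return 0;
}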

3.2.4 Computer-Animation Languages


Computer animation involves various programming languages and frameworks, each serving different
purposes within the animation pipeline. Here are some of the commonly used languages in computer
animation:

1. Python: Python is a versatile and widely used programming language in animation


pipelines. It's commonly used for scripting tasks, automation, and tool development due to its
simplicity and readability. Python is often used in conjunction with software like Autodesk
Maya, Blender, and Houdini to automate repetitive tasks, create custom tools, and control the
animation process.

2. C++: C++ is a powerful programming language commonly used for developing animation
software and graphics engines. Many animation and visual effects applications, such as
Autodesk Maya, Pixar's RenderMan, and SideFX Houdini, are built using C++ for their core
functionality and performance-critical tasks.

3. OpenGL / WebGL: OpenGL (Open Graphics Library) is a cross-platform graphics API


commonly used for rendering 2D and 3D graphics in animation and gaming applications.
WebGL is a JavaScript API based on OpenGL that enables hardware-accelerated 3D graphics
in web browsers. Both OpenGL and WebGL are used for real-time rendering of animations
and visual effects.

4. GLSL (OpenGL Shading Language): GLSL is a high-level shading language used with
OpenGL and WebGL for writing shaders, which are small programs that run on the GPU to
control the rendering process. GLSL is used to create complex visual effects, shaders, and
materials in computer animation and real-time graphics applications.

5. HLSL (High-Level Shading Language): HLSL is a shading language developed by


Microsoft for use with Direct3D, the graphics API used in Windows-based gaming and
animation applications. HLSL is similar to GLSL but is tailored for use with Direct3D and
Microsoft's graphics technologies.

6. JavaScript: JavaScript is commonly used in web-based animation and interactive media
projects. Libraries and frameworks like Three.js and PIXI.js provide tools for creating 3D
graphics, animations, and interactive experiences using JavaScript and HTML5.

7. CUDA (Compute Unified Device Architecture): CUDA is a parallel computing platform


and programming model developed by NVIDIA for harnessing the power of GPUs for general-
purpose computing tasks. CUDA is commonly used for GPU-accelerated rendering,
simulation, and computational tasks in animation and visual effects.

3.2.5 Character Animation

Character animation involves bringing digital or physical characters to life through movement and
expression. Here's an overview of the process and techniques involved in character animation:

1. Character Design: Before animating a character, it's essential to have a well-defined design
that includes aspects like appearance, personality, and backstory. This design will influence
how the character moves and behaves.

2. Storyboarding: Storyboarding helps plan out the sequence of actions and expressions for
the character. It provides a visual guide for animators to follow and ensures the animation
flows smoothly.

3. Modeling: In 3D animation, the character is created using modeling software. This involves
shaping the character's geometry and defining its features, such as the face, body, and clothing.

4. Keyframe Animation: Keyframe animation involves setting key poses or positions for the
character at specific frames in the animation timeline. These key poses define the character's
movements and expressions throughout the animation sequence.

5. Facial Animation: Facial animation involves animating the character's facial expressions,
including movements of the eyes, eyebrows, mouth, and other facial features. This can be done
manually using keyframe animation or with techniques like blendshapes or facial motion
capture.

3.2.6 Periodic Motions


Periodic motions in computer animation refer to repetitive movements that occur over regular
intervals, often following a specific pattern or cycle. These motions can be used to animate various
elements in a scene, such as character movements, objects, or environmental effects.

In computer animation, periodic motions are often implemented using a combination of keyframe
animation, procedural techniques, and scripting. Animators may use mathematical functions,
simulation algorithms, or procedural noise patterns to generate and control periodic motions,
depending on the desired effect and level of realism required. Additionally, animation software and
frameworks often provide tools and features specifically designed to facilitate the creation and
manipulation of periodic motions in animations.
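
As a minimal sketch of the mathematical functions referred to above, the following C++ fragment drives a repetitive swing with a sine of time; the motion repeats exactly once every period. The amplitude, period, and sampling rate are assumptions of this example.

// Periodic motion sketch: x(t) = A * sin(2*pi*t / T) repeats every T seconds.
#include <cmath>
#include <cstdio>

int main() {
    const float pi        = 3.14159265f;
    const float amplitude = 2.0f;                // assumed swing of +/- 2 units
    const float period    = 1.5f;                // assumed cycle length in seconds
    for (int frame = 0; frame <= 12; ++frame) {
        float t = frame / 8.0f;                  // sample 8 frames per second
        float x = amplitude * std::sin(2.0f * pi * t / period);
        std::printf("t = %.3f s   x = %+.3f\n", t, x);
    }
    return 0;
}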

3.2.7 OpenGL Animation Procedures

OpenGL is a powerful graphics API commonly used for rendering 2D and 3D graphics in computer
animation and game development. While OpenGL itself does not provide built-in animation
procedures, it offers a framework for implementing animation through various techniques and
approaches. Here's a general overview of the procedures commonly used for animation with OpenGL:

1. Updating Model Transformations: In OpenGL, animations often involve transforming


objects in the scene over time. This can be achieved by updating the model transformation
matrices for each object in the scene. Animators typically calculate these transformations
based on factors such as position, rotation, and scale, and then apply them to the vertices of the
object's geometry.

2. Interpolation: Interpolation is commonly used to create smooth animations between
keyframes. This involves calculating intermediate values between two keyframes to create the
illusion of continuous motion. Linear interpolation (LERP) is a simple interpolation technique
where values are linearly interpolated between two endpoints. Other interpolation techniques,
such as spline interpolation or Bezier curves, can also be used for more complex animations
(see the sketch at the end of this list).

3. Frame-by-Frame Animation: Frame-by-frame animation involves rendering a sequence of


frames, with each frame representing a different state of the animation. In OpenGL, this
typically involves updating the scene and rendering each frame in a loop. This approach is
suitable for simple animations or situations where precise control over each frame is required.

4. Animating with Shaders: OpenGL shaders can be used to implement various animation
effects directly on the GPU. For example, vertex shaders can be used to deform geometry or
simulate movement, while fragment shaders can be used to create dynamic color changes or
visual effects. By manipulating shader parameters over time, complex animations can be
achieved with OpenGL.
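
The sketch below ties points 1-3 together: a GLUT idle callback advances time, linear interpolation (LERP) computes a square's position between two keyframe endpoints, and the model transformation is updated with glTranslatef before each redraw into a double-buffered window. The keyframe positions, speed, and window size are assumptions of this example.

// Double-buffered GLUT animation: LERP between two keyframe positions.
#include <GL/glut.h>

float startX = -0.8f, endX = 0.8f;               // assumed keyframe endpoints
float t = 0.0f;                                  // animation parameter in [0, 1]

float lerp(float a, float b, float u) { return a + (b - a) * u; }

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(lerp(startX, endX, t), 0.0f, 0.0f);   // update the model transform
    glBegin(GL_QUADS);                           // a small square to animate
        glVertex2f(-0.1f, -0.1f);
        glVertex2f( 0.1f, -0.1f);
        glVertex2f( 0.1f,  0.1f);
        glVertex2f(-0.1f,  0.1f);
    glEnd();
    glutSwapBuffers();                           // present the finished frame
}

void idle() {
    t += 0.005f;                                 // advance the animation each pass
    if (t > 1.0f) t = 0.0f;                      // loop back to the first keyframe
    glutPostRedisplay();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); // double buffering for smooth motion
    glutInitWindowSize(500, 500);
    glutCreateWindow("LERP animation");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}

The same loop structure extends naturally to easing functions or spline interpolation by replacing the lerp call.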
