Multimedia Final Laban Il 01
5. AUDIO PROCESSING
• Click on the “File” menu located at the top left corner of the WavePad Interface.
Select “Open” from the dropdown menu to import the audio file. Alternatively, we
can directly drag and drop the audio file.
a. Amplifying
b. Fade-in
c. Fade-out
d. Pitch changes
e. Speed changes
• Access the amplify tool by navigating to “Effects” in the menu bar and selecting
“Amplify” from the dropdown menu.
• If necessary, select the portion of the audio you want to amplify by using the
selection tool.
• Adjust the amplification level using the provided slider or input field. Increase the
amplification level for a louder sound and decrease it for a softer sound.
Figure: Amplifying
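In signal terms, amplification simply scales every sample value. The following is a minimal pure-Python sketch of that idea (the 16-bit clamp and any gain value used are illustrative assumptions, not WavePad settings):

```python
def amplify(samples, gain):
    """Scale each 16-bit PCM sample by `gain`: > 1 is louder, < 1 is softer."""
    # clamp to the 16-bit range so loud peaks do not wrap around
    return [max(-32768, min(32767, int(s * gain))) for s in samples]
```

For example, `amplify([1000, -2000, 3000], 1.5)` raises every sample by half, while a gain of 0.5 halves them.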
Figure: Fade in
• Follow the same two initial steps as for fade-in, but choose “Fade Out”.
• Adjust the duration of the fade out effect if necessary.
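Under the hood, a fade-out is just a volume ramp applied to the tail of the clip. A minimal sketch with a linear ramp (an assumption; editors may also offer curved fades):

```python
def fade_out(samples, fade_len):
    """Ramp the last `fade_len` samples linearly down to silence."""
    out = list(samples)
    n = len(out)
    for i in range(max(0, n - fade_len), n):
        # weight falls toward 0.0, reaching silence at the last sample
        out[i] = int(out[i] * (n - 1 - i) / fade_len)
    return out
```

A fade-in is the mirror image: the weight rises from 0.0 to 1.0 over the first samples instead of the last ones.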
• In the Pitch dialog box, you’ll see options to adjust the pitch. You can increase or
decrease the pitch by adjusting the semitones or cents value. Positive values raise
the pitch, while negative values lower it.
Figure: Pitch
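The semitone value maps to a frequency ratio by the equal-temperament rule: each semitone multiplies the frequency by the twelfth root of two. A small sketch of that relationship:

```python
def pitch_ratio(semitones):
    """Frequency ratio for a pitch change of `semitones` (100 cents each).

    Positive values raise the pitch, negative values lower it.
    """
    return 2 ** (semitones / 12)
```

For instance, +12 semitones doubles the frequency (one octave up) and -12 semitones halves it.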
Changing Speed:
• Select the portion of the audio you want to adjust the speed for, if necessary, using
the selection tool.
• In the Time/Pitch dialog box, you’ll see options to adjust the speed. You can
increase or decrease the speed by adjusting the percentage value. Increasing the
percentage will speed up the audio, while decreasing it will slow it down.
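The percentage control can be pictured as a resampling factor. Here is a hedged nearest-neighbour sketch; real editors interpolate between samples and may preserve pitch separately:

```python
def change_speed(samples, percent):
    """Resample: percent > 100 plays faster (fewer samples), < 100 slower."""
    factor = percent / 100.0
    new_len = int(len(samples) / factor)
    # nearest-neighbour pick; the clip gets shorter when sped up
    return [samples[min(len(samples) - 1, int(i * factor))]
            for i in range(new_len)]
```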
• Use the selection tool to highlight the section of the audio you want to keep.
• Use the selection tool to highlight the section of the audio you want to cut.
• Go to “Edit” in the menu bar.
• Ensure that the audio files are aligned properly on the timeline.
• Select “Save Mix As” or “Export Mix” from the dropdown menu.
• Choose the desired file format and location to save the merged audio file on the
computer.
To mix the audio files together:
• Adjust the volume levels of each audio track to achieve the desired balance.
• Select “Text to Speech” from the dropdown menu. This will open the Text
to Speech dialog box.
• In the dialog box, enter or paste the text you want to convert into speech into
the provided text field.
• Choose the desired voice from the available options.
Take any video of your choice. Trim unwanted sections, add some text to be displayed
at the beginning and end of the video. Provide a brief explanation of the steps you
followed and the software used.
Video editing is the process of manipulating and rearranging video footage to create a new
work. It involves selecting and organizing clips, adding effects, transitions, and sound, and
refining the video to achieve a specific flow or narrative.
• Applying visual effects or filters to enhance the look of the video. This can
include color correction, brightness/contrast adjustments, slow-motion,
fast-forward, and other creative effects.
5. Audio Editing:
• Syncing audio with video, adjusting audio levels, and adding sound effects
or music. Good audio editing can greatly enhance the overall quality of the
video.
6. Titles and Text:
This can include opening titles, lower thirds, subtitles, and closing credits.
7. Color Correction and Grading:
• Color Correction: Adjusting the colors in your video to make them look
natural and consistent.
• Color Grading: Applying a specific color palette to the video to create a
certain mood or atmosphere.
8. Motion Graphics:
• Adding animated graphical elements, such as animated titles or overlays,
to the video.
9. Exporting:
• The final step where the edited video is exported to a specific format (e.g.,
MP4, MOV) suitable for the intended platform, such as social media,
YouTube, or broadcast.
• Locate the section of the video you want to trim on the timeline.
• Click on the beginning of the section you want to trim to set the starting point.
• Click on the end of the section you want to trim to set the ending point.
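Conceptually, both keeping a section and cutting one out are slicing operations between the start and end points. A toy sketch treating the video as a list of frames:

```python
def keep_section(frames, start, end):
    """Keep only the highlighted section (start inclusive, end exclusive)."""
    return frames[start:end]

def cut_section(frames, start, end):
    """Remove the highlighted section and join what remains."""
    return frames[:start] + frames[end:]
```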
7. Write a program to morph one image into another. Explain the concept of image
morphing and the algorithm used in your implementation. Provide before and after
images as examples.
Image Morphing: Concept and Explanation: Image Morphing is the process of smoothly
transforming one image into another. This technique is commonly used in visual effects in
films, animations, and image processing applications. The goal is to create a smooth
transition between two images over a sequence of frames, so that the start image gradually
deforms and blends into the target image.
Steps in Image Morphing:
1. Feature Correspondence: Mark corresponding feature points (such as the eyes and
mouth) in both images; these correspondences drive the warp.
2. Warping: Adjust the shape of the first image to match the shape of the second
image. This involves applying a geometric transformation to the first image based
on the feature correspondences.
3. Cross-Dissolving: Blend the color and intensity of corresponding pixels from both
images. This step gradually shifts the pixel values from those of the first image to
those of the second image.
Algorithm Used
A common algorithm for image morphing is Delaunay Triangulation combined with
Affine Transformation.
1. Delaunay Triangulation:
o The images are divided into triangles based on the feature points. These
triangles are used as a mesh for the morphing process.
o Corresponding triangles in both images are identified.
2. Affine Transformation:
o An affine transformation is computed for each triangle, which warps it from
the first image's triangle shape to the second image's triangle shape.
o This transformation is applied to all pixels within the triangle.
3. Cross-Dissolve:
o After warping, the pixel values of corresponding triangles from both images
are blended together based on a certain weight.
4. Iterate:
o This process is repeated for each frame, gradually increasing the weight
from 0 to 1 (where 0 represents the first image and 1 represents the second
image).
Program:
import numpy as np
import cv2

# read the two images to morph between
img = cv2.imread(r"D:\BCA8thsem\Multimedia System\morphing\Assets\anil.jpg")
img2 = cv2.imread(r"D:\BCA8thsem\Multimedia System\morphing\Assets\digvijay.jpg")

# left eye, right eye and mouth feature points in each image
pts1 = np.array([[218, 240], [295, 240], [250, 383]], np.float32)
pts2 = np.array([[248, 245], [345, 270], [281, 366]], np.float32)
pts11 = np.zeros((3, 2), np.float32)
pts22 = np.zeros((3, 2), np.float32)

dis = 100.0        # number of frames (iterations)
piece = 1.0 / dis  # per-frame step of the blend weight
for i in range(0, int(dis)):
    for j in range(0, 3):
        disx = (pts1[j, 0] - pts2[j, 0]) * -1
        disy = (pts1[j, 1] - pts2[j, 1]) * -1
        # movement of the first image's feature points for this frame
        movex1 = (disx / dis) * (i + 1)
        movey1 = (disy / dis) * (i + 1)
        # remaining movement of the second image's feature points
        movex2 = disx - movex1
        movey2 = disy - movey1
        pts11[j, 0] = pts1[j, 0] + movex1
        pts11[j, 1] = pts1[j, 1] + movey1
        pts22[j, 0] = pts2[j, 0] - movex2
        pts22[j, 1] = pts2[j, 1] - movey2
    # warp both images toward the intermediate feature positions
    mat1 = cv2.getAffineTransform(pts1, pts11)
    mat2 = cv2.getAffineTransform(pts2, pts22)
    dst1 = cv2.warpAffine(img, mat1, (img.shape[1], img.shape[0]),
                          None, None, cv2.BORDER_REPLICATE)
    dst2 = cv2.warpAffine(img2, mat2, (img.shape[1], img.shape[0]),
                          None, None, cv2.BORDER_REPLICATE)
    # cross-dissolve the two warped frames (weight grows from 0 to 1)
    alpha = piece * (i + 1)
    frame = cv2.addWeighted(dst1, 1.0 - alpha, dst2, alpha, 0)
    cv2.imshow("morph", frame)
    cv2.waitKey(30)
1. Write a C program for a bouncing ball animation using the graphics.h header file.
Theory:
1. initgraph(&gd, &gm, "C:\\TC\\BGI");
• Purpose: Initializes the graphics mode. It prepares the system for drawing
graphics by setting up the display mode and the graphics driver.
• Parameters: &gd and &gm: Pointers to integers that detect and set the
graphics driver and mode.
• "C:\\TC\\BGI": The path to the graphics driver files (specific to Turbo C++).
2. getmaxx() and getmaxy()
• Purpose: Retrieve the maximum horizontal (x) and vertical (y) coordinates
of the screen.
• Usage in Program: These functions are used to calculate the mid-point of
the screen and to check if the ball has hit the top or bottom edge.
3. setcolor(color);
• Usage in Program: This function is used to set the color of the ball before
drawing it. The color is chosen from an array that cycles through red, white,
blue, and purple.
4. setfillstyle(SOLID_FILL, color);
• Purpose: Sets the fill pattern and color for the shapes that are drawn.
• Usage in Program: This function is used to fill the ball with a solid color,
matching the current drawing color.
5. circle(x, y, radius);
• Usage in Program: This function draws the ball at the current x and y
coordinates with a radius of 30 pixels.
6. floodfill(x, y, color);
• Purpose: Fills an enclosed area with the current fill pattern and color.
• Usage in Program: After the circle is drawn, this function fills it with the
color set by setfillstyle.
7. delay(milliseconds);
• Usage in Program: The delay(50); function is used to pause the program for
50 milliseconds.
8. cleardevice();
• Usage in Program: This function clears the screen before the next frame is
drawn. Without this, the previous frames would remain on the screen,
leading to a trail effect rather than smooth animation.
9. kbhit()
• Purpose: Checks whether a key has been pressed on the keyboard.
• Usage in Program: This function controls the animation loop; the loop keeps running
until the user presses a key, after which closegraph() is called to close the graphics
mode, clean up, and return to text mode.
How the Animation Works:
• The ball is drawn at an initial position and then cleared from the screen after a short
delay.
• The ball's position is updated, and it is drawn again in the new position, creating the
illusion of movement.
• The direction of movement is reversed when the ball reaches the top or bottom of
the screen.
• The ball's color is changed with each frame, enhancing the animation with visual
variety.
Program:
#include <stdio.h>
#include <graphics.h>
#include <dos.h>

int main() {
    int gd = DETECT, gm, y = 50, flag = 0;
    initgraph(&gd, &gm, "C:\\TC\\BGI");
    while (!kbhit()) {
        cleardevice();
        setcolor(RED);
        setfillstyle(SOLID_FILL, RED);
        circle(getmaxx() / 2, y, 30);
        floodfill(getmaxx() / 2, y, RED);
        /* reverse direction at the top or bottom edge */
        if (y >= getmaxy() - 30 || y <= 30)
            flag = !flag;
        y += flag ? -5 : 5;
        delay(50);
    }
    closegraph();
    return 0;
}
Output:
Program
#include <graphics.h>
#include <conio.h>
#include <dos.h>

int main() {
    // Initialize the graphics mode and driver
    int gd = DETECT, gm, y = 50, flag = 0;
    initgraph(&gd, &gm, NULL); // No need to specify the path
    // Animation loop
    while (!kbhit()) {
        // Clear the previous frame
        cleardevice();
        // Draw the ball, then move it, bouncing at the edges
        circle(getmaxx() / 2, y, 30);
        if (y >= getmaxy() - 30 || y <= 30)
            flag = !flag;
        y += flag ? -5 : 5;
        delay(50);
    }
    closegraph();
    return 0;
}
Theory:
Run-Length Encoding (RLE) is a simple form of data compression that reduces the size of
a file by encoding consecutive identical data elements (called "runs") as a single data value
and a count of how many times it appears. It is a lossless compression method: a run of 4
to 259 identical bytes can be compressed into 3 bytes. In this encoding, frequently (at least
4 times) repeating bytes are replaced, according to their occurrence, by three bytes, where
the first byte is the repeating byte, the second is an exclamation mark, and the third is the
number of occurrences.
Program
def run_length_encoding(input_string):
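The stub above can be completed as a character-based sketch of the scheme described in the theory: runs of at least four identical characters become the character, an exclamation mark, and the run length, while shorter runs pass through unchanged. (Decimal string counts are used here for readability; the byte-level scheme would store the count as a single byte.)

```python
def run_length_encoding(input_string):
    """Encode runs of >= 4 identical characters as char + '!' + count."""
    encoded = ""
    i = 0
    while i < len(input_string):
        run = 1
        while i + run < len(input_string) and input_string[i + run] == input_string[i]:
            run += 1
        if run >= 4:                      # long run: 3-part token
            encoded += input_string[i] + "!" + str(run)
        else:                             # short run: copy literally
            encoded += input_string[i] * run
        i += run
    return encoded
```

For example, `run_length_encoding("aaaaabbbcccc")` returns `"a!5bbbc!4"`: the run of five a's and four c's are compressed, while the run of three b's is too short and is copied through.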
Huffman Coding
Huffman Coding is an efficient algorithm used for lossless data compression. It is based on
the idea of assigning shorter binary codes to more frequently occurring characters in a data
set, and longer codes to less frequent characters. The technique ensures that no code is a
prefix of another, allowing for unique decodability.
1. Frequency of Characters:
• The algorithm first counts how often each character occurs in the data to
be compressed.
2. Building the Tree:
• Nodes: Each character and its frequency are treated as a node in the tree.
• Tree Building: The algorithm builds a binary tree by repeatedly merging the
two nodes with the smallest frequencies, creating a new node whose
frequency is the sum of the two. This process continues until there is only
one node left, which becomes the root of the tree.
3. Code Assignment:
• Left and Right Paths: In the resulting binary tree, the path to each character
from the root determines its binary code. A left branch typically represents
a 0, and a right branch represents a 1.
• Unique Codes: Each character gets a unique binary code based on its
position in the tree. Frequently occurring characters have shorter codes
because they are closer to the root, while less frequent characters have
longer codes.
4. Prefix-Free Property:
• No code is the prefix of another, so the encoded bit stream can be decoded
unambiguously.
5. Encoding:
• To encode data, each character in the data is replaced with its
corresponding Huffman code.
Program
import heapq
from collections import defaultdict

# Node class for Huffman Tree
class Node:
    def __init__(self, char, freq):
        self.char = char
        self.freq = freq
        self.left = None
        self.right = None

    # Define comparator method for priority queue (min heap)
    def __lt__(self, other):
        return self.freq < other.freq

# Function to build the Huffman Tree
def build_huffman_tree(char_freq):
    heap = [Node(char, freq) for char, freq in char_freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Remove the two nodes of highest priority (lowest frequency)
        left = heapq.heappop(heap)
        right = heapq.heappop(heap)
        # Create a new internal node with these two nodes as children and
        # with a frequency equal to the sum of the two nodes' frequencies
        merged = Node(None, left.freq + right.freq)
        merged.left = left
        merged.right = right
        # Add the new node to the heap
        heapq.heappush(heap, merged)
    # The remaining node is the root of the Huffman Tree
    return heap[0]
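The program above stops once the tree is built. The code-assignment walk described earlier (left branch = 0, right branch = 1) could be sketched as follows; `generate_codes` is a hypothetical helper, not part of the original listing, and it assumes nodes with `char`, `left`, and `right` attributes like the Node class above:

```python
def generate_codes(node, prefix="", codes=None):
    """Walk the Huffman tree, appending '0' going left and '1' going right."""
    if codes is None:
        codes = {}
    if node is None:
        return codes
    if node.char is not None:
        # leaf node: record this character's code ('0' covers a one-node tree)
        codes[node.char] = prefix or "0"
    generate_codes(node.left, prefix + "0", codes)
    generate_codes(node.right, prefix + "1", codes)
    return codes
```

Encoding then replaces each character with `codes[char]`; the prefix-free property guarantees the resulting bit stream decodes uniquely.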
8. Create a timeline-based animation using a software tool of your choice. Explain the
concept of timeline-based animation and describe the steps involved in creating the
animation. Provide screenshots or video of the final animation.
Concept of Timeline-Based Animation: Timeline-based animation involves controlling
objects over time. It's a type
of animation where keyframes are placed on a timeline, and objects or images move or
change appearance as time progresses. Each element can have its position, size, opacity, or
other properties modified at different points on the timeline.
• Add elements such as text, images, icons, or shapes from Canva’s library.
4. Animate Elements:
• Select an element on the canvas and click the “Animate” button in the
toolbar.
• Customize the timing of the animation, duration, and delay to ensure it fits
into the timeline correctly.
5. Control the Timing:
• For more detailed timing, click on “Timing” and adjust the entry and exit
time of each element.
• Arrange them on the timeline to create smooth transitions.
• Use the “Play” button to preview your animation. Ensure that all elements
are correctly timed.
• Adjust the timing, animation effects, and placement as necessary.
• Once you're satisfied with your animation, click on “Download” and choose
the file type (MP4 for video, GIF for short animations).
• Export your animation and save it.
Output:
Lab: 9 Date:2081/3/21
• Timeline Animation: The timeline allows you to control the sequence of animation
events, determining when elements enter and exit the scene.
• Tweening Animation: Tweening (short for "in-betweening") is the process of
generating intermediate frames between two keyframes to create smooth motion.
This gives the illusion of an object smoothly moving, scaling, rotating, etc., from
one state to another.
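The in-between frames that tweening generates are just interpolated property values between two keyframes. A minimal linear sketch (easing curves would replace the straight-line step):

```python
def tween(start, end, n_frames):
    """Linearly interpolate a property (position, opacity, ...) across
    n_frames frames, including both keyframes (n_frames >= 2)."""
    step = (end - start) / (n_frames - 1)
    return [start + step * i for i in range(n_frames)]
```

For example, tweening an x-position from 0 to 100 over 5 frames yields [0.0, 25.0, 50.0, 75.0, 100.0].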
Steps:
iv. Click the drop-down arrow in the middle of the Timeline panel and select 'Create
Video Timeline'.
v. Add a video layer from Layer > Video Layers > New Blank Video Layer.
vi. You can draw different shapes in each frame of the video layer.
vii. There must be changes of shape, position, opacity, and size, so it looks like
animation.
viii. The frame rate can be changed from the menu in the Timeline panel.
ix. To preview the animation, click the play button on the Timeline.
x. The animation can be exported in the desired format.
Output:
Before After
Lab:10 Date:2081/3/21
11. Create an animated GIF. Explain the process of creating GIF animations and
describe the tools used. Provide the final GIF and a brief explanation of each frame.
GIF Animation:
GIF Animation refers to the use of the GIF (Graphics Interchange Format) file format to
create a sequence of images that play in a loop, producing an animated effect. GIF
animations are widely used on the web due to their simplicity and wide support across
different platforms and devices.
1. Frame-Based Animation:
• A GIF animation stores a sequence of image frames that are displayed one
after another to produce motion.
2. Looping:
• GIFs can be set to loop indefinitely, meaning the sequence of frames will
continuously repeat. This feature makes GIFs particularly effective for
short, repetitive animations like icons, memes, or small visual effects.
3. Limited Color Palette:
• GIFs support a maximum of 256 colors per frame, making them less suitable
for high-resolution images or animations with complex color schemes.
However, this limitation helps keep file sizes small, which is beneficial for
web use.
4. Transparency:
1. Import any image, or follow the same steps as used to create the file for
frame-by-frame animation.
2. At the top of the Timeline panel, click the settings icon and select loop playback
to make the GIF loop continuously.
3. Click the 'Play' button to preview the animation.
4. For export, go to File > Export > Save for Web (Legacy).
3. Frame 3: The ball touches the bottom of the screen and is slightly squashed to
mimic impact.
4. Frame 4: The ball rises halfway up the screen again.
11. Create simple animated banner ads. Describe the design principles and tools used
to create animated banners. Provide examples of the final banner ads and explain the
animation techniques used.
Banner Ads: Banner ads are rectangular or square graphic advertisements displayed on
web pages. They are a form of online advertising that uses visual elements to attract users'
attention and encourage them to click through to the advertiser's website, landing page, or
a specific product offering.
1. Visual Content:
• Banner ads primarily rely on images, graphics, and text to convey a message.
They can also include multimedia elements like animations, videos, or
interactive components.
2. Sizes and Formats:
• Banner ads come in standard pixel dimensions and web formats, from static
images to animated GIF or HTML5 creatives.
3. Call to Action (CTA):
• A banner ad usually contains a clear and concise CTA, such as "Click Here,"
"Learn More," or "Shop Now." The CTA is designed to prompt the user to
take immediate action.
4. Linking to Destination:
• When users click on a banner ad, they are redirected to a specific URL, such
as a product page, promotional landing page, or the advertiser’s homepage.
5. Targeting and Placement:
• Looping Options.
Frame-by-Frame Animation:
1. Individual Frames:
• Each frame is a static image, slightly different from the one before it. When
played in sequence, these frames create the illusion of continuous
movement.
2. Frame Rate:
• The frame rate, typically measured in frames per second (fps), determines how
quickly the frames play back and therefore how smooth the motion appears.
• The technique allows for a high degree of creativity and control over every
aspect of the animation. The animator can precisely dictate the movement,
timing, and expression of characters or objects.
5. Keyframes and In-betweens:
• Keyframes define the major poses or positions, and the in-between frames
fill in the motion between them.
6. Timing and Spacing:
• The timing (how long a frame is shown) and spacing (the position of objects
between frames) are crucial to conveying motion and emotion in frame-by-frame
animation. Proper timing and spacing create the illusion of acceleration,
deceleration, weight, and impact.
Steps:
4. Click the drop-down arrow in the middle of the Timeline panel and select 'Create
Video Timeline'.
7. You can draw different shapes in each frame of the video layer.