
Multimedia Systems - Unit-III - Notes

This document covers the fundamentals of animation and video in multimedia systems, detailing principles, techniques, and various methods of creating animations such as cel and computer animation. It also discusses video technology, including digital versus analog formats, video compression, and the importance of understanding video characteristics for effective multimedia projects. The document emphasizes the need for careful planning and execution in both animation and video to enhance multimedia presentations.


MULTIMEDIA SYSTEMS

UNIT-III

Animation
Animation makes static presentations come alive. It is visual change over time
and can add great power to multimedia projects. Carefully planned, well-executed
animation can make a dramatic difference in a multimedia project. Animation is created
from drawn pictures, while video is created from real-time visuals.

Principles of Animation
Animation is the rapid display of a sequence of images of 2-D artwork or model
positions in order to create an illusion of movement. It is an optical illusion of motion due
to the phenomenon of persistence of vision, and can be created and demonstrated in a
number of ways. The most common method of presenting animation is as a motion
picture or video program, although several other forms of presenting animation also exist

Animation is possible because of a biological phenomenon known as persistence of


vision and a psychological phenomenon called phi. An object seen by the human eye
remains chemically mapped on the eye’s retina for a brief time after viewing. Combined
with the human mind’s need to conceptually complete a perceived action, this makes it
possible for a series of images that are changed very slightly and very rapidly, one after the
other, to seemingly blend together into a visual illusion of movement. The following shows
a few cels, or frames, of a rotating logo. When the images are progressively and rapidly
changed, the arrow of the compass is perceived to be spinning.

Television video builds 30 entire frames or pictures every second; the speed with which each
frame is replaced by the next one makes the images appear to blend smoothly into
movement. To make an object travel across the screen while it changes its shape, just
change the shape and also move (translate) it a few pixels for each frame.
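The per-frame translate-and-redraw idea can be sketched as a short loop that computes where the object sits in each frame (the screen width, frame count, and per-frame step below are all hypothetical):

```python
# Sketch: slide an object across the screen by recomputing its x position
# for every frame; each displayed frame moves it a fixed number of pixels.
# All values (30 frames, 0..290 pixels of travel) are illustrative.
def frame_positions(start_x, end_x, frames):
    """Return the object's x position for each of `frames` frames."""
    step = (end_x - start_x) / (frames - 1)
    return [round(start_x + i * step) for i in range(frames)]

positions = frame_positions(0, 290, 30)   # 10 pixels of travel per frame
```

Displayed rapidly enough, these small per-frame displacements read as smooth motion, which is exactly the persistence-of-vision effect described above.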

Animation Techniques
When you create an animation, organize its execution into a series of logical steps. First,
gather up in your mind all the activities you wish to provide in the animation; if it is
complicated, you may wish to create a written script with a list of activities and required
objects. Choose the animation tool best suited for the job. Then build and tweak your
sequences; experiment with lighting effects. Allow plenty of time for this phase when you
are experimenting and testing. Finally, post-process your animation, doing any special
rendering and adding sound effects.

Cel Animation
The term cel derives from the clear celluloid sheets that were used for drawing each frame,
which have been replaced today by acetate or plastic. Cels of famous animated cartoons have
become sought-after, suitable-for-framing collector’s items.

Cel animation artwork begins with keyframes (the first and last frame of an action). For
example, when an animated figure of a man walks across the screen, he balances the weight of
his entire body on one foot and then the other in a series of falls and recoveries, with the
opposite foot and leg catching up to support the body. The animation techniques made
famous by Disney use a series of progressively different drawings on each frame of movie
film, which plays at 24 frames per second. A minute of animation may thus require as many
as 1,440 separate frames.

Computer Animation
Computer animation programs typically employ the same logic and procedural concepts as
cel animation, using layer, keyframe, and tweening techniques, and even borrowing from the
vocabulary of classic animators. On the computer, paint is most often filled or drawn with tools
using features such as gradients and anti-aliasing. The word inks, in computer animation
terminology, usually refers to special methods for computing RGB pixel values, providing edge
detection, and layering so that images can blend or otherwise mix their colors to produce special
transparencies, inversions, and effects.

The primary difference among animation software programs is in how much must be
drawn by the animator and how much is automatically generated by the software. In 2-D
animation, the animator creates an object and describes a path for the object to follow;
the software then takes over, actually creating the animation on the fly as the program is
viewed by the user. In 3-D animation, the animator puts his effort into creating models of
individual objects and designing the characteristics of their shapes and surfaces.
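The tweening idea above can be sketched with plain linear interpolation; the keyframe values and in-between count are invented, and real tools also offer easing curves rather than only straight-line motion:

```python
# Sketch of keyframe tweening: the animator supplies two keyframes and the
# software interpolates ("tweens") the in-between values (illustrative only).
def tween(key_a, key_b, n_inbetweens):
    """Linearly interpolate n_inbetweens values between two keyframe values."""
    frames = []
    for i in range(1, n_inbetweens + 1):
        t = i / (n_inbetweens + 1)          # interpolation parameter in (0, 1)
        frames.append(key_a + t * (key_b - key_a))
    return frames

# Between keyframes at x = 0 and x = 100, generate 3 in-between positions.
inbetweens = tween(0.0, 100.0, 3)           # [25.0, 50.0, 75.0]
```

The same interpolation can drive position, scale, rotation, or color, which is why the keyframe-plus-tween model generalizes so well.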

Kinematics
It is the study of the movement and motion of structures that have joints, such as a walking man.

Inverse kinematics, a feature of high-end 3-D programs, is the process by which you link objects
such as hands to arms and define their relationships and limits. Once those relationships
are set, you can drag these parts around and let the computer calculate the result.
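A toy forward-kinematics calculation for a two-link limb makes the idea concrete (the link lengths and angles are invented); inverse kinematics solves the reverse problem, finding the joint angles that put the hand at a desired spot:

```python
import math

# Forward kinematics for a two-link "arm" (shoulder + elbow), a toy
# illustration of jointed-structure motion; all numbers are made up.
def hand_position(upper_len, fore_len, shoulder_deg, elbow_deg):
    """Return (x, y) of the hand given joint angles in degrees."""
    a1 = math.radians(shoulder_deg)
    a2 = math.radians(shoulder_deg + elbow_deg)   # elbow angle is relative
    x = upper_len * math.cos(a1) + fore_len * math.cos(a2)
    y = upper_len * math.sin(a1) + fore_len * math.sin(a2)
    return x, y

x, y = hand_position(10, 10, 0, 90)   # upper arm level, forearm bent up
```

Dragging the hand in an inverse-kinematics tool asks the software to search for the `shoulder_deg` and `elbow_deg` that reproduce the dragged position, subject to the joint limits you defined.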
Morphing
Morphing is a popular effect in which one image transforms into another. Morphing
applications and other modeling tools that offer this effect can perform the transition not
only between still images but often between moving images as well.

The morphed images were built at a rate of 8 frames per second, with each
transition taking a total of 4 seconds.

Some products that offer morphing features are as follows:

o Black Belt’s Easy Morph and WinImages,

o Human Software’s Squizz

o Valis Group’s Flo, MetaFlo, and MovieFlo.
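The color-blending half of a morph can be sketched as a cross-dissolve between two tiny grayscale "images" (a rough stand-in: real morphing also warps geometry, and all pixel values here are invented):

```python
# A crude stand-in for morphing: a cross-dissolve between two equal-sized
# grayscale "images" (flat lists of 0..255 pixel values). This sketch only
# blends colors; a true morph also moves pixels along warp lines.
def dissolve(img_a, img_b, t):
    """Blend two images; t=0 gives img_a, t=1 gives img_b."""
    return [round((1 - t) * a + t * b) for a, b in zip(img_a, img_b)]

frame_count = 8 * 4             # 8 frames/second for 4 seconds = 32 frames
mid = dissolve([0, 0, 255, 255], [255, 255, 0, 0], 0.5)
```

Generating one `dissolve` frame for each of the 32 time steps reproduces the 4-second, 8-frames-per-second transition described above.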

Animation File Formats


Some file formats are designed specifically to contain animations, and they can be
ported among applications and platforms with the proper translators.

Director *.dir, *.dcr

AnimationPro *.fli, *.flc

3D Studio Max *.max

SuperCard and Director *.pics

CompuServe *.gif

Flash *.fla, *.swf


Making Animations That Work
Animation catches the eye and makes things noticeable. But, like sound,
animation quickly becomes trite if it is improperly applied. Unless your
project has a backbone of movie-like, animated imagery, use animation
carefully (and sparingly) like spice to achieve the greatest impact. Your
screens may otherwise become busy and “noisy.”
Multimedia authoring systems typically provide tools to simplify
creating animations within that authoring system, and they often have a
mechanism for playing the special animation files created by dedicated
animation software. Today, the most widely used tool for creating
multimedia animations for Macintosh and Windows environments and for
the Web is Adobe’s Flash. Flash directly supports several 2½-D features,
including z-axis positioning, automatic sizing and perspective adjustment,
and kinematics. External libraries can extend Flash’s capabilities:
open-source Papervision3D (http://blog.papervision3d.org) provides
extensive support for true 3-D modeling and animation; Figure 5-5
shows GreenSock’s TweenMax (www.greensock.com/tweenmax) providing
sophisticated tweening capabilities within Flash.

A Rolling Ball

First, create a new, blank image file that is 100 × 100 pixels, and fill it with
a sphere.
Create a new layer in Photoshop, and place some white text on this
layer at the center of the image.
Make the text spherical using Photoshop’s “Spherize” distortion filter,
and save the result.
To animate the sphere by rolling it across the screen, you first need
to make a number of rotated images of the sphere. Rotate the image in
45-degree increments to create a total of eight images, rotating a full circle of 360 degrees.

For a realistic rolling effect, the circumference (calculated at pi times
100, or about 314 pixels) is divided by 8 (yielding about 40 pixels). As each
image is successively displayed, the ball is moved 40 pixels along a line.
Being where the rubber meets the road, this math applies when you roll
any round object in a straight line perpendicular to your line of sight.
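The arithmetic above is easy to reproduce:

```python
import math

# The rolling-ball arithmetic from the text: a 100-pixel-diameter ball's
# circumference, divided by 8 rotated images, gives the distance the ball
# must travel per displayed frame for the rotation to look like rolling.
diameter = 100
images = 8
circumference = math.pi * diameter      # about 314 pixels
step = circumference / images           # about 39.3, i.e. roughly 40 pixels
```

If the per-frame travel does not match the circumference divided by the number of rotated images, the ball appears to skid or spin in place instead of rolling.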

A Bouncing Ball

With the simplest tools, you can make a bouncing ball to animate your
web site using GIF89a, an image format that allows multiple images to
be put into a single file and then displayed as an animation in a web
browser or presentation program that recognizes the format. The individual
frames that make up the animated GIF can be created in any
paint or image-processing program, but it takes a specialized program
to put the frames together into a GIF89a animation. As with the rolling ball example,
you simply need to flash a ball on the computer screen rapidly and in a
different place each time to make it bounce up and down. And as with the
rolling ball, where you should compute the circumference of the ball and
divide by the number of images to determine how far it rolls each time
it flashes, there are some commonsense computations to consider with a
bouncing ball, too.

Gravity makes your bouncing ball accelerate on its downward course


and decelerate on its upward course (when it moves slower and slower
until it actually stops and then accelerates downward again). As Galileo
discovered while dropping feathers and rocks from the Leaning Tower
of Pisa, a beach ball and a golf ball accelerate downward at the same rate
until they hit the ground. But the real world of Italy is full of air, so the
feather falls gently while the rock pounds dirt. It is in this real world that
you should compose your animations, tempering them always with com-
monsense physics to give them the ring of truth.
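The acceleration the text describes follows from constant gravity: the distance fallen grows with the square of time, so successive equal time steps cover distances in the ratio 1 : 3 : 5 : 7 (Galileo's odd-number rule). A minimal sketch, with arbitrary units:

```python
# Commonsense physics for a bouncing ball: under constant gravity the drop
# distance from rest is 0.5 * g * t^2, so frame-to-frame displacement grows
# on the way down and shrinks on the way up. Units here are arbitrary.
def drop_distance(g, t):
    """Distance fallen from rest after time t under acceleration g."""
    return 0.5 * g * t * t

# Distances covered in successive equal time steps (g chosen as 2 so the
# numbers come out whole): they follow the odd-number ratio 1 : 3 : 5 : 7.
steps = [drop_distance(2, t + 1) - drop_distance(2, t) for t in range(4)]
```

Spacing the ball's frame positions by these growing (or, on the upward leg, shrinking) steps is what gives the bounce its "ring of truth."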

Video
The term "video" commonly refers to several related types of carrier formats, which can
be either digital (DVD, QuickTime, Ogg) or analog videotape (VHS, Betamax).

Television broadcasting and home movies have long been the traditional application of
video technology

The Internet has made possible the rise of compressed video file formats to syndicate
video files to a global audience.

Video is also used in scientific, engineering, manufacturing, and security applications.

Video-Consideration
To get the highest video performance, we should

 Use video compression hardware to allow you to work with full-screen, full-motion
video.

 Use a sophisticated audio board to allow you to use CD-quality sounds.

 Install a super-fast RAID (Redundant Array of Independent Disks) system that will
support high-speed data transfer rates.

Video-Basics

 Light passes through the camera lens and is converted to an electronic signal by a Charge
Coupled Device (CCD)

 Most consumer-grade cameras have a single CCD.

 Professional-grade cameras have three CCDs, one each for red, green, and blue color
information (RGB).

 The output of the CCD is processed by the camera into a signal containing three channels of
color information and synchronization pulse (sync).

Video-Compression
Because of the large sizes associated with video files, video compression/decompression
programs (codecs) have been developed.

 Lossless compression → preserves the image throughout the compress/decompress
process.

 Lossy compression → eliminates some of the data in the image (greater compression
ratios); usually used for video, as some drop in quality is not noticeable in moving
images.

 The trade-off is file size versus image quality.

 Common compression standards

 MPEG (Motion Picture Experts Group)

 JPEG (Joint Photographic Experts Groups)
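As a concrete illustration of the lossless case, a toy run-length encoder (not an actual video codec; the pixel row below is invented) shows what "preserves the image throughout the compress/decompress process" means: decoding restores the input exactly.

```python
# A toy lossless scheme: run-length encoding of a row of pixel values.
# Real video codecs are far more sophisticated, but the round-trip
# property demonstrated here is what "lossless" guarantees.
def rle_encode(pixels):
    """Compress a pixel row into (value, run_length) pairs."""
    runs, prev, count = [], pixels[0], 1
    for p in pixels[1:]:
        if p == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = p, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original row."""
    return [value for value, count in runs for _ in range(count)]

row = [0, 0, 0, 255, 255, 0]
assert rle_decode(rle_encode(row)) == row   # lossless round trip
```

A lossy codec, by contrast, would deliberately discard some of `row`'s detail to achieve a better compression ratio, accepting that the decoded image only approximates the original.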

Using Video
Carefully planned, well-executed video clips can make a dramatic difference
in a multimedia project. Before deciding whether to add
video to your project, however, it is essential to have an understanding of
the medium, its limitations, and its costs. This chapter provides a foundation
to help you understand how video works, the different formats and
standards for recording and playing video, and the differences between
computer and television video. The equipment needed to shoot and edit
video, as well as tips for adding video to your project, are also covered.
Video standards and formats are still being refined as transport, storage,
compression, and display technologies take shape in laboratories and
in the marketplace, and while equipment and post-processing evolve from
their analog beginnings to become fully digital, from capture to display.
Working with multimedia video today can be like a Mojave Desert camping
trip: you may pitch your tent on comfortable high ground and find
that overnight the shifting sands have buried both your approach and your
investment.
Of all the multimedia elements, video places the highest performance
demand on your computer or device, and on its memory and storage.

Digital Video
Digital video is an electronic representation of moving visual images (video) in the form of
encoded digital data. This is in contrast to analog video, which represents moving visual images in
the form of analog signals. Digital video comprises a series of digital images displayed in rapid
succession.

Digital video combines audio and visual data into a production. Digital video has many
advantages, such as easy copying, multicasting, sharing, and storage. Video recorded on tape can be
used on a computer in a media player. Digital video is made of images displayed rapidly at frequencies
of 15, 24, 30, or 60 frames per second. There is a saying, "A picture is worth a thousand words."
For digital video, a video represents a million of those words strung together.

Digital video was first introduced commercially in 1986 with the Sony D1 format,[1] which
recorded an uncompressed standard-definition component video signal in digital form. In addition to
uncompressed formats, popular compressed digital video formats today include H.264 and MPEG-4.
Modern interconnect standards used for playback of digital video include HDMI, DisplayPort, Digital
Visual Interface (DVI) and serial digital interface (SDI).

Digital video can be copied and reproduced with no degradation in quality. In contrast, when
analog sources are copied, they experience generation loss. Digital video can be stored on digital
media such as Blu-ray Disc, on computer data storage, or streamed over the Internet to end
users who watch content on a desktop computer screen or a digital smart TV. Today, digital video
content such as TV shows and movies also includes a digital audio soundtrack.

Analog versus Digital


Digital video has supplanted analog video as the method of choice for making video
for multimedia use. While broadcast stations and professional production and
post-production houses remain greatly invested in analog video hardware (according to Sony,
there are more than 350,000 Betacam SP devices in use today), digital video gear
produces excellent finished products at a fraction of the cost of analog. A digital
camcorder directly connected to a computer workstation eliminates the image-degrading
analog-to-digital conversion step typically performed by expensive video capture cards, and
brings the power of nonlinear video editing and production to everyday users.
Broadcast Video Standards

Four broadcast and video standards and recording formats are commonly in use
around the world: NTSC, PAL, SECAM, and HDTV. Because these standards and formats are
not easily interchangeable, it is important to know where your multimedia project will be
used.

NTSC

The United States, Japan, and many other countries use a system for broadcasting and
displaying video that is based upon the specifications set forth by the 1952

National Television Standards Committee. These standards define a method for

encoding information into the electronic signal that ultimately creates a television picture.
As specified by the NTSC standard, a single frame of video is made up of 525 horizontal scan
lines drawn onto the inside face of a phosphor-coated picture tube every 1/30th of a second
by a fast-moving electron beam.

PAL

The Phase Alternating Line (PAL) system is used in the United Kingdom, Europe, Australia,
and South Africa. PAL is an integrated method of adding color to a black-and-white
television signal that paints 625 lines at a frame rate of 25 frames per second.

SECAM

The Sequential Color with Memory (SECAM) system is used in France, Russia, and a few
other countries. Although SECAM is a 625-line, 50 Hz system, it differs greatly from both the
NTSC and the PAL color systems in its basic technology and broadcast method.

HDTV

High Definition Television (HDTV) provides high resolution in a 16:9 aspect ratio (see
following Figure). This aspect ratio allows the viewing of Cinemascope and Panavision
movies. There is contention between the broadcast and computer industries about whether
to use interlacing or progressive-scan technologies.
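As a rough sanity check of these standards' numbers, the scan-line throughput implied by the nominal line counts and frame rates can be computed (this sketch ignores exact broadcast field rates such as NTSC's 59.94 Hz):

```python
# Nominal scan-line throughput of the two main analog broadcast standards:
# lines per frame multiplied by frames per second.
standards = {
    "NTSC": {"lines": 525, "fps": 30},
    "PAL":  {"lines": 625, "fps": 25},
}
lines_per_second = {name: s["lines"] * s["fps"] for name, s in standards.items()}
```

Despite the different line counts and frame rates, the two systems end up drawing a nearly identical number of lines per second, which is one reason neither is trivially convertible to the other.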

Digital Video Containers


When talking about video file types, most people are referring to file containers. A container is
the file that contains your video, audio streams, and any closed caption files as well.

It’s common for a container to be called a file extension since they are often seen at the end of file
names (e.g. filename.mp4). Popular video (visuals only) containers include .mp4, .mov, or .avi, but
there are many more.

Audio actually uses its own codecs. Oftentimes, your video camera will determine the container for
your original video file as well. Our Canon DSLRs record .mov to the memory card, however, our
Canon camcorders can do AVCHD or MP4, which can be changed in the camera settings menu.

Modern video editors will be happy to accept all kinds of containers, especially from well-known
camera brands.

Codecs (for compression)


You may have heard the phrase video codec when referring to video files.

A codec is simply the software that compresses your video so it can be stored and played back. It can
digitize and compress an audio or video signal for transmission and convert an incoming signal to
audio or video for reception.

While the word “compression” can conjure images of pixelated video, the process is both necessary
and efficient with modern digital cameras. It gives you much smaller file sizes with minimal quality
loss. Compression is your friend! In order to compress a video, your file must also have a
corresponding codec.

The codec of your original video file is often determined by your camera or screen recorder, which
you may or may not have control over in your camera settings.

The most common codec is H.264, which is often used for high-definition digital video and
distribution of video content. It is also important to note the bit rate, which refers to the amount of
data stored for each second of media that is played.

The higher the bit rate, the less compression, which results in higher quality overall. Be aware that
the higher the bit rate, the larger the file size. Larger files on their own may be no problem, but
when multiplied by the size of the audience, it can cause bandwidth problems that affect internet
service providers and users.
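The bit-rate arithmetic above can be sketched as follows; the 8 Mbps figure and one-minute duration are hypothetical, not tied to any particular codec:

```python
# File size follows directly from bit rate: size = bit rate * duration / 8
# (dividing by 8 converts bits to bytes, so megabits/s * s -> megabytes).
def file_size_mb(bitrate_mbps, seconds):
    """Approximate file size in megabytes for a given bit rate and duration."""
    return bitrate_mbps * seconds / 8

size = file_size_mb(8, 60)   # a one-minute clip at 8 Mbps is about 60 MB
```

Multiplying that per-viewer size by an audience of thousands of simultaneous streams is exactly the bandwidth concern the paragraph above describes.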

Choosing a container for exporting


When it’s time to export your video after editing, you’ll most likely be tasked with choosing a file
type (container). When exporting a video for the web, MP4 will be your best bet!

Occasionally, you may need to use a different container depending on where you plan to host your
video. If you’re creating a video for a client, always check to see if they have any specific file type
needs. If you’re unsure, an MP4 will work for just about any platform.

MPEG (Moving Picture Experts Group) standards

The MPEG standards are an evolving set of standards for video and audio compression and
for multimedia delivery developed by the Moving Picture Experts Group (MPEG).

MPEG-1 was designed for coding progressive video at a transmission rate of about 1.5 million bits
per second. It was designed specifically for Video-CD and CD-i media. MPEG-1 audio layer-3 (MP3)
has also evolved from early MPEG work.

MPEG-2 was designed for coding interlaced images at transmission rates above 4 million bits per
second. MPEG-2 is used for digital TV broadcast and DVD. An MPEG-2 player can handle MPEG-1
data as well.

MPEG-1 and -2 define techniques for compressing digital video by factors varying from 25:1 to 50:1.
The compression is achieved using five different compression techniques:

1. The use of a frequency-based transform called Discrete Cosine Transform (DCT).

2. Quantization, a technique for selectively discarding information that can be acceptably
lost from visual information (the source of the "lossy" in lossy compression).

3. Huffman coding, a technique of lossless compression that uses code tables based on statistics
about the encoded data.

4. Motion compensated predictive coding, in which the differences in what has changed between
an image and its preceding image are calculated and only the differences are encoded.

5. Bi-directional prediction, in which some images are predicted from the pictures immediately
preceding and following the image.

The first three techniques are also used in JPEG file compression.
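A minimal sketch of the quantization step (technique 2 above): dividing transform coefficients by a step size and rounding discards fine detail, which is the lossy part of the pipeline. The coefficient values and step size are invented, and real MPEG/JPEG quantization works on 8×8 DCT blocks with a full quantization matrix rather than a single step.

```python
# Toy quantization/dequantization: dividing by a step size and rounding
# collapses small coefficients to zero -- information that is lost for good.
def quantize(coeffs, step):
    """Map coefficients to small integers (the lossy step)."""
    return [round(c / step) for c in coeffs]

def dequantize(qcoeffs, step):
    """Approximate reconstruction by scaling the integers back up."""
    return [q * step for q in qcoeffs]

coeffs = [130, 62, -7, 3, 1, 0, 0, 0]           # invented DCT-like values
restored = dequantize(quantize(coeffs, 10), 10)  # small values become 0
```

The long runs of zeros produced by quantization are what make the subsequent Huffman coding stage (technique 3) so effective.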

A proposed MPEG-3 standard, intended for High Definition TV (HDTV), was merged with the MPEG-2
standard when it became apparent that the MPEG-2 standard met the HDTV requirements.
MPEG-4 is a much more ambitious standard and addresses speech and video
synthesis, fractal geometry, computer visualization, and an artificial intelligence (AI) approach to
reconstructing images. MPEG-4 addresses a standard way for authors to create and define the media
objects in a multimedia presentation, how these can be synchronized and related to each other in
transmission, and how users are to be able to interact with the media objects.

MPEG-21 provides a larger, architectural framework for the creation and delivery of multimedia. It
defines seven key elements:

 Digital item declaration

 Digital item identification and description

 Content handling and usage

 Intellectual property management and protection

 Terminals and networks

 Content representation

 Event reporting

The details of various parts of the MPEG-21 framework are in various draft stages.

Obtaining Video Clips


 If using analog video, we need to convert it to digital format first (in other words, need to
digitize the analog video first).

 Source for analog video can come from:

 Existing video content / clips

 beware of licensing and copyright issues

 Shoot new footage (i.e., shoot your own video)

 Ask permission from all the persons who appear or speak, as well as the permission
for the audio or music used.

Shooting and Editing Video


To add full-screen, full-motion video to your multimedia project, you will need to invest
in specialized hardware and software or purchase the services of a professional video production
studio. In many cases, a professional studio will also provide editing tools and post-production
capabilities that you cannot duplicate with your Macintosh or PC.

Video Tips

A useful tool easily implemented in most digital video editing applications is “blue screen,”
“Ultimatte,” or “chroma key” editing. Blue screen is a popular technique for making
multimedia titles because expensive sets are not required. Incredible backgrounds
can be generated using 3-D modeling and graphic software, and one or more actors,
vehicles, or other objects can be neatly layered onto that background. Applications such as
VideoShop, Premiere, Final Cut Pro, and iMovie provide this capability.

In S-VHS video, color and luminance information are kept on two separate tracks. The
result is a definite improvement in picture quality. This standard is also used in Hi-8. Still, if
your ultimate goal is to have your project accepted by broadcast stations, this would not be
the best choice.

Component (YUV)

In the early 1980s, Sony began to experiment with a new portable professional video
format based on Betamax. Panasonic has developed its own standard based on a similar
technology, called “MII.” Betacam SP has become the industry standard for professional
video field recording. This format may soon be eclipsed by a new digital version called
“Digital Betacam.”

Digital Video

Full integration of motion video on computers eliminates the analog television form of
video from the multimedia delivery platform. If a video clip is stored as data on a hard disk,
CD-ROM, or other mass-storage device, that clip can be played back on the computer’s
monitor without overlay boards, videodisk players, or second monitors. This playback of
digital video is accomplished using software architectures such as QuickTime or AVI. As a
multimedia producer or developer, you may need to convert video source material from its
still-common analog form (videotape) to a digital form manageable by the end user’s
computer system. So an understanding of analog video and some special hardware must
remain in your multimedia toolbox.

Analog to digital conversion of video can be accomplished using the video overlay
hardware described above, or it can be delivered directly to disk using FireWire cables.
Repetitively digitizing a full-screen color video image every 1/30 second and storing it to disk
or RAM severely taxes both Macintosh and PC processing capabilities: special hardware,
compression firmware, and massive amounts of digital storage space are required.

Video Compression

To digitize and store a 10-second clip of full-motion video in your computer requires
transfer of an enormous amount of data in a very short amount of time. Reproducing just one
frame of digital component video at 24 bits requires almost 1 MB of computer data; 30
seconds of video will fill a gigabyte hard disk. Full-size, full-motion video requires that the
computer deliver data at about 30MB per second. This overwhelming technological bottleneck is
overcome using digital video compression schemes or codecs (coders/decoders). A codec is the
algorithm used to compress a video for delivery and then decode it in real-time for fast playback.
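The figures in this paragraph can be checked with a little arithmetic; a 640 × 480 frame is assumed here, since the text does not state a resolution:

```python
# Uncompressed frame size at 24-bit color, and the sustained data rate
# implied by 30 frames per second (640 x 480 resolution assumed).
width, height, bytes_per_pixel = 640, 480, 3       # 24 bits = 3 bytes/pixel
frame_bytes = width * height * bytes_per_pixel      # ~0.9 MB per frame
rate_mb_per_s = frame_bytes * 30 / (1024 * 1024)    # roughly 26 MB per second
```

That sustained rate, on the order of tens of megabytes per second, is the bottleneck that makes codecs indispensable for delivering full-motion video.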

Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak,
Sorenson, ClearVideo, RealVideo, and VDOwave are available to compress digital video
information. Compression schemes use Discrete Cosine Transform (DCT), an encoding algorithm
that quantifies the human eye’s ability to detect color and image distortion. All of these codecs
employ lossy compression algorithms.

In addition to compressing video data, streaming technologies are being implemented to
provide reasonable-quality low-bandwidth video on the Web. Microsoft, RealNetworks,
VXtreme, VDOnet, Xing, Precept, Cubic, Motorola, Viva, Vosaic, and Oracle are actively pursuing
the commercialization of streaming technology on the Web.

QuickTime, Apple’s software-based architecture for seamlessly integrating sound, animation,
text, and video (data that changes over time), is often thought of as a compression standard, but it
is really much more than that.

MPEG

The MPEG standard has been developed by the Moving Picture Experts Group, a
working group convened by the International Standards Organization (ISO) and the International
Electro-technical Commission (IEC) to create standards for digital representation of moving
pictures and associated audio and other data. MPEG1 and MPEG2 are the current standards.
Using MPEG-1, you can deliver 1.2 Mbps of video and 250 Kbps of two-channel stereo audio
using CD-ROM technology. MPEG-2, a completely different system from MPEG-1, requires higher
data rates (3 to 15 Mbps) but delivers higher image resolution, picture quality, interlaced video
formats, multiresolution scalability, and multichannel audio features.

DVI/Indeo

DVI is a proprietary, programmable compression/decompression technology based on the
Intel i750 chip set. This hardware consists of two VLSI (Very Large Scale Integration) chips to
separate the image processing and display functions.

Two levels of compression and decompression are provided by DVI: Production Level Video
(PLV) and Real Time Video (RTV). PLV and RTV both use variable compression rates. DVI’s
algorithms can compress video images at ratios between 80:1 and 160:1. DVI will play back video
in full-frame size and in full color at 30 frames per second.

Optimizing Video Files for CD-ROM

CD-ROMs provide an excellent distribution medium for computer-based video: they are
inexpensive to mass produce, and they can store great quantities of information. CD-ROM players
offer slow data transfer rates, but adequate video transfer can be achieved by taking care to
properly prepare your digital video files.

Limit the amount of synchronization required between the video and audio. With
Microsoft’s AVI files, the audio and video data are already interleaved, so this is not a necessity,
but with QuickTime files, you should “flatten” your movie. Flattening means you interleave
the audio and video segments together.

Use regularly spaced key frames, 10 to 15 frames apart, and temporal compression
can correct for seek time delays. Seek time is how long it takes the CD-ROM player to locate
specific data on the CD-ROM disc. Even fast 56x drives must spin up, causing some delay (and
occasionally substantial noise).

The size of the video window and the frame rate you specify dramatically affect
performance. In QuickTime, 20 frames per second played in a 160X120-pixel window is
equivalent to playing 10 frames per second in a 320X240 window. The more data that has to be
decompressed and transferred from the CD-ROM to the screen, the slower the playback.

Video File Formats

These are the most common digital video formats and their most frequent uses.

MP4
MP4 (MPEG-4) is the most common type of video file format. Apple’s preferred format, MP4 can
play on most other devices as well. It uses the MPEG-4 encoding algorithm to store video and audio
files and text, but it offers lower definition than some others. MP4 works well for videos posted on
YouTube, Facebook, Twitter, and Instagram.

MOV
MOV (QuickTime Movie) stores high-quality video, audio, and effects, but these files tend to be quite
large. Developed for QuickTime Player by Apple, MOV files use MPEG-4 encoding to play in
QuickTime for Windows. MOV is supported by Facebook and YouTube, and it works well for TV
viewing.

WMV
WMV (Windows Media Video) files offer good video quality and large file size like MOV. Microsoft
developed WMV for Windows Media Player. YouTube supports WMV, and Apple users can view
these videos, but they must download Windows Media Player for Apple. Keep in mind you can’t
select your own aspect ratio in WMV.

AVI
AVI (Audio Video Interleave) works with nearly every web browser on Windows, Mac, and Linux
machines. Developed by Microsoft, AVI offers the highest quality but also large file sizes. It is
supported by YouTube and works well for TV viewing.
AVCHD
Advanced Video Coding High Definition is specifically for high-definition video. Built for Panasonic
and Sony digital camcorders, these files compress for easy storage without losing definition.

FLV, F4V, and SWF


Flash video formats FLV, F4V, and SWF (Shockwave Flash) are designed for Flash Player, but they’re
commonly used to stream video on YouTube. Flash is not supported by iOS devices.

MKV
Developed in Russia, Matroska Multimedia Container format is free and open source. It supports
nearly every codec, but it is not itself supported by many programs. MKV is a smart choice if you
expect your video to be viewed on a TV or computer using an open-source media player like VLC or
Miro.

WEBM or HTML5
These formats are best for videos embedded on your personal or business website. They are small
files, so they load quickly and stream easily.

MPEG-2
If you want to burn your video to a DVD, MPEG-2 with an H.262 codec is the way to go.
